CN104199556B - Information processing method and device - Google Patents

Information processing method and device

Info

Publication number
CN104199556B
CN104199556B (application CN201410486659.2A)
Authority
CN
China
Prior art keywords
distance
action
virtual plane
display
operation plane
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410486659.2A
Other languages
Chinese (zh)
Other versions
CN104199556A (en)
Inventor
温泽中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410486659.2A priority Critical patent/CN104199556B/en
Publication of CN104199556A publication Critical patent/CN104199556A/en
Application granted granted Critical
Publication of CN104199556B publication Critical patent/CN104199556B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides an information processing method and device. The information processing method includes: displaying content using a head-mounted display technique; obtaining a first distance, along a first direction, between a first reference plane and the virtual plane on which the content is displayed; capturing, via an image acquisition device, a first action of an operating body in the first direction; obtaining a second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs; and determining, from the first distance and the second distance, the positional relationship between the operation plane on which the first action occurs and the virtual plane on which the content is displayed, thereby completing collision detection between the operating body and the virtual plane. Compared with the prior art, embodiments of the present invention simplify collision detection by performing it with the first and second distances, reducing the amount of data used in the process and thus improving detection efficiency.

Description

Information processing method and device
Technical field
The present invention relates to the field of recognition technology, and in particular to an information processing method and device.
Background technology
Augmented reality (AR) is a new technology developed on the basis of virtual reality. A computer system generates virtual objects and superimposes them on a real scene, thereby "augmenting" reality.
At present, computer systems generate virtual objects using three-dimensional (3D) model construction: a virtual object is built as a 3D model and rendered into the real scene. When a user's finger touches the virtual object, the computer system also builds a 3D model of the finger from its position in space. If the finger's 3D model and the virtual object's 3D model intersect in space, the finger is deemed to have touched the virtual object, realizing collision detection between the finger and the object.
However, detecting whether the finger and the virtual object collide via 3D model construction requires building 3D models of both, and then judging whether the two models intersect in space. The complexity of 3D model construction raises the complexity of collision detection, and the construction cost of the models, together with the data volume they require, lowers detection efficiency.
Summary of the invention
In view of this, embodiments of the present invention provide an information processing method and device applied to a wearable electronic device, which simplify collision detection and improve detection efficiency.
To achieve the above object, the present invention provides the following technical solutions:
An embodiment of the present invention provides an information processing method applied to a wearable electronic device that includes an image acquisition device. The information processing method includes:
displaying content using a head-mounted display technique;
obtaining a first distance, along a first direction, between a first reference plane and the virtual plane on which the content is displayed, where the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction;
capturing, via the image acquisition device, a first action of an operating body in the first direction;
obtaining a second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs;
determining, from the first distance and the second distance, the positional relationship between the operation plane on which the first action occurs and the virtual plane on which the content is displayed.
Preferably, when it is determined that the operation plane on which the first action occurs is the virtual plane on which the content is displayed, the method further includes:
obtaining, via the image acquisition device, a first image of the operating body performing the first action;
identifying the first image using an image recognition technique to obtain the position formed on the virtual plane when the operating body performs the first action;
obtaining and executing, based on the position formed on the virtual plane by the first action, a first instruction corresponding to the first action;
displaying, using the head-mounted display technique, the content obtained after executing the first instruction on the virtual plane.
Preferably, obtaining the second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs includes:
taking as the second distance the focusing distance to the operation plane obtained via the autofocus technique of the image acquisition device.
Preferably, the method further includes: obtaining a third distance, along the first direction, between the image acquisition device and the first reference plane;
and obtaining a fourth distance, along the first direction, between the image acquisition device and the operation plane, the fourth distance being the focusing distance to the operation plane obtained via the autofocus technique of the image acquisition device;
the second distance being either the sum of the third distance and the fourth distance, or the difference between the fourth distance and the third distance.
Preferably, determining, from the first distance and the second distance, the positional relationship between the operation plane on which the first action occurs and the virtual plane on which the content is displayed includes:
comparing the first distance with the second distance to obtain a comparison result;
when the comparison result shows that the first distance equals the second distance, determining that the operation plane on which the first action occurs is the virtual plane on which the content is displayed.
An embodiment of the present invention also provides an information processing device applied to a wearable electronic device that includes an image acquisition device. The information processing device includes:
a display unit, configured to display content using a head-mounted display technique;
a first acquisition unit, configured to obtain a first distance, along a first direction, between a first reference plane and the virtual plane on which the content is displayed, where the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction;
a capture unit, configured to capture, via the image acquisition device, a first action of an operating body in the first direction;
a second acquisition unit, configured to obtain a second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs;
a determining unit, configured to determine, from the first distance and the second distance, the positional relationship between the operation plane on which the first action occurs and the virtual plane on which the content is displayed.
Preferably, the device further includes:
a third acquisition unit, configured to obtain, via the image acquisition device, a first image of the operating body performing the first action;
a recognition unit, configured to identify the first image using an image recognition technique to obtain the position formed on the virtual plane when the operating body performs the first action;
an execution unit, configured to obtain and execute, based on the position formed on the virtual plane by the first action, a first instruction corresponding to the first action;
the display unit being further configured to display, using the head-mounted display technique, the content obtained after executing the first instruction on the virtual plane.
Preferably, the second acquisition unit obtains the second distance by taking as the second distance the focusing distance to the operation plane obtained via the autofocus technique of the image acquisition device.
Preferably, the second acquisition unit obtains the second distance by: obtaining a third distance, along the first direction, between the image acquisition device and the first reference plane, and a fourth distance, along the first direction, between the image acquisition device and the operation plane; and taking as the second distance either the sum of the third and fourth distances or the difference between the fourth distance and the third distance, the fourth distance being the focusing distance to the operation plane obtained via the autofocus technique of the image acquisition device.
Preferably, the determining unit determines, from the first distance and the second distance, the positional relationship between the operation plane on which the first action occurs and the virtual plane on which the content is displayed by:
comparing the first distance with the second distance to obtain a comparison result;
when the comparison result shows that the first distance equals the second distance, determining that the operation plane on which the first action occurs is the virtual plane on which the content is displayed.
It can be seen from the above technical solutions that the information processing method and device provided by embodiments of the present invention first obtain the first distance, along the first direction, between the first reference plane and the virtual plane on which the content is displayed, and the second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs; they then determine, from the first and second distances, the positional relationship between the operation plane and the virtual plane, completing collision detection between the operating body and the virtual plane. Compared with the prior art, embodiments of the present invention simplify collision detection by performing it with the first and second distances, reducing the amount of data used in the process and thus improving detection efficiency.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an information processing method provided by an embodiment of the present invention;
Fig. 2 is a structural schematic diagram of a wearable electronic device provided by an embodiment of the present invention;
Fig. 3 is another structural schematic diagram of a wearable electronic device provided by an embodiment of the present invention;
Fig. 4 is another flowchart of an information processing method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of a virtual interface provided by an embodiment of the present invention;
Fig. 6 is a structural schematic diagram of an information processing device provided by an embodiment of the present invention;
Fig. 7 is another structural schematic diagram of an information processing device provided by an embodiment of the present invention.
Detailed description
The central idea of the information processing method and device provided by embodiments of the present invention is to replace the existing 3D-model-based collision detection with distance measurement, reducing the amount of data in the collision detection process and thereby improving detection efficiency.
To help those skilled in the art better understand the present invention, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art without creative effort based on the embodiments of the present invention fall within the protection scope of the present invention.
Referring to Fig. 1, which shows a flowchart of an information processing method provided by an embodiment of the present invention. In this embodiment, the information processing method can be applied to a wearable electronic device that includes an image acquisition device used to capture images in a first direction.
As shown in Fig. 2, the wearable electronic device may be a pair of wearable glasses with an image acquisition device 2 (such as a camera) mounted on one temple 1. Taking the direction the human eye faces as the first direction, the image acquisition device 2 captures images of the scene in front of the eye. The information processing method of Fig. 1 as applied to this wearable electronic device may include the following steps:
101: Display content using a head-mounted display technique. The head-mounted display technique can use the principle of optical reflection to project the display content onto a virtual plane in the first direction; the virtual plane can be level with the human eye so that the eye can view the content, as shown in Fig. 3. Fig. 3 shows the wearable glasses of Fig. 2 using the head-mounted display technique to present content on a virtual plane in front of the eye.
To use the head-mounted display technique, a HUD (Head Up Display) 3 can also be mounted on the wearable glasses shown in Fig. 2, as shown in Fig. 3. The HUD 3 displays the content using the head-mounted display technique, and the axis of the HUD 3 is parallel to the axis of the image acquisition device 2.
102: Obtain a first distance, along the first direction, between a first reference plane and the virtual plane on which the content is displayed, where the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction.
In this embodiment, the first reference plane is the reference from which the first distance and the second distance are measured; it may be the plane of the lenses of the wearable glasses shown in Fig. 2. Which plane is chosen can depend on how the HUD 3 is arranged in the wearable electronic device.
For example, when the focal point of the HUD 3 lies in the lens plane of the wearable glasses shown in Fig. 2, the lens plane can be used directly as the first reference plane; when the focal point of the HUD 3 does not lie in the lens plane, the plane containing the focal point, which is parallel to the lens plane, is used as the first reference plane.
When the plane containing the focal point of the HUD 3 is the first reference plane, the first distance can be the focal length at which the head-mounted display technique forms the virtual plane. The focal length is the distance from the optical center of the lens to the point where the light converges, and the virtual plane forms at that distance, so the focal length used when the head-mounted display technique forms the virtual plane can serve as the first distance. When the HUD 3 performs the display, the first distance is the focal length set when the HUD 3 was designed.
If the plane containing the focal point of the HUD 3 is not the first reference plane, a fifth distance between the focal point of the HUD 3 and the first reference plane can first be obtained; then, according to the positional relationship between the focal-point plane and the first reference plane, the first distance is obtained by combining the fifth distance and the focal length. Specifically:
In the first direction, when the focal-point plane of the HUD 3 lies between the first reference plane and the virtual plane, the first distance is the sum of the fifth distance and the focal length; when the first reference plane lies between the focal-point plane and the virtual plane, the first distance is the difference between the focal length and the fifth distance.
103: Capture, via the image acquisition device, a first action of an operating body in the first direction.
104: Obtain a second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs.
In this embodiment, the operation plane can be the plane containing the operating point of the operating body when it performs the first action, this plane being parallel to the first reference plane. The autofocus technique of the image acquisition device can capture the operating point of the first action, and the focusing distance to the operation plane can be obtained when the operating point is captured. When the focal point of the image acquisition device lies in the first reference plane, that focusing distance is the second distance.
If the focal point of the image acquisition device does not lie in the first reference plane, a third distance, along the first direction, between the image acquisition device and the first reference plane, and a fourth distance, along the first direction, between the image acquisition device and the operation plane, must first be obtained; the second distance is then derived from the third and fourth distances. Specifically:
In the first direction, if the plane containing the focal point of the image acquisition device lies between the first reference plane and the operation plane, the second distance is the sum of the third distance and the fourth distance; if the first reference plane lies between the focal-point plane and the operation plane, the second distance is the difference between the fourth distance and the third distance.
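The sum-or-difference rule above can be sketched as follows. This is an illustrative reconstruction, not code from the patent; the names (`third_distance`, `fourth_distance`, `focus_between`) are invented for clarity, and distances are assumed to be in a common unit along the first direction.

```python
def second_distance(third_distance: float, fourth_distance: float,
                    focus_between: bool) -> float:
    """Combine the camera-to-reference offset (third distance) with the
    autofocus distance to the operation plane (fourth distance).

    focus_between is True when the camera's focal-point plane lies between
    the first reference plane and the operation plane (distances add), and
    False when the first reference plane lies between the focal-point plane
    and the operation plane (the third distance is subtracted).
    """
    if focus_between:
        return third_distance + fourth_distance
    return fourth_distance - third_distance
```

The same pattern applies, with the fifth distance and the focal length in place of the third and fourth distances, to the first-distance computation in step 102.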
105: Determine, from the first distance and the second distance, the positional relationship between the operation plane on which the first action occurs and the virtual plane on which the content is displayed.
When it is determined from the first and second distances that the operation plane on which the first action occurs is the virtual plane on which the content is displayed, the first action operates directly on the virtual plane, and the operating body can be judged to have collided with the virtual plane; when it is determined that the operation plane is not the virtual plane, the first action does not operate directly on the virtual plane, and the operating body can be judged not to have collided with it.
The positional relationship can be determined from the first and second distances as follows: when the difference between the first distance and the second distance falls within a first preset range, the operation plane on which the first action occurs is determined to be the virtual plane on which the content is displayed; when the difference falls outside the first preset range, the operation plane is determined not to be the virtual plane.
Alternatively, the first distance and the second distance are compared to obtain a comparison result; when the comparison result shows that the two distances are equal, the operation plane on which the first action occurs is determined to be the virtual plane; when they differ, the operation plane is determined not to be the virtual plane.
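Both variants of the comparison in step 105, the preset-range test and the exact-equality test, can be sketched together. A minimal illustration, assuming both distances are measured along the first direction in the same unit; the function name and the `tolerance` parameter are invented for the sketch.

```python
def collides(first_distance: float, second_distance: float,
             tolerance: float = 0.0) -> bool:
    """Return True when the operation plane coincides with the virtual
    plane, i.e. the operating body is judged to touch the displayed content.

    tolerance models the "first preset range" variant; tolerance == 0.0
    reduces to the exact-equality comparison variant.
    """
    return abs(first_distance - second_distance) <= tolerance
```

A single subtraction and comparison replaces the 3D-model intersection test of the prior art, which is the source of the efficiency claim.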
It can be seen from the above technical solutions that the information processing method provided by this embodiment first obtains the first distance, along the first direction, between the first reference plane and the virtual plane on which the content is displayed, and the second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs; it then determines, from the first and second distances, the positional relationship between the operation plane and the virtual plane, completing collision detection between the operating body and the virtual plane. Compared with the prior art, this embodiment simplifies collision detection by performing it with the first and second distances, reducing the amount of data used in the process and thus improving detection efficiency.
Referring to Fig. 4, which shows another flowchart of an information processing method provided by an embodiment of the present invention, detailing how, when the operation plane on which the first action occurs is determined to be the virtual plane, a first instruction corresponding to the first action is executed on the display content. The method may include the following steps:
101: Display content using a head-mounted display technique.
102: Obtain a first distance, along the first direction, between a first reference plane and the virtual plane on which the content is displayed, where the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction.
103: Capture, via the image acquisition device, a first action of an operating body in the first direction.
104: Obtain a second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs.
105: Determine, from the first distance and the second distance, the positional relationship between the operation plane on which the first action occurs and the virtual plane on which the content is displayed.
106: When it is determined that the operation plane on which the first action occurs is the virtual plane on which the content is displayed, obtain, via the image acquisition device, a first image of the operating body performing the first action.
When the operation plane is determined to be the virtual plane, the first action operates directly on the virtual plane, and the operation corresponding to the first action must now be performed on the content displayed there.
107: Identify the first image using an image recognition technique to obtain the position formed on the virtual plane when the operating body performs the first action. In this embodiment, the first action of the operating body is an operation on the content displayed at some position on the virtual plane; therefore, after the first image is obtained, it must be identified using an image recognition technique, and the position formed on the virtual plane when the first action is performed is obtained from the correspondence between pixels in the first image and coordinates.
108: Obtain and execute, based on the position formed on the virtual plane by the first action, a first instruction corresponding to the first action.
The position formed on the virtual plane by the first action determines the display content currently being operated on, and the first instruction is then executed on that content. As shown in Fig. 5, the virtual plane displays content containing virtual controls, and the first action is a click on one of those controls; based on the position formed on the virtual plane, the virtual control currently being operated can be determined, the first instruction matching that control's instruction is obtained, and the first instruction is executed on the currently displayed content.
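The position-to-control lookup in step 108 can be illustrated with a simple hit test. The control names and the rectangle format below are assumptions made for the sketch; the patent does not specify how controls are laid out on the virtual plane.

```python
from typing import Optional

# Hypothetical virtual controls on the virtual plane:
# name -> (x_min, y_min, x_max, y_max) in virtual-plane coordinates.
CONTROLS = {
    "ok_button": (0, 0, 100, 40),
    "cancel_button": (120, 0, 220, 40),
}


def control_at(x: float, y: float) -> Optional[str]:
    """Return the virtual control whose rectangle contains the position
    (x, y) recognized from the first image, or None if no control is hit."""
    for name, (x0, y0, x1, y1) in CONTROLS.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None
```

Once a control is identified, the first instruction associated with it is executed and the refreshed content is re-displayed on the virtual plane (step 109).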
109: Display, using the head-mounted display technique, the content obtained after executing the first instruction on the virtual plane.
Corresponding to the above method embodiments, an embodiment of the present invention also provides an information processing device applied to a wearable electronic device that includes an image acquisition device. The structure of the information processing device 10, shown in Fig. 6, includes: a display unit 11, a first acquisition unit 12, a capture unit 13, a second acquisition unit 14, and a determining unit 15.
The display unit 11 is configured to display content using a head-mounted display technique. The head-mounted display technique can use the principle of optical reflection to project the display content onto a virtual plane in the first direction; the virtual plane can be level with the human eye so that the eye can view the content. In this embodiment, the display unit 11 can display the content using the head-mounted display technique through the HUD in the wearable electronic device.
The first acquisition unit 12 is configured to obtain a first distance, along the first direction, between a first reference plane and the virtual plane on which the content is displayed, where the axis of the first reference plane in the first direction is parallel to the axis of the image acquisition device in the first direction.
In this embodiment, the first reference plane is the reference from which the first distance and the second distance are measured; it may be the plane of the lenses of the wearable glasses shown in Fig. 2. Which plane is chosen can depend on how the HUD 3 of Fig. 2 is arranged in the wearable electronic device.
For example, when the focal point of the HUD 3 lies in the lens plane of the wearable glasses shown in Fig. 2, the lens plane can be used directly as the first reference plane; when the focal point of the HUD 3 does not lie in the lens plane, the plane containing the focal point, which is parallel to the lens plane, is used as the first reference plane.
When the plane containing the focal point of the HUD 3 is the first reference plane, the first distance can be the focal length at which the head-mounted display technique forms the virtual plane. The focal length is the distance from the optical center of the lens to the point where the light converges, and the virtual plane forms at that distance, so the focal length used when the head-mounted display technique forms the virtual plane can serve as the first distance. When the HUD 3 performs the display, the first distance is the focal length set when the HUD 3 was designed.
If the plane containing the focal point of the HUD 3 is not the first reference plane, a fifth distance between the focal point of the HUD 3 and the first reference plane can first be obtained; then, according to the positional relationship between the focal-point plane and the first reference plane, the first distance is obtained by combining the fifth distance and the focal length. Specifically:
In the first direction, when the focal-point plane of the HUD 3 lies between the first reference plane and the virtual plane, the first distance is the sum of the fifth distance and the focal length; when the first reference plane lies between the focal-point plane and the virtual plane, the first distance is the difference between the focal length and the fifth distance.
The capture unit 13 is configured to capture, via the image acquisition device, a first action of an operating body in the first direction.
The second acquisition unit 14 is configured to obtain a second distance, along the first direction, between the first reference plane and the operation plane on which the first action occurs. In this embodiment, the operation plane can be the plane containing the operating point of the operating body when it performs the first action, this plane being parallel to the first reference plane. The autofocus technique of the image acquisition device can capture the operating point of the first action, and the focusing distance to the operation plane can be obtained when the operating point is captured. When the focal point of the image acquisition device lies in the first reference plane, that focusing distance is the second distance.
If the focal point of the image acquisition device does not lie in the first reference plane, a third distance, along the first direction, between the image acquisition device and the first reference plane, and a fourth distance, along the first direction, between the image acquisition device and the operation plane, must first be obtained; the second distance is then derived from the third and fourth distances. Specifically:
In the first direction, if the plane containing the focal point of the image acquisition device lies between the first reference plane and the operation plane, the second distance is the sum of the third distance and the fourth distance; if the first reference plane lies between the focal-point plane and the operation plane, the second distance is the difference between the fourth distance and the third distance.
The determining unit 15 is configured to determine, according to the first distance and the second distance, the position relationship between the operation plane in which the first action takes place and the virtual plane in which the display content lies.
When it is determined, according to the first distance and the second distance, that the operation plane in which the first action takes place is the virtual plane in which the display content lies, the first action operates directly on the virtual plane, and it can be judged that the operating body collides with the virtual plane. When it is determined that the operation plane in which the first action takes place is not the virtual plane, the first action does not operate directly on the virtual plane, and it can be judged that the operating body does not collide with the virtual plane.
The determining unit 15 may determine the position relationship between the operation plane in which the first action takes place and the virtual plane in which the display content lies as follows: when the difference between the first distance and the second distance falls within a first preset range, the operation plane in which the first action takes place is determined to be the virtual plane; when the difference does not fall within the first preset range, the operation plane is determined not to be the virtual plane.
Alternatively, the first distance and the second distance are compared to obtain a comparison result. When the comparison result indicates that the first distance and the second distance are identical, the operation plane in which the first action takes place is determined to be the virtual plane in which the display content lies; when the comparison result indicates that they differ, the operation plane is determined not to be the virtual plane.
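Both decision rules above reduce to one tolerance test; the sketch below is an illustration under that assumption (names are hypothetical, not from the disclosure):

```python
def collides(first_distance: float, second_distance: float,
             tolerance: float = 0.0) -> bool:
    """Collision test between the operating body and the virtual plane.

    With tolerance > 0 this is the 'first preset range' variant;
    with tolerance == 0 it degenerates to the strict-equality variant.
    """
    return abs(first_distance - second_distance) <= tolerance
```

Comparing two scalar distances in this way replaces a full 3D collision test, which is the data-volume reduction the summary claims.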
It can be seen from the above technical scheme that the information processing apparatus provided by an embodiment of the present invention can first obtain the first distance, in the first direction, between the virtual plane in which the display content lies and the first reference plane, and the second distance, in the first direction, between the operation plane in which the first action takes place and the first reference plane, and then determine from the first distance and the second distance the position relationship between the operation plane and the virtual plane, thereby completing the collision detection between the operating body and the virtual plane. Compared with the prior art, performing collision detection by means of the first distance and the second distance simplifies the collision detection, and the amount of data used in the collision detection process is reduced, thereby improving detection efficiency.
Referring to Fig. 7, which shows another structural schematic diagram of the information processing apparatus provided by an embodiment of the present invention: on the basis of Fig. 6, the apparatus further includes a third acquiring unit 16, a recognition unit 17, and an execution unit 18.
The third acquiring unit 16 is configured to obtain, by means of the image collecting device, a first image captured while the operating body performs the first action. When it is determined that the operation plane in which the first action takes place is the virtual plane in which the display content lies, the first action operates directly on the virtual plane, and the display content shown on the virtual plane then needs to be operated on in accordance with the first action.
The recognition unit 17 is configured to identify the first image using image recognition technology, obtaining the position formed on the virtual plane by the operating body when performing the first action. In embodiments of the present invention, the first action of the operating body is an operation on the display content at some position in the virtual plane; therefore, after the first image is obtained, it is identified using image recognition technology, and the position formed on the virtual plane when the first action is performed is obtained through the correspondence between pixels in the first image and coordinates.
The execution unit 18 is configured to obtain and execute, based on the position formed on the virtual plane by the first action, a first instruction corresponding to the first action.
The execution unit 18 can determine, from the position formed on the virtual plane by the first action, the display content currently to be operated on, and then execute the first instruction on that display content. As shown in Fig. 5, the display content shown on the virtual plane includes virtual controls, and the first action is a click on one of those virtual controls; therefore, based on the position formed on the virtual plane by the first action, the virtual control currently being operated can be determined, the first instruction (identical to the instruction bound to that virtual control) can be obtained, and the first instruction is executed on the currently shown display content. The display content obtained after executing the first instruction can then be shown on the virtual plane by the display unit 11 using the head-mounted display technology.
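As a rough illustration of the pixel-to-coordinate correspondence and the click lookup described above (all names and the proportional-mapping assumption are hypothetical; the disclosure does not specify the mapping):

```python
def pixel_to_plane(px, py, image_size, plane_size):
    """Map a pixel located by image recognition in the first image to a
    coordinate on the virtual plane, assuming a simple proportional
    correspondence between image pixels and plane coordinates."""
    iw, ih = image_size
    pw, ph = plane_size
    return (px / iw * pw, py / ih * ph)

def hit_control(point, controls):
    """Return the name of the virtual control whose rectangle
    (x, y, width, height) contains the point, or None if no control
    was clicked; this mirrors the virtual-control lookup above."""
    x, y = point
    for name, (cx, cy, cw, ch) in controls.items():
        if cx <= x <= cx + cw and cy <= y <= cy + ch:
            return name
    return None
```

A click at the centre of a 640x480 first image, for instance, maps to the centre of the virtual plane, and the control whose rectangle contains that point receives the first instruction.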
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another. Since the apparatus embodiments are substantially similar to the method embodiments, they are described relatively simply; for relevant details, refer to the description of the method embodiments.
Finally, it should be noted that relational terms such as "first" and "second" are used herein merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between such entities or operations. Moreover, the terms "comprising", "including", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device comprising a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device comprising that element.
Those of ordinary skill in the art will appreciate that all or part of the flows in the above method embodiments can be implemented by a computer program instructing relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disc, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
The foregoing description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An information processing method, applied to a wearable electronic device, the wearable electronic device comprising an image collecting device, characterized in that the information processing method comprises:
displaying display content using head-mounted display technology;
obtaining a first distance, in a first direction, between a virtual plane in which the display content lies and a first reference plane, the axis of the first reference plane in the first direction being parallel to the axis of the image collecting device in the first direction;
collecting, by the image collecting device, a first action of an operating body in the first direction;
obtaining a second distance, in the first direction, between an operation plane in which the first action takes place and the first reference plane;
determining, according to the first distance and the second distance, a position relationship between the operation plane in which the first action takes place and the virtual plane in which the display content lies, so as to determine according to the position relationship whether the operation plane in which the first action takes place is the virtual plane in which the display content lies.
2. The method according to claim 1, characterized in that, when it is determined that the operation plane in which the first action takes place is the virtual plane in which the display content lies, the method further comprises:
obtaining, by the image collecting device, a first image captured while the operating body performs the first action;
identifying the first image using image recognition technology, and obtaining the position formed on the virtual plane by the operating body when performing the first action;
obtaining and executing, based on the position formed on the virtual plane by the first action, a first instruction corresponding to the first action;
displaying, on the virtual plane using head-mounted display technology, the display content obtained after executing the first instruction.
3. The method according to claim 1, characterized in that obtaining the second distance, in the first direction, between the operation plane in which the first action takes place and the first reference plane comprises:
taking as the second distance the focal distance relative to the operation plane obtained using the autofocus technique of the image collecting device.
4. The method according to claim 1, characterized in that the method further comprises: obtaining a third distance, in the first direction, between the image collecting device and the first reference plane;
obtaining a fourth distance, in the first direction, between the image collecting device and the operation plane, the fourth distance being the focal distance relative to the virtual plane obtained using the autofocus technique of the image collecting device;
wherein the sum of the fourth distance and the third distance is the second distance, or the difference between the fourth distance and the third distance is the second distance.
5. The method according to claim 3 or 4, characterized in that determining, according to the first distance and the second distance, the position relationship between the operation plane in which the first action takes place and the virtual plane in which the display content lies comprises:
comparing the first distance with the second distance to obtain a comparison result;
when the comparison result indicates that the first distance is identical to the second distance, determining that the operation plane in which the first action takes place is the virtual plane in which the display content lies.
6. An information processing apparatus, applied to a wearable electronic device, the wearable electronic device comprising an image collecting device, characterized in that the information processing apparatus comprises:
a display unit, configured to display display content using head-mounted display technology;
a first acquisition unit, configured to obtain a first distance, in a first direction, between a virtual plane in which the display content lies and a first reference plane, the axis of the first reference plane in the first direction being parallel to the axis of the image collecting device in the first direction;
a collecting unit, configured to collect, by the image collecting device, a first action of an operating body in the first direction;
a second acquisition unit, configured to obtain a second distance, in the first direction, between an operation plane in which the first action takes place and the first reference plane;
a determining unit, configured to determine, according to the first distance and the second distance, a position relationship between the operation plane in which the first action takes place and the virtual plane in which the display content lies, so as to determine according to the position relationship whether the operation plane in which the first action takes place is the virtual plane in which the display content lies.
7. The apparatus according to claim 6, characterized in that the apparatus further comprises:
a third acquiring unit, configured to obtain, by the image collecting device, a first image captured while the operating body performs the first action;
a recognition unit, configured to identify the first image using image recognition technology and obtain the position formed on the virtual plane by the operating body when performing the first action;
an execution unit, configured to obtain and execute, based on the position formed on the virtual plane by the first action, a first instruction corresponding to the first action;
the display unit being further configured to display, on the virtual plane using head-mounted display technology, the display content obtained after executing the first instruction.
8. The apparatus according to claim 6, characterized in that the second acquisition unit obtaining the second distance, in the first direction, between the operation plane in which the first action takes place and the first reference plane comprises: taking as the second distance the focal distance relative to the operation plane obtained using the autofocus technique of the image collecting device.
9. The apparatus according to claim 6, characterized in that the second acquisition unit obtaining the second distance, in the first direction, between the operation plane in which the first action takes place and the first reference plane comprises: obtaining a third distance, in the first direction, between the image collecting device and the first reference plane, and a fourth distance, in the first direction, between the image collecting device and the operation plane; and taking the sum of the fourth distance and the third distance as the second distance, or the difference between the fourth distance and the third distance as the second distance, the fourth distance being the focal distance relative to the virtual plane obtained using the autofocus technique of the image collecting device.
10. The apparatus according to claim 8 or 9, characterized in that the determining unit determining, according to the first distance and the second distance, the position relationship between the operation plane in which the first action takes place and the virtual plane in which the display content lies comprises:
comparing the first distance with the second distance to obtain a comparison result;
when the comparison result indicates that the first distance is identical to the second distance, determining that the operation plane in which the first action takes place is the virtual plane in which the display content lies.
CN201410486659.2A 2014-09-22 2014-09-22 A kind of information processing method and device Active CN104199556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410486659.2A CN104199556B (en) 2014-09-22 2014-09-22 A kind of information processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410486659.2A CN104199556B (en) 2014-09-22 2014-09-22 A kind of information processing method and device

Publications (2)

Publication Number Publication Date
CN104199556A CN104199556A (en) 2014-12-10
CN104199556B true CN104199556B (en) 2018-01-16

Family

ID=52084857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410486659.2A Active CN104199556B (en) 2014-09-22 2014-09-22 A kind of information processing method and device

Country Status (1)

Country Link
CN (1) CN104199556B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105204625B (en) * 2015-08-31 2018-05-08 小米科技有限责任公司 Safety protection method and device in reality-virtualizing game
KR20180014492A (en) * 2016-08-01 2018-02-09 삼성전자주식회사 Method for image display and electronic device supporting the same
CN106951087B (en) * 2017-03-27 2020-02-21 联想(北京)有限公司 Interaction method and device based on virtual interaction plane
CN111766937B (en) * 2019-04-02 2024-05-28 广东虚拟现实科技有限公司 Virtual content interaction method and device, terminal equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101206380A (en) * 2006-12-21 2008-06-25 亚洲光学股份有限公司 Method for measuring distance by digital camera
CN102207770A (en) * 2010-03-30 2011-10-05 哈曼贝克自动系统股份有限公司 Vehicle user interface unit for a vehicle electronic device
CN103713387A (en) * 2012-09-29 2014-04-09 联想(北京)有限公司 Electronic device and acquisition method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2377147A (en) * 2001-06-27 2002-12-31 Nokia Corp A virtual reality user interface

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101206380A (en) * 2006-12-21 2008-06-25 亚洲光学股份有限公司 Method for measuring distance by digital camera
CN102207770A (en) * 2010-03-30 2011-10-05 哈曼贝克自动系统股份有限公司 Vehicle user interface unit for a vehicle electronic device
CN103713387A (en) * 2012-09-29 2014-04-09 联想(北京)有限公司 Electronic device and acquisition method

Also Published As

Publication number Publication date
CN104199556A (en) 2014-12-10

Similar Documents

Publication Publication Date Title
US20220004758A1 (en) Eye pose identification using eye features
AU2016310451B2 (en) Eyelid shape estimation using eye pose measurement
CN105933589B (en) A kind of image processing method and terminal
CN112666714B (en) Gaze direction mapping
JP2019527377A (en) Image capturing system, device and method for automatic focusing based on eye tracking
JP5777582B2 (en) Detection and tracking of objects in images
CN102830797B (en) A kind of man-machine interaction method based on sight line judgement and system
CN108320333B (en) Scene adaptive virtual reality conversion equipment and virtual reality scene adaptive method
CN104199556B (en) A kind of information processing method and device
CN104978548A (en) Visual line estimation method and visual line estimation device based on three-dimensional active shape model
CN106326867A (en) Face recognition method and mobile terminal
WO2013155217A1 (en) Realistic occlusion for a head mounted augmented reality display
EP3540574B1 (en) Eye tracking method, electronic device, and non-transitory computer readable storage medium
WO2013185714A1 (en) Method, system, and computer for identifying object in augmented reality
CN106407772A (en) Human-computer interaction and identity authentication device and method suitable for virtual reality equipment
JP5776323B2 (en) Corneal reflection determination program, corneal reflection determination device, and corneal reflection determination method
CN103475893A (en) Device and method for picking object in three-dimensional display
KR102463172B1 (en) Method and apparatus for determining inter-pupilary distance
US20170289518A1 (en) Apparatus for replaying content using gaze recognition and method thereof
Pires et al. Unwrapping the eye for visible-spectrum gaze tracking on wearable devices
EP4172708A1 (en) Visual-inertial tracking using rolling shutter cameras
CN105867607A (en) Menu selection method and device of virtual reality helmet and virtual reality helmet
CN110503068A (en) Gaze estimation method, terminal and storage medium
US8970479B1 (en) Hand gesture detection
Perra et al. Adaptive eye-camera calibration for head-worn devices

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant