CN104239877A - Image processing method and image acquisition device - Google Patents

Image processing method and image acquisition device

Info

Publication number
CN104239877A
Authority
CN
China
Prior art keywords
real-time image
image capture
capture device
image
acquisition target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310244619.2A
Other languages
Chinese (zh)
Other versions
CN104239877B (en)
Inventor
侯欣如
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201310244619.2A priority Critical patent/CN104239877B/en
Publication of CN104239877A publication Critical patent/CN104239877A/en
Application granted granted Critical
Publication of CN104239877B publication Critical patent/CN104239877B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides an image processing method and an image acquisition device. The image processing method is applied to the image acquisition device and comprises the steps of: acquiring a first scene and displaying the acquired real-time image; determining whether an operating body appears in the real-time image; when it is determined that the operating body appears in the real-time image, taking an acquired object in the real-time image whose position corresponds to the operating body as a target object; determining the target area of the target object in the real-time image; and performing image processing on the target area.

Description

Image processing method and image acquisition device
Technical field
The present invention relates to an image processing method applied to an image acquisition device, and to a corresponding image acquisition device.
Background
In recent years, electronic devices such as notebook computers, tablet computers, smartphones, cameras and portable media players have become widely used. These electronic devices generally include an image acquisition component such as a camera, so that the user can conveniently perform image acquisition operations such as taking photographs or recording video. However, when acquiring images, the user often encounters situations in which an undesired subject appears in the captured image. For example, at a tourist attraction, when the user wishes to take a souvenir photograph of family or friends, it is usually difficult to keep strangers out of the frame, and when an angle is chosen to avoid the strangers, the scenic spot that the user wishes to photograph may no longer be captured completely.
Summary of the invention
The object of the embodiments of the present invention is to provide an image processing method and an image acquisition device that solve the above problem.
An embodiment of the present invention provides an image processing method applied to an image acquisition device. The method comprises: acquiring a first scene and displaying the acquired real-time image; determining whether an operating body appears in the real-time image; when it is determined that an operating body appears in the real-time image, taking the acquired object in the real-time image whose position corresponds to the operating body as a target object; determining the target area of the target object in the real-time image; and performing image processing on the target area.
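The embodiments below do not include source code; purely as an illustration of how these five steps could be strung together, the following is a minimal Python sketch, assuming OpenCV (cv2) frames as numpy arrays and taking the detection and segmentation routines as caller-supplied callables, since the method leaves their implementation open.

import cv2

def process_frame(frame, detect_operating_body, find_target_region):
    # Step S101: 'frame' is the real-time image acquired from the first scene; it is
    # displayed unchanged unless a target area is processed below.
    display = frame.copy()

    # Step S102: determine whether an operating body (e.g. a fingertip) appears.
    tip = detect_operating_body(frame)            # assumed callable: returns (x, y) or None
    if tip is None:
        return display

    # Steps S103/S104: take the acquired object at the operating body's position as the
    # target object and determine its target area in the real-time image.
    region = find_target_region(frame, tip)       # assumed callable: returns (x, y, w, h) or None
    if region is None:
        return display

    # Step S105: perform image processing (here, blurring) on the target area only.
    x, y, w, h = region
    display[y:y+h, x:x+w] = cv2.GaussianBlur(display[y:y+h, x:x+w], (31, 31), 0)
    return display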
Another embodiment of the present invention provides an image acquisition device comprising: an acquisition unit configured to acquire a first scene; a display unit configured to display the acquired real-time image; an operating body recognition unit configured to determine whether an operating body appears in the real-time image; an object determination unit configured to, when it is determined that an operating body appears in the real-time image, take the acquired object in the real-time image whose position corresponds to the operating body as a target object; an area determination unit configured to determine the target area of the target object in the real-time image; and an image processing unit configured to perform image processing on the target area.
In the solution provided by the above embodiments of the invention, image processing is performed on a target area determined according to the position of the operating body in the real-time image acquired by the image acquisition device. Objects that the user does not wish to pay attention to can thus be effectively masked, reducing their interference with the displayed content, and the user does not need to change the shooting angle to avoid them.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. The drawings described below show only exemplary embodiments of the present invention.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present invention.
Fig. 2a to Fig. 2c are explanatory diagrams of an illustrative case of determining a target object in the real-time image according to an example of the present invention.
Fig. 3a is an explanatory diagram of an illustrative case in which, according to an example of the present invention, after the target object has been determined as shown in Fig. 2 and its target area in the real-time image has been determined, image processing is performed on the target area.
Fig. 3b is an explanatory diagram of a corresponding illustrative case according to another example of the present invention.
Fig. 4 is an exemplary block diagram of an image acquisition device according to an embodiment of the present invention.
Fig. 5 is an explanatory diagram of an illustrative case in which the image acquisition device shown in Fig. 4 is a glasses-type image acquisition device.
Fig. 6 is a block diagram of the display unit in the image acquisition device.
Fig. 7 is an explanatory diagram of an illustrative situation of the display unit shown in Fig. 6.
Detailed description of the embodiments
Hereinafter, preferred embodiments of the present invention are described in detail with reference to the accompanying drawings. Note that in the present description and drawings, substantially identical steps and elements are denoted by the same reference numerals, and repeated explanation of these steps and elements is omitted.
In the following embodiments of the present invention, the specific form of the image acquisition device includes, but is not limited to, a smartphone, a personal computer, a personal digital assistant, a portable computer, a tablet computer, a multimedia player and the like. According to an example of the present invention, the image acquisition device may be a hand-held electronic device. Preferably, according to another example of the present invention, the image acquisition device may be a head-worn electronic device. In addition, according to another example of the present invention, the image acquisition device may comprise a lens part, and an acquisition unit and a display unit arranged corresponding to the lens part.
Fig. 1 is a flowchart of an image processing method 100 according to an embodiment of the present invention. The image processing method according to the embodiment of the present invention is described below with reference to Fig. 1. The image processing method 100 can be used in the above-mentioned image acquisition device.
As shown in Fig. 1, in step S101, a first scene is acquired and the acquired real-time image is displayed. As mentioned above, according to an example of the present invention, the image acquisition device may comprise a lens part, and an acquisition unit and a display unit arranged corresponding to the lens part. In this case, in step S101, when the lens part is arranged in the user's viewing area, the first scene that the user watches through the lens part is acquired by the acquisition unit to obtain the real-time image. For example, the lens part may be arranged above or below the acquisition unit. Therefore, when the lens part is arranged in the user's viewing area, the acquisition unit can perform image acquisition in a direction similar to the user's viewing direction, thereby obtaining a real-time image of what the user watches through the lens part.
In step S102, it is determined whether an operating body appears in the real-time image. According to an example of the present invention, the operating body may be the user's finger. Alternatively, the operating body may also be a preset operating pen or the like.
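The embodiment does not specify how the operating body is recognised. One possible sketch, assuming OpenCV 4 and that the operating body is a bare finger found by a crude skin-colour segmentation (reliable only under favourable lighting, with illustrative thresholds), is:

import cv2
import numpy as np

def detect_fingertip(frame_bgr):
    # Segment roughly skin-coloured pixels in HSV; the thresholds are assumptions.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None                               # no operating body in the real-time image
    hand = max(contours, key=cv2.contourArea)
    if cv2.contourArea(hand) < 1000:              # ignore small skin-coloured noise
        return None
    # Take the topmost point of the largest skin region as a crude fingertip position.
    x, y = hand[hand[:, :, 1].argmin()][0]
    return int(x), int(y)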
When it is determined that an operating body appears in the real-time image, in step S103 the acquired object in the real-time image whose position corresponds to the operating body is taken as the target object. According to an example of the present invention, in step S103 the acquired objects in the real-time image may first be identified, and the first position of each acquired object in the real-time image is obtained. Then, among the identified acquired objects, the first acquired object whose first position corresponds to the position of the operating body is determined as the target object.
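As one illustration of this step (the text only requires that the objects and their first positions be obtained, for instance by the contour extraction mentioned below), a sketch using OpenCV edge detection and contours might look as follows; the Canny parameters and area threshold are assumptions.

import cv2

def extract_acquired_objects(frame_bgr, min_area=500):
    # Identify acquired objects by their outlines and record a bounding box as the 'first position'.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, None, iterations=2)      # close small gaps in the outlines
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [(c, cv2.boundingRect(c)) for c in contours if cv2.contourArea(c) >= min_area]

def object_at(objects, point):
    # Return the first acquired object whose outline contains the operating body's position.
    px, py = float(point[0]), float(point[1])
    for contour, box in objects:
        if cv2.pointPolygonTest(contour, (px, py), False) >= 0:
            return contour, box
    return None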
Optionally, after the acquired object whose first position corresponds to the position of the operating body has been determined, in step S103 a first distance between the object in the first scene corresponding to the first acquired object and the image acquisition device may also be determined, together with the second distance between each object in the first scene corresponding to an acquired object other than the first acquired object and the image acquisition device. Among the objects of the first scene, the target real objects, i.e. those whose second distance differs from the first distance by no more than a preset distance difference, are then determined, and finally the acquired objects corresponding to the target real objects are also taken as target objects. In an example according to the present invention, the objects in the first scene may be inanimate objects such as buildings, trees and the like. The objects in the first scene may also be living objects such as people, animals and the like.
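The text does not fix how these distances are obtained (a depth sensor or a stereo pair of acquisition modules would both fit). Assuming each acquired object already has an estimated distance to the device in metres, the selection itself reduces to a simple threshold on the distance difference, for example:

def select_same_depth_objects(object_distances, first_object_id, max_diff=1.0):
    # object_distances: {object_id: distance_to_device_in_metres} for every acquired object.
    # Returns the id of the first object plus every object whose distance differs from the
    # first object's by no more than the preset distance difference (max_diff is an assumed value).
    first_distance = object_distances[first_object_id]
    return [obj_id for obj_id, distance in object_distances.items()
            if abs(distance - first_distance) <= max_diff]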
Fig. 2a to Fig. 2c are explanatory diagrams of an illustrative case of determining the target object in the real-time image according to an example of the present invention. In the example shown in Fig. 2a to Fig. 2c, a real-time image 200 of the acquired first scene is shown. As shown in Fig. 2a, when it is determined that an operating body 210 appears in the real-time image 200, the acquired objects 221, 222, 223 and 224 in the real-time image are first identified, and the first position of each of the acquired objects 221, 222, 223 and 224 in the real-time image is obtained. For example, contour extraction may be performed on the real-time image 200 to obtain the outline of each acquired object, thereby determining each acquired object and its position.
Then, as shown by the dashed box in Fig. 2b, the acquired object 222 whose first position corresponds to the position of the operating body 210 is determined as the target object in the real-time image 200. In addition, as shown by the dashed boxes in Fig. 2c, the first distance between the object in the first scene corresponding to the first acquired object 222 and the image acquisition device is determined, as well as the second distances between the objects in the first scene corresponding to the acquired objects 221, 223 and 224 other than the first acquired object 222 and the image acquisition device. Then, among the objects of the first scene (that is, the objects corresponding to the acquired objects 221, 222, 223 and 224), the target real objects whose second distance differs from the first distance by no more than the preset distance difference are determined, and finally the acquired objects 223 and 224 corresponding to those target real objects are also taken as target objects. The user can thus designate multiple target objects in the real-time image at once, without having to designate them one by one, which facilitates the user's operation.
Returning to Fig. 1, in step S104 the target area of the target object in the real-time image is determined. Then, in step S105, image processing is performed on the target area. Fig. 3a is an explanatory diagram of an illustrative case according to an example of the present invention in which, after the target object has been determined as shown in Fig. 2 and its target area in the real-time image 200 has been determined, image processing is performed on the target area. As shown in Fig. 3a, according to an example of the present invention, in step S105 the transparency of the determined target areas 311, 312 and 313 is adjusted so that blurring processing can be performed on the target areas 311, 312 and 313, thereby blurring the target objects 222, 223 and 224.
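A minimal way to approximate this step, assuming rectangular target areas and OpenCV, is shown below; the text speaks of adjusting the transparency of the area, and Gaussian blurring is used here only as one stand-in for the blurred appearance.

import cv2

def blur_target_areas(image_bgr, target_areas, ksize=31):
    # Blur each rectangular target area, leaving the rest of the real-time image unchanged.
    out = image_bgr.copy()
    for (x, y, w, h) in target_areas:
        out[y:y+h, x:x+w] = cv2.GaussianBlur(out[y:y+h, x:x+w], (ksize, ksize), 0)
    return out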
Fig. 3b is an explanatory diagram of an illustrative case according to another example of the present invention in which, after the target object has been determined as shown in Fig. 2 and its target area in the real-time image 200 has been determined, image processing is performed on the target area. As shown in Fig. 3b, according to an example of the present invention, in step S105 second images 321, 322 and 323 related to the real-time image 200 may be obtained, and the second images 321, 322 and 323 are filled into the target areas 311, 312 and 313 to cover the target objects.
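The embodiment does not say how the second image is obtained; one possible reading is to synthesise it from the rest of the real-time image, for which OpenCV inpainting can serve as a sketch (the binary masks, one per target area, are assumed to be given).

import cv2
import numpy as np

def cover_target_areas(image_bgr, target_masks):
    # target_masks: binary uint8 masks with the same height/width as the image, one per target area.
    combined = np.zeros(image_bgr.shape[:2], dtype=np.uint8)
    for mask in target_masks:
        combined = cv2.bitwise_or(combined, mask)
    # Fill the masked target areas from the surrounding pixels, covering the target objects.
    return cv2.inpaint(image_bgr, combined, inpaintRadius=5, flags=cv2.INPAINT_TELEA)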
In the method for the image procossing provided at above-mentioned the present embodiment, by in the realtime graphic that gathers at image capture device, image procossing is carried out to the target area determined according to the position of operating body, effectively can shield the object that user does not wish to pay close attention to, thus minimizing user does not wish that the object paid close attention to is to the interference of shown content, and user does not need to exchange shooting angle to hide its object not wishing to pay close attention to.
An image acquisition device according to an embodiment of the present invention is described below with reference to Fig. 4. Fig. 4 is an exemplary block diagram of an image acquisition device 400 according to an embodiment of the present invention. As shown in Fig. 4, the image acquisition device 400 of the present embodiment comprises an acquisition unit 410, a display unit 420, an operating body recognition unit 430, an object determination unit 440, an area determination unit 450 and an image processing unit 460. The modules of the image acquisition device 400 perform the respective steps/functions of the method 100 in Fig. 1 described above and are therefore, for brevity, not described again in detail.
For example, the acquisition unit 410 may acquire the first scene, and the display unit 420 may display the acquired real-time image. According to an example of the present invention, the image acquisition device 400 may also comprise a lens part, and the acquisition unit 410 and the display unit 420 may be arranged corresponding to the lens part. In this case, when the lens part is arranged in the user's viewing area, the first scene that the user watches through the lens part is acquired by the acquisition unit 410 to obtain the real-time image. For example, the lens part may be arranged above or below the acquisition unit. Therefore, when the lens part is arranged in the user's viewing area, the acquisition unit 410 can perform image acquisition in a direction similar to the user's viewing direction, thereby obtaining a real-time image of what the user watches through the lens part.
The operating body recognition unit 430 may determine whether an operating body appears in the real-time image. According to an example of the present invention, the operating body may be the user's finger. Alternatively, the operating body may also be a preset operating pen or the like.
When it is determined that an operating body appears in the real-time image, the object determination unit 440 may take the acquired object in the real-time image whose position corresponds to the operating body as the target object. According to an example of the present invention, the object determination unit 440 may comprise an object recognition module and an object determination module. The object recognition module can identify the acquired objects in the real-time image and obtain the first position of each acquired object in the real-time image. The object determination module then determines, among the identified acquired objects, the first acquired object whose first position corresponds to the position of the operating body as the target object.
Optionally, the object determination unit 440 may also comprise a distance determination module and an object determination module. After the acquired object whose first position corresponds to the position of the operating body has been determined, the distance determination module may determine the first distance between the object in the first scene corresponding to the first acquired object and the image acquisition device, and determine the second distance between each object in the first scene corresponding to an acquired object other than the first acquired object and the image acquisition device. The object determination module then determines, among the objects of the first scene, the target real objects whose second distance differs from the first distance by no more than a preset distance difference. Finally, the object determination module may also take the acquired objects corresponding to the target real objects as target objects. In an example according to the present invention, the objects in the first scene may be inanimate objects such as buildings, trees and the like. The objects in the first scene may also be living objects such as people, animals and the like.
The area determination unit 450 may determine the target area of the target object in the real-time image. The image processing unit 460 may then perform image processing on the target area. According to an example of the present invention, the image processing unit 460 may perform blurring processing on the target area to blur the target object. Alternatively, according to another example of the present invention, the image processing unit may obtain a second image related to the real-time image and fill the second image into the target area to cover the target object. The display unit may then display the image processed by the image processing unit 460.
In the image acquisition device provided by the present embodiment, image processing is performed on the target area determined according to the position of the operating body in the real-time image acquired by the device, so that objects the user does not wish to pay attention to can be effectively masked, their interference with the displayed content is reduced, and the user does not need to change the shooting angle to avoid them.
In addition, as mentioned above, according to an example of the present invention, the image acquisition device may preferably be a head-worn electronic device, for example a glasses-type image acquisition device. Fig. 5 is an explanatory diagram of an illustrative case in which the image acquisition device 400 shown in Fig. 4 is a glasses-type image acquisition device. For simplicity, the parts of the glasses-type image acquisition device 500 that are similar to the image acquisition device 400 are not described again in connection with Fig. 5.
As shown in Fig. 5, the image acquisition device 500 may further comprise a frame module 510, a lens part 520 and a fixing unit. The lens part 520 is arranged in the frame module 510. The fixing unit of the image acquisition device 500 comprises a first support arm and a second support arm. As shown in Fig. 5, the first support arm comprises a first connecting part 531 (the shaded area in Fig. 5) and a first holding part 532, and the first connecting part 531 connects the frame module 510 and the first holding part 532. The second support arm comprises a second connecting part 541 (the shaded area in Fig. 5) and a second holding part 542, and the second connecting part 541 connects the frame module 510 and the second holding part 542. In addition, a third holding part (not shown) may be provided on the frame module 510; in particular, the third holding part may be arranged at the position of the frame module 510 between the two lenses. The head-mounted electronic device is held on the user's head by the first holding part, the second holding part and the third holding part. Specifically, the first holding part and the second holding part can be used to support the first support arm and the second support arm on the user's ears, and the third holding part can be used to support the frame module 510 on the bridge of the user's nose.
In the present embodiment, the acquisition unit (not shown) of the image acquisition device 500 may be arranged corresponding to the lens part 520, so that the image acquired by the acquisition unit is basically consistent with the scene the user sees. For example, the acquisition unit may be arranged on the frame module 510 between the two lens parts. Alternatively, the acquisition unit of the image acquisition device 500 may be arranged on the frame module 510 corresponding to one of the lens parts. In addition, the acquisition unit of the image acquisition device 500 may also comprise two acquisition modules arranged on the frame module 510 corresponding to the two lenses respectively; the acquisition unit can then process and combine the images acquired by the two acquisition modules so that the processed image is closer to the scene the user actually sees.
Fig. 6 is a block diagram of the display unit 600 in the image acquisition device 500. As shown in Fig. 6, the display unit 600 may comprise a first display module 610, a first optical system 620, a first light guide member 630 and a second light guide member 640. Fig. 7 is an explanatory diagram of an illustrative situation of the display unit 600 shown in Fig. 6.
The first display module 610 may be arranged in the frame module 510 and connected to a first data transmission line. The first display module 610 can display a first image according to a first video signal transmitted over the first data transmission line of the image acquisition device 500. The first data transmission line may be arranged in the fixing unit and the frame module; the display content can be sent over it to the display unit, which displays it to the user. Although a data line is used for the description in the present embodiment, the present invention is not limited thereto; for example, according to another example of the present invention, the display content may also be sent to the display unit by wireless transmission. In addition, according to an example of the present invention, the first display module 610 may be a display module with a small miniature display screen.
The first optical system 620 may also be arranged in the frame module 510. The first optical system 620 can receive the light emitted from the first display module and perform optical path conversion on it to form a first magnified virtual image; that is, the first optical system 620 has positive refractive power. The user can thus clearly watch the first image, and the size of the image watched by the user is not limited by the size of the display unit.
For example, the optical system may comprise a convex lens. Alternatively, in order to reduce aberrations, avoid imaging interference caused by dispersion and the like, and give the user a better visual experience, the optical system may also be a lens assembly formed of a plurality of lenses including convex lenses and concave lenses. In addition, according to an example of the present invention, the first display module 610 and the first optical system 620 may be arranged correspondingly along the optical axis of the first optical system. Alternatively, according to another example of the present invention, the display unit may also comprise a further light guide member that transmits the light emitted from the first display module 610 to the first optical system 620.
As shown in Fig. 7, the first optical system 620 receives the light emitted from the first display module 610 and performs optical path conversion on it, after which the first light guide member 630 can transmit the light passing through the first optical system to the second light guide member 640. The second light guide member 640 may be arranged in the lens part 520, and can receive the light transmitted by the first light guide member 630 and reflect it to the eyes of the user wearing the head-mounted electronic device.
Returning to Fig. 5, the lens part 520 has a first predetermined transmittance in the direction from the inner side to the outer side, so that the user can watch the surrounding environment while watching the first magnified virtual image. For example, when a first image about the target object is generated and displayed by the display unit, the user sees the target object in the first scene through the lens while also seeing the first image displayed by the display unit superimposed on the target object.
A person of ordinary skill in the art will recognise that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two, and that software modules can be placed in any form of computer-readable storage medium. In order to clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above in general terms of their functions. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present invention.
A person skilled in the art should understand that various modifications, combinations, partial combinations and substitutions may be made to the present invention depending on design requirements and other factors, as long as they are within the scope of the appended claims and their equivalents.

Claims (12)

1. An image processing method applied to an image acquisition device, the method comprising:
acquiring a first scene, and displaying the acquired real-time image;
determining whether an operating body appears in the real-time image;
when it is determined that the operating body appears in the real-time image, taking the acquired object in the real-time image whose position corresponds to the operating body as a target object;
determining the target area of the target object in the real-time image; and
performing image processing on the target area.
2. The method of claim 1, wherein
the image acquisition device comprises a lens part, and an acquisition unit and a display unit arranged corresponding to the lens part; and
the acquiring a first scene, and displaying the acquired real-time image comprises:
when the lens part is arranged in the viewing area of a user, acquiring, by the acquisition unit, the first scene watched by the user through the lens part, to obtain the real-time image.
3. The method of claim 1, wherein the taking the acquired object in the real-time image whose position corresponds to the operating body as a target object comprises:
identifying the acquired objects in the real-time image, and obtaining a first position of each acquired object in the real-time image; and
determining, among the acquired objects, a first acquired object whose first position corresponds to the position of the operating body as the target object.
4. The method of claim 3, wherein the taking the acquired object in the real-time image whose position corresponds to the operating body as a target object further comprises:
determining a first distance between the object in the first scene corresponding to the first acquired object and the image acquisition device;
determining a second distance between each object in the first scene corresponding to an acquired object other than the first acquired object and the image acquisition device;
determining, among the objects of the first scene, target real objects whose second distance differs from the first distance by no more than a preset distance difference; and
taking the acquired objects corresponding to the target real objects as the target object.
5. The method of claim 1, wherein the performing image processing on the target area comprises:
performing blurring processing on the target area to blur the target object.
6. The method of claim 1, wherein the performing image processing on the target area comprises:
obtaining a second image related to the real-time image; and
filling the second image into the target area to cover the target object.
7. An image acquisition device, comprising:
an acquisition unit configured to acquire a first scene;
a display unit configured to display the acquired real-time image;
an operating body recognition unit configured to determine whether an operating body appears in the real-time image;
an object determination unit configured to, when it is determined that the operating body appears in the real-time image, take the acquired object in the real-time image whose position corresponds to the operating body as a target object;
an area determination unit configured to determine the target area of the target object in the real-time image; and
an image processing unit configured to perform image processing on the target area.
8. The image acquisition device of claim 7, wherein the object determination unit comprises:
an object recognition module configured to identify the acquired objects in the real-time image and obtain a first position of each acquired object in the real-time image; and
an object determination module configured to determine, among the acquired objects, a first acquired object whose first position corresponds to the position of the operating body as the target object.
9. The image acquisition device of claim 8, wherein the object determination unit further comprises:
a distance determination module configured to determine a first distance between the object in the first scene corresponding to the first acquired object and the image acquisition device, and to determine a second distance between each object in the first scene corresponding to an acquired object other than the first acquired object and the image acquisition device; and
an object determination module configured to determine, among the objects of the first scene, target real objects whose second distance differs from the first distance by no more than a preset distance difference,
wherein the object determination module is further configured to take the acquired objects corresponding to the target real objects as the target object.
10. The image acquisition device of claim 7, wherein
the image processing unit performs blurring processing on the target area to blur the target object.
11. The image acquisition device of claim 7, wherein
the image processing unit obtains a second image related to the real-time image, and fills the second image into the target area to cover the target object.
12. The image acquisition device of claim 7, wherein the image acquisition device is a glasses-type image acquisition device, and the image acquisition device further comprises:
a frame member;
a lens part arranged in the frame member; and
a fixing unit comprising:
a first support arm comprising a first connecting part and a first holding part, wherein the first connecting part is configured to connect the frame member and the first holding part; and
a second support arm comprising a second connecting part and a second holding part, wherein the second connecting part is configured to connect the frame member and the second holding part,
wherein the frame member comprises a third holding part,
the first holding part, the second holding part and the third holding part are configured to hold the image acquisition device on the head of a user,
the acquisition unit is arranged corresponding to the lens part, and
the display unit comprises:
a first display module arranged in the frame member;
a first optical system arranged in the frame member and configured to receive light emitted from the first display module and perform optical path conversion on the light to form a first magnified virtual image;
a first light guide member configured to transmit the light passing through the first optical system to a second light guide member; and
the second light guide member, arranged in the lens part and configured to reflect the light transmitted by the first light guide member to the eyes of a user wearing the image acquisition device.
CN201310244619.2A 2013-06-19 2013-06-19 Image processing method and image acquisition device Active CN104239877B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310244619.2A CN104239877B (en) 2013-06-19 2013-06-19 Image processing method and image acquisition device

Publications (2)

Publication Number Publication Date
CN104239877A (en) 2014-12-24
CN104239877B CN104239877B (en) 2019-02-05

Family

ID=52227902

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310244619.2A Active CN104239877B (en) 2013-06-19 2013-06-19 Image processing method and image acquisition device

Country Status (1)

Country Link
CN (1) CN104239877B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1784649A (en) * 2003-04-08 2006-06-07 智能技术公司 Auto-aligning touch system and method
CN1904806A (en) * 2006-07-28 2007-01-31 上海大学 System and method of contactless position input by hand and eye relation guiding
CN101510957A (en) * 2008-02-15 2009-08-19 索尼株式会社 Image processing device, camera device, communication system, image processing method, and program
US20120092328A1 (en) * 2010-10-15 2012-04-19 Jason Flaks Fusing virtual content into real content
CN102681651A (en) * 2011-03-07 2012-09-19 刘广松 User interaction system and method
CN102701033A (en) * 2012-05-08 2012-10-03 华南理工大学 Elevator key and method based on image recognition technology

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106033622A (en) * 2015-03-19 2016-10-19 联想(北京)有限公司 Data collection method and device and target object reconstructing method and device
CN106033622B (en) * 2015-03-19 2020-03-24 联想(北京)有限公司 Data acquisition and target object reconstruction method and device
CN105898346A (en) * 2016-04-21 2016-08-24 联想(北京)有限公司 Control method, electronic equipment and control system
CN111208906A (en) * 2017-03-28 2020-05-29 联想(北京)有限公司 Method and display system for presenting image
CN111208906B (en) * 2017-03-28 2021-12-24 联想(北京)有限公司 Method and display system for presenting image

Also Published As

Publication number Publication date
CN104239877B (en) 2019-02-05

Similar Documents

Publication Publication Date Title
EP3011418B1 (en) Virtual object orientation and visualization
US9245389B2 (en) Information processing apparatus and recording medium
EP2824541B1 (en) Method and apparatus for connecting devices using eye tracking
US11314323B2 (en) Position tracking system for head-mounted displays that includes sensor integrated circuits
CN105718046A (en) Head-Mount Display for Eye Tracking based on Mobile Device
KR20160113139A (en) Gaze swipe selection
JP2015507860A (en) Guide to image capture
US20150015542A1 (en) Control Method And Electronic Device
KR20130034125A (en) Augmented reality function glass type monitor
KR20210052570A (en) Determination of separable distortion mismatch
EP4044000A1 (en) Display method, electronic device, and system
CN107835404A (en) Method for displaying image, equipment and system based on wear-type virtual reality device
CN104239877A (en) Image processing method and image acquisition device
US20160189341A1 (en) Systems and methods for magnifying the appearance of an image on a mobile device screen using eyewear
CN104077784A (en) Method for extracting target object and electronic device
US11900058B2 (en) Ring motion capture and message composition system
CN109947240B (en) Display control method, terminal and computer readable storage medium
CN111464781A (en) Image display method, image display device, storage medium, and electronic apparatus
CN104062758B (en) Image display method and display equipment
US9898661B2 (en) Electronic apparatus and method for storing data
US20230217007A1 (en) Hyper-connected and synchronized ar glasses
CN103677704B (en) Display device and display methods
US11979667B1 (en) Hyperlapse imaging using wearable devices
WO2021057420A1 (en) Method for displaying control interface and head-mounted display
CN108415158A (en) A kind of device and method for realizing augmented reality

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant