CN103150009A - Information processing apparatus, information processing method, and program - Google Patents

Information processing apparatus, information processing method, and program

Info

Publication number
CN103150009A
Authority
CN
China
Prior art keywords
operator
watch point
virtual viewpoint
motion
unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102544608A
Other languages
Chinese (zh)
Other versions
CN103150009B (en)
Inventor
藤原达雄
尾上直之
山下润一
孙赟
小林直树
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Interactive Entertainment Inc
Sony Corp
Original Assignee
Sony Corp
Sony Computer Entertainment Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp, Sony Computer Entertainment Inc
Publication of CN103150009A
Application granted
Publication of CN103150009B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering
    • G06T15/10 Geometric effects
    • G06T15/20 Perspective computation

Abstract

The invention relates to an information processing apparatus, an information processing method, and a program. An information processing apparatus includes an image generation unit configured to generate a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint, a detection unit configured to detect a movement of an operator, and a viewpoint displacement unit configured to displace the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.

Description

Information processing apparatus, information processing method, and program
Technical field
The present disclosure relates to an information processing apparatus, an information processing method, and a program capable of displaying an image or the like in a three-dimensional virtual space.
Background art
There is known a system that uses an image sensor to obtain an image of an operator and measure facial parameters such as head pose, gaze direction, eye closure, and facial expression. For example, as disclosed in Japanese Patent Translation Publication No. 2009-517745 (hereinafter referred to as Patent Document 1), the measured facial parameters are used in fields such as human-machine interface (HMI) design and the measurement of interaction between a system and an operator. Patent Document 1 further discloses a driver assistance system that derives fatigue and distraction information from the facial parameters (see paragraphs 0005 and 0006 of Patent Document 1, etc.).
Summary of the invention
Patent Document 1 further discloses that, as interfaces for PCs develop, face and gaze tracking systems using the above facial parameters may become as ubiquitous as the computer keyboard and mouse (see paragraph 0007 of Patent Document 1, etc.). In other words, information processing systems based on the motion of an operator, such as the tracking system described above, are expected to become widespread.
In view of the circumstances described above, it is desirable to provide an information processing apparatus, an information processing method, and a program capable of realizing an image display system with high operability based on the motion of an operator.
According to an embodiment of the present disclosure, there is provided an information processing apparatus including an image generation unit, a detection unit, and a viewpoint displacement unit.
The image generation unit is configured to generate a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint.
The detection unit is configured to detect a movement of an operator.
The viewpoint displacement unit is configured to displace the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
In this information processing apparatus, a viewpoint image is generated for the case where the watch point is viewed from the predetermined virtual viewpoint. On the basis of the movement of the operator, the virtual viewpoint is displaced with the set watch point as a reference. Therefore, an image display system with high operability based on the movement of the operator can be obtained.
The viewpoint displacement unit may be configured to displace the virtual viewpoint around the set watch point as a center.
This makes it possible, for example, to perform an intuitive operation of moving around the watch point in the three-dimensional virtual space while keeping it in view.
The detection unit may be configured to detect the movement of the operator from a captured image of the operator.
Therefore, the movement of the operator can be detected with high accuracy on the basis of the image of the operator.
The detection unit may be configured to detect a position of the face of the operator. In this case, the viewpoint displacement unit may be configured to displace the virtual viewpoint on the basis of a displacement of the face position.
In this way, the virtual viewpoint is displaced on the basis of the displacement of the face position. Therefore, it becomes possible to perform an intuitive operation of viewing the watch point while moving the face.
The detection unit may be configured to detect a direction of the face of the operator. In this case, the viewpoint displacement unit may be configured to displace the virtual viewpoint on the basis of a displacement of the face direction.
In this way, the virtual viewpoint is displaced on the basis of the displacement of the face direction. Therefore, it becomes possible to perform an intuitive operation of viewing the watch point while moving the face.
The detection unit may be configured to detect a movement of a hand of the operator. In this case, the viewpoint displacement unit may be configured to displace the virtual viewpoint on the basis of the movement of the hand.
In this way, the virtual viewpoint is displaced on the basis of the movement of the hand. Therefore, it becomes possible to perform an intuitive operation such as viewing the surroundings of the watch point while rotating around it with the hand.
The information processing apparatus may further include a setting unit configured to set the watch point.
With this structure, it is possible, for example, to perform an operation of setting the watch point on an object of interest or the like in the virtual space.
The information processing apparatus may further include an interface unit configured to receive an operation by the operator. In this case, the setting unit may set the watch point on the basis of the operation received by the interface unit.
In this way, the watch point can be set by an operation of the operator. Therefore, operability can be improved.
The setting unit may set the watch point on the basis of the detected movement of the operator.
In this way, the watch point can be set on the basis of the movement of the operator. Therefore, operability can be improved.
The information processing apparatus may further include a watch point displacement unit and a switch unit.
The watch point displacement unit is configured to displace the watch point with the virtual viewpoint as a reference.
The switch unit is configured to switch between the viewpoint displacement unit and the watch point displacement unit.
With this structure, it is possible to switch which of the viewpoint and the watch point serves as the reference for control. Therefore, an image display system with high operability can be obtained.
According to another embodiment of the present disclosure, there is provided an information processing method including generating a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint.
A movement of an operator is detected.
On the basis of the detected movement of the operator, the virtual viewpoint is displaced with the set watch point as a reference.
According to another embodiment of the present disclosure, there is provided a program that causes a computer to execute a generation step, a detection step, and a displacement step.
The generation step generates a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint.
The detection step detects a movement of an operator.
The displacement step displaces the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
As described above, according to the embodiments of the present disclosure, an image display system with high operability based on the movement of an operator can be obtained.
These and other objects, features, and advantages of the present disclosure will become more apparent from the following detailed description of the best mode embodiments thereof, as illustrated in the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram showing the structure of an information processing system including at least an information processing apparatus according to a first embodiment of the present disclosure;
Fig. 2 is a schematic diagram for explaining a functional structure example of the information processing apparatus according to the first embodiment;
Fig. 3 is a schematic diagram for explaining the watch point and the virtual viewpoint according to the first embodiment;
Fig. 4 is a schematic diagram showing an example of the process of determining the virtual viewpoint with the watch point as a reference according to the first embodiment;
Fig. 5 is a schematic diagram showing an example of the process of determining the virtual viewpoint with the watch point as a reference according to the first embodiment;
Fig. 6 is a schematic diagram showing an example of the process of determining the virtual viewpoint with the watch point as a reference according to the first embodiment;
Fig. 7 is a flowchart showing an operation example of the information processing apparatus according to the first embodiment;
Fig. 8 is a schematic diagram for explaining viewpoint control centered on the current position according to the first embodiment;
Fig. 9 is a schematic diagram for explaining the operation of an information processing apparatus according to a second embodiment of the present disclosure;
Fig. 10 is a schematic diagram for explaining the operation of an information processing apparatus according to a third embodiment of the present disclosure;
Fig. 11 is a schematic diagram showing another example of an image display system according to the present disclosure.
Embodiments
According to an embodiment of the present disclosure, there is provided an information processing apparatus including: an image generation unit configured to generate a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint; a detection unit configured to detect a movement of an operator; and a viewpoint displacement unit configured to displace the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
According to another embodiment of the present disclosure, there is provided an information processing method including: a generation step of generating a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint; a detection step of detecting a movement of an operator; and a displacement step of displacing the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.
<First embodiment>
Fig. 1 is a block diagram showing the structure of an information processing system including at least an information processing apparatus according to the first embodiment of the present disclosure. As the information processing apparatus 100, various computers such as a game console or a PC (personal computer) are used.
The information processing apparatus 100 according to the present embodiment performs rendering of 3D objects, each composed of a plurality of polygons and the like, arranged in a 3D virtual space. Such 3D images are used in CAD (computer-aided design), games, and the like.
The information processing apparatus 100 is provided with a CPU (central processing unit) 101, a ROM (read-only memory) 102, a RAM (random access memory) 103, an input/output interface 105, and a bus 104 that connects these components to one another.
A display unit 106, an input unit 107, a storage unit 108, a communication unit 109, an image pickup unit 110, a drive unit 111, and the like are connected to the input/output interface 105.
The display unit 106 is a display device using, for example, liquid crystal, EL (electroluminescence), or a CRT (cathode-ray tube).
The input unit 107 is, for example, a controller, a pointing device, a keyboard, a touch panel, or another operating device. In a case where the input unit 107 includes a touch panel, the touch panel may be provided integrally with the display unit 106. In the present embodiment, the input/output interface 105 receives operations by the operator through the input unit 107.
The storage unit 108 is a non-volatile storage device such as an HDD (hard disk drive), a flash memory, or another solid-state memory.
The image pickup unit 110 has an image pickup control unit, an image pickup element, and an image pickup optical system (not shown). As the image pickup element, a CMOS (complementary metal-oxide-semiconductor) sensor or a CCD (charge-coupled device) sensor is used. The image pickup optical system forms an image of a subject on the image pickup surface of the image pickup element. The image pickup control unit drives the image pickup element and performs signal processing on the image signal output from the image pickup element on the basis of instructions from the CPU 101.
In the present embodiment, a facing camera is provided above the display unit 106 as the image pickup unit 110. This facing camera photographs the operator playing a game or the like with the information processing apparatus 100.
The drive unit 111 is a device capable of driving a removable recording medium 112 such as an optical recording medium, a floppy (registered trademark) disk, a magnetic tape, or a flash memory. In contrast, the storage unit 108 is typically used as a device that is mounted in the information processing apparatus 100 in advance and mainly drives a non-removable recording medium.
The communication unit 109 is a modem, a router, or another communication device capable of connecting to a LAN (local area network), a WAN (wide area network), or the like to communicate with other devices. The communication unit 109 may perform either wired or wireless communication. The communication unit 109 may also be used separately from the information processing apparatus 100.
Fig. 2 is a schematic diagram for explaining a functional structure example of the information processing apparatus 100 according to the first embodiment. The functional blocks shown in Fig. 2 are realized by causing software resources, such as programs stored in the ROM 102 or the storage unit 108 shown in Fig. 1, to cooperate with hardware resources such as the CPU 101.
For example, the program is installed in the information processing apparatus 100 from the recording medium 112. Alternatively, the program may be installed in the information processing apparatus 100 through the communication unit 109.
As shown in Fig. 2, the information processing apparatus 100 has a display unit 120, a face detection unit 121, a face position calculation unit 122, a watch point determination unit 123, and a virtual viewpoint determination unit 124.
The display unit 120 generates a viewpoint image in a case where a watch point set in the 3D virtual space is viewed from a predetermined virtual viewpoint. That is, in the present embodiment, a viewpoint image representing the 3D virtual space is generated on the basis of the 3D coordinates of the watch point and of the virtual viewpoint and information on its direction. In the present embodiment, the display unit 120 functions as the image generation unit.
Fig. 3 is a schematic diagram for explaining the watch point and the virtual viewpoint described above. As shown in Fig. 3, an object 501 is arranged in a 3D virtual space 500. In this example, a watch point 505 is set at the center of the object 501. Then, a viewpoint image is generated for the case where the watch point 505 is viewed from a virtual viewpoint 510. This viewpoint image is output to the display device used as the display unit 106.
For example, the display unit 120 reads data on polygons and textures from a 3D map stored in the storage unit 108 or the like. Then, a viewpoint image is generated as a 2D image, and the generated 2D image is output. It should be noted that the method of generating an image representing the 3D virtual space 500 and the techniques used for it are not limited. Further, a 3D image may be generated as the viewpoint image, and this 3D image may be output to a display device capable of displaying 3D images.
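For illustration only, the relationship between the watch point 505 and the virtual viewpoint 510 that the display unit 120 needs for rendering can be sketched as follows. This C++ fragment is not part of the disclosure; the Vec3 and CameraBasis types and the buildLookAt function are assumed names. It only shows that the viewing direction always points from the virtual viewpoint toward the watch point.

```cpp
// Illustrative sketch (not from the disclosure): derive the camera basis that
// the display unit 120 would use to render the viewpoint image from the 3D
// coordinates of the watch point 505 and the virtual viewpoint 510.
#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

Vec3 normalize(Vec3 v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return {v.x / len, v.y / len, v.z / len};
}

struct CameraBasis { Vec3 position, forward, right, up; };

// The forward axis always points from the virtual viewpoint toward the watch
// point; the scene rendered with this basis is the viewpoint image.
CameraBasis buildLookAt(Vec3 virtualViewpoint, Vec3 watchPoint, Vec3 worldUp) {
    Vec3 forward = normalize(sub(watchPoint, virtualViewpoint));
    Vec3 right   = normalize(cross(forward, worldUp));
    Vec3 up      = cross(right, forward);
    return {virtualViewpoint, forward, right, up};
}
```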
The face detection unit 121 receives a captured image of the operator from the camera serving as the image pickup unit 110 and detects the face of the operator from the captured image. For example, face detection processing is performed on the 2D captured image input from the camera. For the face detection processing, a learning-based algorithm such as the Viola-Jones method is used. However, the technique used for the face detection processing is not limited, and other techniques, algorithms, and the like may be used as appropriate.
In the present embodiment, as the detection result of the face detection unit 121, face rectangle coordinate data indicating the position of the detected face in the coordinate system of the captured image (in x-y pixel units) is output. The face rectangle coordinate data is output to the face position calculation unit 122.
The face position calculation unit 122 calculates the centroid position of the rectangle from the face rectangle coordinate data and calculates the face position from the centroid. Further, the face position calculation unit 122 calculates the size of the face from the width and height of the rectangle. The data on the position and size of the face calculated by the face position calculation unit 122 are output to the virtual viewpoint determination unit 124.
In the present embodiment, the face detection unit 121 and the face position calculation unit 122 function as the detection unit that detects the movement of the operator. As described above, in the present embodiment, the movement of the operator is detected from the captured image of the operator. Further, the position of the face of the operator is detected from the captured image of the operator.
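As a minimal sketch of the calculation performed by the face position calculation unit 122 (the structure and function names below are assumptions, not part of the disclosure), the centroid and a size value can be derived from the face rectangle as follows.

```cpp
// Sketch only: turn the face rectangle reported by the face detection unit 121
// into the face position (rectangle centroid) and a face size value.
struct FaceRect { double x, y, width, height; };   // pixel coordinates in the captured image

struct FaceObservation {
    double centerX, centerY;   // face position: centroid of the detected rectangle
    double size;               // face size derived from the rectangle's width and height
};

FaceObservation computeFaceObservation(const FaceRect& r) {
    FaceObservation obs;
    obs.centerX = r.x + r.width  * 0.5;
    obs.centerY = r.y + r.height * 0.5;
    // One plausible size measure; the disclosure only states that the size is
    // calculated from the width and height of the rectangle.
    obs.size = 0.5 * (r.width + r.height);
    return obs;
}
```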
The watch point determination unit 123 determines the watch point 505 on the basis of an operation input with the controller serving as the input unit 107. In the present embodiment, the watch point determination unit 123 functions as the setting unit that sets the watch point.
For example, a pointer capable of selecting a specific position on the displayed 2D image may be displayed. The operator operates the controller to move the pointer to a desired position on the 2D image. Then, the operator presses a predetermined button or the like, whereby the position of the pointer is determined as the watch point 505.
Alternatively, a 3D polygonal object 501 displayed at the pointer position may be set as a watch target object. For example, the center or centroid position of the set watch target object may be determined as the watch point 505. Alternatively, on the basis of the shape or the like of the object 501 selected as the watch target object, a point around the object 501 from which the object 501 can be observed easily may be determined as the watch point 505 as appropriate.
In the present embodiment, the 3D coordinates of the watch point 505 are calculated with reference to the 3D map. The coordinate data is output to the virtual viewpoint determination unit 124.
It should be noted that 2D coordinates may be input as the position of the pointer operated with the controller, and the 3D coordinates of the watch point 505 may be set on the basis of these 2D coordinates. Alternatively, in a case where the pointer can be displayed according to 3D coordinates, 3D coordinates may be input as the position of the pointer and set as the coordinates of the watch point 505.
The virtual viewpoint determination unit 124 determines the virtual viewpoint 510 with the watch point 505 as a reference, on the basis of the position and size of the face. When the position and size of the face are displaced, the virtual viewpoint 510 is displaced with the watch point 505 as a reference according to this displacement. In other words, in the present embodiment, the virtual viewpoint determination unit 124 functions as the viewpoint displacement unit that displaces the virtual viewpoint 510 with the set watch point 505 as a reference on the basis of the detected movement of the operator. The virtual viewpoint is represented by 3D coordinates.
Figs. 4 to 6 are schematic diagrams each showing an example of the process of determining the virtual viewpoint 510 with the watch point 505 as a reference. As shown in Fig. 4A, the position of the face 552 of an operator 550 and its displacement 553 are calculated from a captured image 551 of the operator 550.
For example, when the operator 550 moves the face to the right with respect to the display screen, the face 552 of the operator 550 appears on the left side in the captured image 551 (see reference numeral 552R). That is, in a case where the face 552 is displaced to the left side of the captured image 551, a movement of the operator 550 to the right is detected.
On the other hand, for example, when the operator 550 moves the face to the left with respect to the display screen, the face 552 of the operator 550 appears on the right side of the captured image 551 (see reference numeral 552L). That is, in a case where the face 552 is displaced to the right side of the captured image 551, a movement of the operator 550 to the left is detected.
As shown in Fig. 4B, in the present embodiment, the virtual viewpoint 510 is arranged on a sphere 520 centered on the watch point 505. On the basis of the detected displacement of the position of the face 552, the virtual viewpoint 510 is displaced on the sphere 520 so as to rotate around the watch point 505. It should be noted that in Fig. 4B the operator 550 is drawn at the position of the virtual viewpoint 510 for ease of explanation.
In the present embodiment, according to a movement of the operator 550 to the right, the virtual viewpoint 510 is displaced to the right side with respect to the watch point 505 (in the direction indicated by arrow R). Further, according to a movement of the operator 550 to the left, the virtual viewpoint 510 is displaced to the left side with respect to the watch point 505 (in the direction indicated by arrow L).
For example, as shown in Fig. 4A, in a case where the face 552 moves in the horizontal direction, the virtual viewpoint 510 may be displaced along the circumference of a circle 521 that is parallel to the XY plane and centered on the watch point 505, as shown in Fig. 5A. Therefore, an operation of moving around the watch point 505 from side to side while gazing at it can be performed.
Further, a movement of the face 552 in the vertical direction may be detected from the captured image 551. In this case, as shown in Fig. 5B, the virtual viewpoint 510 may be displaced along the circumference of a circle 522 that is parallel to the XZ plane and centered on the watch point 505. In a case where the face moves upward, the virtual viewpoint 510 is displaced upward with respect to the object 501. In a case where the face moves downward, the virtual viewpoint 510 is displaced in the opposite direction.
As described above, in the present embodiment, movements of the face in the horizontal and vertical directions are detected, and the virtual viewpoint 510 is displaced on the sphere 520 according to these movements. Therefore, an image display system with high operability based on the movement of the operator 550 can be obtained.
It should be noted that the displacement of the virtual viewpoint 510 corresponding to the horizontal and vertical movements of the face is not limited to the displacement described above. For example, the displacement of the face position and the displacement of the virtual viewpoint 510 may be matched to each other as appropriate on the basis of the shape, size, and the like of the object 501.
As shown in Fig. 6, in the present embodiment, the face size data are also used in the process of determining the virtual viewpoint 510. In the present embodiment, the following mapping is used: a change in the face size (that is, in the distance between the camera and the face) is reflected as a change in the distance between the object 501 and the operator 550.
That is, in the present embodiment, the radius of the sphere 520 is determined on the basis of the size of the face 552. In a case where the operator 550 moves the face 552 closer to the display screen, an increase in the size of the face 552 is detected. Accordingly, the radius of the sphere 520 is reduced (in the direction indicated by arrow E) according to the increase in face size. That is, the object 501 is displayed enlarged on the display screen.
On the other hand, in a case where the operator 550 moves the face 552 away from the display screen, a decrease in the size of the face 552 is detected. Accordingly, the radius of the sphere 520 is increased (in the direction indicated by arrow S) according to the decrease in the size of the face 552. That is, the object 501 is displayed reduced on the display screen. As described above, by using the data on the face size, an image display system with high operability based on the movement of the operator 550 can be obtained.
When the virtual viewpoint 510 is first set, a default setting may be applied to the virtual viewpoint 510 in advance. Then, a viewpoint image for the case where the watch point 505 is viewed from this virtual viewpoint 510 can be generated. After that, the displacement processing of the virtual viewpoint 510 described above can be performed on the basis of the displacement of the position and size of the face 552.
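The mapping described with reference to Figs. 4 to 6 can be summarized in a short sketch. The fragment below reuses the Vec3 and FaceObservation types of the earlier sketches; the gain constants and function names are assumptions, not values from the disclosure. The horizontal and vertical displacement of the face in the captured image 551 changes the azimuth and elevation of the virtual viewpoint 510 on the sphere 520, and a change in face size changes the sphere radius, so that moving the face closer to the display screen enlarges the object 501.

```cpp
// Sketch: keep the virtual viewpoint 510 on a sphere centered on the watch
// point 505. Azimuth/elevation follow the face position, the radius follows
// the face size (face closer to the camera -> smaller radius -> zoom in).
#include <algorithm>
#include <cmath>

struct SphericalViewpoint { double azimuth, elevation, radius; };

// Convert the spherical parameters back to the 3D coordinates of the viewpoint.
Vec3 toCartesian(const SphericalViewpoint& s, Vec3 watchPoint) {
    return { watchPoint.x + s.radius * std::cos(s.elevation) * std::cos(s.azimuth),
             watchPoint.y + s.radius * std::cos(s.elevation) * std::sin(s.azimuth),
             watchPoint.z + s.radius * std::sin(s.elevation) };
}

void updateViewpoint(SphericalViewpoint& vp,
                     const FaceObservation& prev, const FaceObservation& cur) {
    const double kAngleGain = 0.01;   // assumed: radians per pixel of face motion
    const double kZoomGain  = 0.05;   // assumed: radius change per unit of size change

    // The face moving to the left in the image means the operator moved to the
    // right, so the viewpoint orbits to the right (arrow R in Fig. 4B).
    vp.azimuth   += kAngleGain * (prev.centerX - cur.centerX);
    vp.elevation += kAngleGain * (prev.centerY - cur.centerY);
    vp.elevation  = std::clamp(vp.elevation, -1.4, 1.4);   // avoid flipping over the poles

    // A larger face (operator closer to the screen) shrinks the radius, so the
    // object 501 is displayed enlarged; a smaller face enlarges the radius.
    vp.radius -= kZoomGain * (cur.size - prev.size);
    vp.radius  = std::max(vp.radius, 0.1);
}
```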
Fig. 7 is a flowchart showing an operation example of the information processing apparatus 100 according to the present embodiment.
First, the face detection unit 121 performs the face detection processing (step 101). Then, it is determined whether a face 552 is present in the captured image 551 (step 102). In a case where it is determined that no face 552 is present in the captured image 551 (No), viewpoint control centered on the current position is performed (step 103).
The viewpoint control centered on the current position will now be described. Fig. 8 is a schematic diagram for explaining the viewpoint control centered on the current position. In the present embodiment, the viewpoint control centered on the current position is performed as described below on the basis of operations using the arrow keys of the controller or the like.
As shown in Fig. 8, the viewpoint control centered on the current position is viewpoint control in which a target point 515 is displaced around the current virtual viewpoint 510 as a center. The target point 515 is the point viewed from the virtual viewpoint 510. As shown in Fig. 8A, the target point 515 is displaced with the virtual viewpoint 510 as a reference in the rotational and linear directions indicated by arrows R, L, E, and S.
Then, as shown in Fig. 8B, the target point 515 is displaced from the object 501 to another object 511, and a viewpoint image in which the object 511 is displayed is generated. That is, the viewpoint control centered on the current position allows the operator 550 to perform, for example, an operation of looking around from the operator's own position.
It should be noted that this viewpoint control is not limited to the case where the virtual viewpoint 510 is fixed in position and the target point 515 rotates around it. So-called first-person viewpoint control, in which the operator 550 or a character on the display screen operated by the operator 550 is set as the reference, may be used instead.
In step 102, in a case where it is determined that a face 552 is present in the captured image 551 (Yes), the face position calculation unit 122 calculates the position and size of the face (step 104). Then, in step 105, it is determined whether a button operation for setting the watch point 505 has been input.
In a case where no button-press operation has been input (No in step 105), the viewpoint control centered on the current position is performed (step 103). In a case where a button-press operation has been input (Yes in step 105), it is determined whether a dominant 3D object is present at the coordinates of the pointer on the display screen (step 106). A dominant 3D object is an object that can be the target of the viewpoint control centered on the watch point, with the watch point 505 set as the reference as described above.
That is, in the present embodiment, the objects 501 that can be subjected to the viewpoint control centered on the watch point are set in advance. In a case where such an object 501 is selected with the pointer, the viewpoint control centered on the watch point is performed (from Yes in step 106 to step 107).
The operator 550 can thereby perform an intuitive operation of moving around the object 501 while moving the face 552 in order to observe the object 501. Therefore, for example, even on a display device that displays 2D images, the viewpoint image can be observed with a full sense of the 3D space.
In a case where it is determined that no dominant 3D object is present at the coordinates of the pointer (No in step 106), the viewpoint control centered on the current position is performed (step 103). In this way, by setting the dominant objects appropriately, smooth switching can be performed between the viewpoint control of step 103 and the viewpoint control of step 107. For example, such a setting may be made that the viewpoint control centered on the watch point is performed only for important objects such as a treasure chest.
It should be noted that, if the watch point is set by pressing the button of the controller, the viewpoint control centered on the watch point may generally be performed without setting dominant objects.
As shown in the flowchart of Fig. 7, in the present embodiment, the viewpoint control centered on the current position using the controller is basically performed. Therefore, first, the operator 550 can intuitively operate the motion from the first-person viewpoint, the motion of a character, and the like.
However, in this viewpoint control, when for example moving around an important object 501 in order to examine it, it is necessary to move the target point 515 while performing a rotational movement. In other words, an operation of adjusting the direction of the line of sight while moving has to be repeated, which may make the operation complicated.
In this case, the operator 550 sets the watch point 505 on the important object 501 using the controller. Thereby, as shown in step 107, the viewpoint control centered on the watch point is selected. The operator 550 can thus move around the object 501 while moving the face 552 and examine the object 501 from various angles. As a result, an image display system with high operability and a high degree of freedom is obtained.
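The decision flow of Fig. 7 can also be sketched as a simple per-frame dispatch. In the fragment below, all predicates and handlers are assumed stubs introduced only to illustrate the flowchart; they are not APIs defined in the disclosure. Watch-point-centered viewpoint control is selected only when a face is detected, the watch point setting button has been pressed, and a dominant 3D object lies at the pointer coordinates; in all other cases control falls back to the viewpoint control centered on the current position.

```cpp
// Sketch of the per-frame dispatch corresponding to the flowchart of Fig. 7.
// The predicates and handlers below are assumed stubs, not APIs from the patent.
struct CapturedImage {};                                         // stand-in for one camera frame
struct PointerState { double x, y; };                            // pointer coordinates on the screen
struct ControllerState { bool setWatchPointButtonPressed; PointerState pointer; };

bool detectFace(const CapturedImage&, FaceRect*);                // face detection unit 121
bool dominantObjectUnderPointer(const PointerState&);            // step 106 predicate
void controlViewpointAroundCurrentPosition(const ControllerState&);   // step 103
void controlViewpointAroundWatchPoint(const FaceObservation&);        // step 107

void processFrame(const CapturedImage& image, const ControllerState& pad) {
    FaceRect rect;
    if (!detectFace(image, &rect)) {                   // steps 101-102: is a face present?
        controlViewpointAroundCurrentPosition(pad);    // step 103
        return;
    }
    FaceObservation face = computeFaceObservation(rect);   // step 104: position and size

    if (!pad.setWatchPointButtonPressed) {             // step 105: setting button pressed?
        controlViewpointAroundCurrentPosition(pad);    // step 103
        return;
    }
    if (!dominantObjectUnderPointer(pad.pointer)) {    // step 106: dominant 3D object there?
        controlViewpointAroundCurrentPosition(pad);    // step 103
        return;
    }
    controlViewpointAroundWatchPoint(face);            // step 107: watch-point-centered control
}
```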
As described above, in the information processing apparatus 100 according to the present embodiment, a viewpoint image is generated for the case where the watch point 505 is viewed from the predetermined virtual viewpoint 510. Then, on the basis of the movement of the operator 550, the virtual viewpoint 510 is displaced with the set watch point 505 as a reference. Therefore, an image display system with high operability based on the movement of the operator 550 can be obtained.
Further, in the present embodiment, the virtual viewpoint 510 is displaced around the watch point 505 as a center. Therefore, an intuitive operation of moving around the watch point 505 and the watch target object in the 3D virtual space 500 in order to observe them can be performed.
Further, in the present embodiment, since the movement of the operator 550 is detected from the captured image 551 of the operator 550, the movement of the operator 550 can be detected with high accuracy. Further, since the virtual viewpoint 510 is displaced on the basis of the position and size of the face 552, an intuitive operation in which the operator views the watch point 505 while moving the face can be performed. Further, in the present embodiment, an operation of setting the watch point 505 on an object 501 or the like on which the operator 550 focuses attention can be performed using the controller.
It should be noted that, in the viewpoint control centered on the current position shown in Fig. 8, the watch point 505 that is set in the viewpoint control centered on the watch point may itself be used as the target point 515. That is, the watch point 505 may be displaced with the virtual viewpoint 510 as a reference, and in this case, the CPU 101 functions as the watch point displacement unit. Further, in the viewpoint control centered on the watch point, the target point 515 set in the viewpoint control centered on the current position may itself be used as the watch point 505.
In this case, for example, a button or the like for switching between the two kinds of viewpoint control operations may be provided on the controller. By operating this button, the CPU 101 performs switching between the viewpoint displacement unit and the watch point displacement unit (the CPU 101 functions as the switch unit). Therefore, switching can be performed between the two kinds of viewpoint control operations, each of which takes one of the viewed points (the watch point 505 and the target point 515) as a reference. As a result, an image display system with high operability is obtained.
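A minimal sketch of this switching, reusing the types of the earlier sketches (the identifiers and the pan mapping applied to the watch point are assumptions, not part of the disclosure): a single controller button toggles whether the face-driven displacement is applied to the virtual viewpoint with the watch point fixed, or to the watch point with the virtual viewpoint fixed.

```cpp
// Sketch: the switch unit toggles which point the detected motion displaces.
enum class ControlTarget { VirtualViewpoint, WatchPoint };

struct ViewpointController {
    ControlTarget target = ControlTarget::VirtualViewpoint;

    // Called when the switching button on the controller is pressed
    // (the CPU 101 acting as the switch unit).
    void onToggleButton() {
        target = (target == ControlTarget::VirtualViewpoint)
                     ? ControlTarget::WatchPoint
                     : ControlTarget::VirtualViewpoint;
    }

    void applyMotion(const FaceObservation& prev, const FaceObservation& cur,
                     SphericalViewpoint& viewpoint, Vec3& watchPoint) {
        if (target == ControlTarget::VirtualViewpoint) {
            updateViewpoint(viewpoint, prev, cur);   // orbit the viewpoint around the watch point
        } else {
            // Displace the watch point with the virtual viewpoint as reference
            // (assumed mapping: face motion pans the watch point in the view plane).
            const double kPanGain = 0.01;
            watchPoint.x += kPanGain * (prev.centerX - cur.centerX);
            watchPoint.z += kPanGain * (prev.centerY - cur.centerY);
        }
    }
};
```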
<Second embodiment>
An information processing apparatus according to a second embodiment of the present disclosure will be described. In the following description, descriptions of structures and operations that are the same as those of the information processing apparatus 100 according to the first embodiment are omitted or simplified.
Fig. 9 is a schematic diagram for explaining the operation of the information processing apparatus according to the second embodiment of the present disclosure. In the present embodiment, the direction of the face 552 of the operator is detected from the image of the face 552 of the operator. On the basis of the detected displacement of the direction of the face 552, the virtual viewpoint is displaced with the watch point as a reference.
For example, the positions, sizes, and the like of parts such as the mouth, nose, and eyes are detected from the face image of the operator. On the basis of these positions, sizes, and the like, the direction of the face 552 is detected. In addition, various face tracking systems can be used for detecting the direction of the face 552 and the like. Further, for example, a gaze analysis technique capable of analyzing the gaze direction may be used as appropriate.
As shown in Fig. 9, in the present embodiment, the direction of the face 552 refers to the direction of the face 552 obtained by rotation about three axes, namely roll, pitch, and yaw, one for each axis. Typically, the virtual viewpoint is displaced according to the rotation about the yaw axis.
For example, the operator turns the face to the left with respect to the object (in the direction indicated by the arrow about the yaw axis shown in Fig. 9). By this operation, a viewpoint image in which the object is seen from its right side is generated. That is, a viewpoint image in which the operator moves around in the direction indicated by arrow R in Fig. 4B is generated.
On the other hand, if the operator turns the face to the right, a viewpoint image in which the object is seen from its left side is generated. That is, a viewpoint image is generated in which the object rotates in the same direction as the direction of the operator's face. Therefore, an image display system with high operability is obtained.
It should be noted that the correspondence (mapping) between the direction of the face 552 and the displacement of the virtual viewpoint may be set as appropriate. For example, the face 552 may rotate about the pitch axis shown in Fig. 9. In this case, a viewpoint image in which the viewpoint moves around the object in the vertical direction while viewing it can be generated.
As in the present embodiment, by displacing the virtual viewpoint on the basis of the displacement of the direction of the face 552, an intuitive operation of viewing the watch point by moving the face 552 can be performed.
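As a hedged illustration of the second embodiment (the FaceDirection structure and the gain values are assumptions; the SphericalViewpoint type is reused from the earlier sketch), the face direction obtained from a face tracking system could drive the same spherical displacement:

```cpp
// Sketch: second embodiment -- displace the virtual viewpoint from the
// direction (roll/pitch/yaw) of the operator's face instead of its position.
struct FaceDirection { double roll, pitch, yaw; };   // radians, from a face tracking system

void updateViewpointFromDirection(SphericalViewpoint& vp,
                                  const FaceDirection& prev, const FaceDirection& cur) {
    const double kYawGain   = 1.5;   // assumed gain
    const double kPitchGain = 1.0;   // assumed gain

    // Turning the face to the left moves the viewpoint so that the object is
    // seen from its right side (direction of arrow R in Fig. 4B).
    vp.azimuth   += kYawGain   * (cur.yaw   - prev.yaw);
    vp.elevation += kPitchGain * (cur.pitch - prev.pitch);
}
```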
<Third embodiment>
Fig. 10 is a schematic diagram for explaining the operation of an information processing apparatus according to a third embodiment of the present disclosure. In the present embodiment, the movement of a hand 652 of the operator is detected from a captured image 651 of the operator. The virtual viewpoint 610 is displaced on the basis of the position, direction, posture, and the like of the operator's hand 652.
For example, if the operator moves the hand 652 to the right with respect to the display screen (moves the hand to the left side in the captured image 651 as shown in Fig. 10A), a viewpoint image that moves around the left side of an object 601 (arrow L) is generated, as shown in Fig. 10B. On the other hand, if the operator moves the hand 652 to the left with respect to the display screen (to the right side in the captured image 651), a viewpoint image that moves around the right side of the object 601 (arrow R) is generated.
That is, in the present embodiment, an intuitive operation such as viewing the surroundings of the object 601 including a watch point 605 while rotating around it with the hand can be performed. It should be noted that the correspondence between the displacement of the hand 652 and the displacement of the virtual viewpoint 610 may be set as appropriate. For example, such a setting may be made that the viewpoint moves around and views the object in the direction of the hand movement.
Further, when the hand 652 moves up and down with respect to the display screen, a viewpoint image that moves around and observes the object 601 in the vertical direction according to that movement may be generated.
As described above, the virtual viewpoint may be displaced on the basis of the movement of the operator's hand rather than the face. In addition, the virtual viewpoint may be displaced on the basis of the position, direction, posture, and the like of other parts of the body or of the whole body.
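The third embodiment maps hand movement onto the same displacement. A minimal sketch under assumed names and gains, reusing the SphericalViewpoint type of the earlier sketch:

```cpp
// Sketch: third embodiment -- drive the same orbit from the detected hand
// position in the captured image 651 instead of the face.
struct HandObservation { double centerX, centerY; };   // hand position in pixels

void updateViewpointFromHand(SphericalViewpoint& vp,
                             const HandObservation& prev, const HandObservation& cur) {
    const double kHandGain = 0.01;   // assumed: radians per pixel of hand motion
    vp.azimuth   += kHandGain * (prev.centerX - cur.centerX);
    vp.elevation += kHandGain * (prev.centerY - cur.centerY);
}
```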
<Modified examples>
Embodiments of the present disclosure are not limited to the embodiments described above, and various modifications can be made.
For example, Fig. 11 is a schematic diagram showing another example of an image display system according to the present disclosure. In this example, detection target objects 770 for detecting the movement of operators 750 are provided to the operators 750. For example, a controller 707 having a detection target object 770A for detecting the hand movement of an operator 750A is held in the hand. Further, a detection target object 770B for detecting the position, direction, and the like of an operator 750B is provided on the head of the operator 750B.
In the information processing apparatus 700 of this modified example, the movements of the operators 750 are detected from captured images of the operators 750 with the detection target objects 770 as references (an image input unit 701 and a motion detection unit 702). The resulting data are output to an information integration unit 703. Further, information on the watch point setting operation using the controller 707 or the like is also output from a controller control unit 704 to the information integration unit 703. The information integration unit 703 performs the displacement processing of the virtual viewpoint described above, and the resulting information is output to a game control unit 705 and a display unit 706.
As described above, the detection target objects 770 for detecting the movements of the operators 750 may be used in a supplementary manner. In this example, light-emitting bodies that emit colored light are used as the detection target objects 770. Therefore, the movement of the operator (the movement of the light-emitting body) can be detected easily when the image analysis is performed. However, the detection target objects are not limited to these, and various objects may be used.
In addition, the movement of the operator may be detected without using a captured image of the operator. For example, an infrared sensor, a distance measuring sensor, and the like may be combined as appropriate to form the detection unit, and the movement of the operator may be detected by these sensors. Further, the position, direction, and the like of the face of the operator may be detected using an eye camera or the like that analyzes the operator's line of sight.
In addition, various techniques, devices, and the like may be used as appropriate for detecting the position and direction of the operator's face, the operator's line of sight, hand movement, body movement, and the like.
Further, in the embodiments described above, the watch point is determined on the basis of an operation input with the controller. However, the watch point may be determined on the basis of the movement of the operator detected by the detection unit. For example, in a case where the line of sight of the operator is detected, the watch point may be determined on the basis of whether an object is present in the gaze direction, the time period for which the line of sight stays there, and the like. Alternatively, the object on which the operator wants to focus attention may be calculated on the basis of the movement of the hand, the position of the face, the time period for which it stays, and the like, and the watch point may be determined accordingly. Alternatively, the watch point may be determined on the basis of the operator's voice.
Further, when the watch point is set, the watch point or the object including the watch point may be placed at the center of the display screen. Therefore, an object of interest or the like can be observed sufficiently. Alternatively, the display position of the watch point or the object may be adjusted as appropriate, for example such that an object placed at the edge of the display screen moves to the center.
For example, the watch point is determined on the basis of the movement of the hand, and the position of the watch point is moved to a predetermined position such as the center of the screen according to the movement of the hand. Then, by moving the face position, an operation of sufficiently observing the surroundings of the watch point or the like can be performed.
In the description above, the viewpoint control centered on the current position is performed on the basis of operations on the controller. However, the viewpoint control centered on the current position may also be performed on the basis of the movement of the operator. That is, both the viewpoint control centered on the watch point and the viewpoint control centered on the current position may be performed on the basis of the movement of the operator. For example, the operator may operate a button or the like to switch the viewpoint control mode as appropriate, thereby switching between the viewpoint control centered on the watch point and the viewpoint control centered on the current position.
The displacement speed of the virtual viewpoint may be controlled according to the movement of the operator. For example, when the virtual viewpoint is displaced on the basis of the displacement of the face position, the displacement speed of the virtual viewpoint may be adjusted according to the face direction. For example, in the case of a very large object, the displacement range of the virtual viewpoint may be limited if only changes in face position are used, which can make it difficult to observe the entire surroundings of the object. In this case, the face direction is changed to increase the displacement speed of the virtual viewpoint. Therefore, the virtual viewpoint can be displaced over a wide range according to the change in face position, which makes it possible to observe the entire surroundings of the object.
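A small sketch of this modification (the gain value is an assumption; FaceDirection is reused from the earlier sketch): the face direction scales the displacement speed so that the same change in face position sweeps the virtual viewpoint over a wider range when the face is also turned.

```cpp
// Sketch: scale the displacement speed of the virtual viewpoint by how far the
// face is turned, so a large object can be circled with small changes in face position.
#include <cmath>

double displacementSpeedScale(const FaceDirection& dir) {
    const double kSpeedGain = 2.0;                     // assumed gain
    return 1.0 + kSpeedGain * std::fabs(dir.yaw);      // turned face -> faster orbit
}
```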
Further, for example, an icon or the like that informs the operator whether the current viewpoint control is the viewpoint control centered on the current position or the viewpoint control centered on the watch point may be displayed on the display screen. For example, in a case where the current viewpoint control is the viewpoint control centered on the current position, an icon such as the one shown in Fig. 8A is displayed in the right corner of the display screen or the like. Further, in a case of the viewpoint control centered on the watch point, an icon such as the one shown in Fig. 6 is displayed on the display screen. This improves operability for the operator.
Further, for example, in a case where the viewpoint control centered on the watch point is selected, text or an image explaining the correspondence between the face position or the like and the arrows of the icon (for example, characters meaning right and left displayed beside the arrows) may be displayed together with the icon shown in Fig. 6.
Combinations of the embodiments and modified examples described above may also be used as embodiments of the present disclosure.
It should be noted that the present disclosure may also adopt the following configurations.
(1) An information processing apparatus, including:
an image generation unit configured to generate a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint;
a detection unit configured to detect a movement of an operator; and
a viewpoint displacement unit configured to displace the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
(2) The information processing apparatus according to item (1), in which
the viewpoint displacement unit is configured to displace the virtual viewpoint around the set watch point as a center.
(3) The information processing apparatus according to item (1) or (2), in which
the detection unit is configured to detect the movement of the operator from a captured image of the operator.
(4) The information processing apparatus according to any one of items (1) to (3), in which
the detection unit is configured to detect a position of the face of the operator, and
the viewpoint displacement unit is configured to displace the virtual viewpoint on the basis of a displacement of the position of the face.
(5) The information processing apparatus according to any one of items (1) to (4), in which
the detection unit is configured to detect a direction of the face of the operator, and
the viewpoint displacement unit is configured to displace the virtual viewpoint on the basis of a displacement of the direction of the face.
(6) The information processing apparatus according to any one of items (1) to (5), in which
the detection unit is configured to detect a movement of a hand of the operator, and
the viewpoint displacement unit is configured to displace the virtual viewpoint on the basis of the movement of the hand.
(7) The information processing apparatus according to any one of items (1) to (6), further including a setting unit configured to set the watch point.
(8) The information processing apparatus according to item (7), further including
an interface unit configured to receive an operation by the operator, in which
the setting unit sets the watch point on the basis of the operation received by the interface unit.
(9) The information processing apparatus according to item (7) or (8), in which
the setting unit sets the watch point on the basis of the detected movement of the operator.
(10) The information processing apparatus according to any one of items (1) to (9), further including:
a watch point displacement unit configured to displace the watch point with the virtual viewpoint as a reference; and
a switch unit configured to switch between the viewpoint displacement unit and the watch point displacement unit.
(11) An information processing method, including:
generating a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint;
detecting a movement of an operator; and
displacing the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
(12) A program that causes a computer to execute the steps of:
generating a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint;
detecting a movement of an operator; and
displacing the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
(13) The program according to item (12), in which
the displacing step displaces the virtual viewpoint around the set watch point as a center.
(14) The program according to item (12) or (13), in which
the detecting step detects the movement of the operator from a captured image of the operator.
(15) The program according to any one of items (12) to (14), in which
the detecting step detects a position of the face of the operator, and
the displacing step displaces the virtual viewpoint on the basis of a displacement of the position of the face.
(16) The program according to any one of items (12) to (15), in which
the detecting step detects a direction of the face of the operator, and
the displacing step displaces the virtual viewpoint on the basis of a displacement of the direction of the face.
(17) The program according to any one of items (12) to (16), in which
the detecting step detects a movement of a hand of the operator, and
the displacing step displaces the virtual viewpoint on the basis of the movement of the hand.
(18) The program according to any one of items (12) to (17), further causing the computer to execute the step of
setting the watch point.
(19) The program according to item (18), further causing the computer to execute the step of
receiving an operation by the operator, in which
the setting step sets the watch point on the basis of the received operation.
(20) The program according to item (18) or (19), in which
the setting step sets the watch point on the basis of the detected movement of the operator.
The present disclosure contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2011-165129 filed in the Japan Patent Office on July 28, 2011, the entire content of which is hereby incorporated by reference.
It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

Claims (22)

1. An information processing apparatus, comprising:
an image generation unit configured to generate a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint;
a detection unit configured to detect a movement of an operator; and
a viewpoint displacement unit configured to displace the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
2. The information processing apparatus according to claim 1, wherein
the viewpoint displacement unit is configured to displace the virtual viewpoint around the set watch point as a center.
3. The information processing apparatus according to claim 2, wherein the viewpoint displacement unit is configured to displace the virtual viewpoint on a sphere whose center is the set watch point.
4. The information processing apparatus according to claim 3, wherein the detection unit is configured to detect a size of the face of the operator, and the radius of the sphere increases according to a decrease in the size of the face and decreases according to an increase in the size of the face.
5. The information processing apparatus according to claim 1, wherein
the detection unit is configured to detect the movement of the operator from a captured image of the operator.
6. The information processing apparatus according to claim 1, wherein
the detection unit is configured to detect a position of the face of the operator, and
the viewpoint displacement unit is configured to displace the virtual viewpoint on the basis of a displacement of the position of the face.
7. The information processing apparatus according to claim 1, wherein
the detection unit is configured to detect a direction of the face of the operator, and
the viewpoint displacement unit is configured to displace the virtual viewpoint on the basis of a displacement of the direction of the face.
8. The information processing apparatus according to claim 1, wherein
the detection unit is configured to detect a movement of a hand of the operator, and
the viewpoint displacement unit is configured to displace the virtual viewpoint on the basis of the movement of the hand.
9. The information processing apparatus according to claim 1, further comprising
a setting unit configured to set the watch point.
10. The information processing apparatus according to claim 9, further comprising
an interface unit configured to receive an operation by the operator, wherein
the setting unit sets the watch point on the basis of the operation received by the interface unit.
11. The information processing apparatus according to claim 9, wherein
the setting unit sets the watch point on the basis of the detected movement of the operator.
12. The information processing apparatus according to claim 1, further comprising:
a watch point displacement unit configured to displace the watch point with the virtual viewpoint as a reference; and
a switch unit configured to switch between the viewpoint displacement unit and the watch point displacement unit.
13. An information processing method, comprising:
a generation step of generating a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint;
a detection step of detecting a movement of an operator; and
a displacement step of displacing the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
14. The information processing method according to claim 13, wherein
the displacement step displaces the virtual viewpoint around the set watch point as a center.
15. The information processing method according to claim 13, wherein
the detection step detects the movement of the operator from a captured image of the operator.
16. The information processing method according to claim 13, wherein
the detection step detects a position of the face of the operator, and
the displacement step displaces the virtual viewpoint on the basis of a displacement of the position of the face.
17. The information processing method according to claim 13, wherein
the detection step detects a direction of the face of the operator, and
the displacement step displaces the virtual viewpoint on the basis of a displacement of the direction of the face.
18. The information processing method according to claim 13, wherein
the detection step detects a movement of a hand of the operator, and
the displacement step displaces the virtual viewpoint on the basis of the movement of the hand.
19. The information processing method according to claim 13, further comprising a setting step of:
setting the watch point.
20. The information processing method according to claim 19, further comprising the step of:
receiving an operation by the operator, wherein
the setting step sets the watch point on the basis of the received operation.
21. The information processing method according to claim 19, wherein
the setting step sets the watch point on the basis of the detected movement of the operator.
22. A program that causes a computer to execute the steps of:
generating a viewpoint image in a case where a watch point set in a three-dimensional virtual space is viewed from a predetermined virtual viewpoint;
detecting a movement of an operator; and
displacing the virtual viewpoint with the set watch point as a reference on the basis of the detected movement of the operator.
CN201210254460.8A 2011-07-28 2012-07-20 Information processor and information processing method Active CN103150009B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-165129 2011-07-28
JP2011165129A JP5839220B2 (en) 2011-07-28 2011-07-28 Information processing apparatus, information processing method, and program

Publications (2)

Publication Number Publication Date
CN103150009A true CN103150009A (en) 2013-06-12
CN103150009B CN103150009B (en) 2017-03-01

Family

ID=47596842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210254460.8A Active CN103150009B (en) 2011-07-28 2012-07-20 Information processor and information processing method

Country Status (3)

Country Link
US (1) US9342925B2 (en)
JP (1) JP5839220B2 (en)
CN (1) CN103150009B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108028964A (en) * 2015-09-14 2018-05-11 索尼公司 Information processor and information processing method
CN110651304A (en) * 2017-05-23 2020-01-03 索尼公司 Information processing apparatus, information processing method, and program
CN110691230A (en) * 2018-07-04 2020-01-14 佳能株式会社 Information processing apparatus, control method thereof, and computer-readable storage medium
CN110891168A (en) * 2018-09-07 2020-03-17 佳能株式会社 Information processing apparatus, information processing method, and storage medium
TWI736214B (en) * 2019-04-17 2021-08-11 日商樂天集團股份有限公司 Display control device, display control method, program and non-temporary computer readable information recording medium

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6499583B2 (en) * 2013-09-24 2019-04-10 シャープ株式会社 Image processing apparatus and image display apparatus
US10099134B1 (en) * 2014-12-16 2018-10-16 Kabam, Inc. System and method to better engage passive users of a virtual space by providing panoramic point of views in real time
JP6411244B2 (en) * 2015-03-05 2018-10-24 日本電信電話株式会社 Video presentation method and video presentation device
WO2016157523A1 (en) * 2015-04-03 2016-10-06 株式会社SR laboratories Display terminal and information recording medium
KR101807513B1 (en) * 2015-05-13 2017-12-12 한국전자통신연구원 The analysis apparatus and method of user intention using video information in three dimensional space
JP6250592B2 (en) 2015-06-02 2017-12-20 株式会社ソニー・インタラクティブエンタテインメント Head mounted display, information processing apparatus, display control method, and program
JP6620163B2 (en) * 2015-10-15 2019-12-11 株式会社ソニー・インタラクティブエンタテインメント Image processing apparatus, image processing method, and program
JP2017041229A (en) * 2016-06-08 2017-02-23 株式会社コロプラ Method and program for controlling head-mounted display system
CN110892455A (en) * 2017-07-14 2020-03-17 索尼公司 Image processing apparatus, image processing method for image processing apparatus, and program
US10567649B2 (en) * 2017-07-31 2020-02-18 Facebook, Inc. Parallax viewer system for 3D content
JP7335335B2 (en) * 2019-06-28 2023-08-29 富士フイルム株式会社 Information processing device, information processing method, and program
JP7287172B2 (en) * 2019-08-06 2023-06-06 凸版印刷株式会社 Display control device, display control method, and program
CN112596840A (en) * 2020-12-24 2021-04-02 北京城市网邻信息技术有限公司 Information processing method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6226008B1 (en) * 1997-09-04 2001-05-01 Kabushiki Kaisha Sega Enterprises Image processing device
WO2007062478A1 (en) * 2005-11-30 2007-06-07 Seeing Machines Pty Ltd Visual tracking of eye glasses in visual head and eye tracking systems
CN100542645C (en) * 2004-03-31 2009-09-23 世嘉股份有限公司 Video generation device and method for displaying image
CN101866214A (en) * 2009-04-14 2010-10-20 索尼公司 Messaging device, information processing method and message processing program

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3521187B2 (en) 1996-10-18 2004-04-19 株式会社東芝 Solid-state imaging device
JPH11235466A (en) 1997-12-18 1999-08-31 Sega Enterp Ltd Computer game device
JP3602360B2 (en) 1999-02-23 2004-12-15 三菱電機株式会社 Three-dimensional landscape display device and display method
WO2002069276A1 (en) * 2001-02-23 2002-09-06 Fujitsu Limited Display control device, information terminal device equipped with the display control device, and view point position control device
JP4007899B2 (en) * 2002-11-07 2007-11-14 オリンパス株式会社 Motion detection device
JP4242318B2 (en) 2004-04-26 2009-03-25 任天堂株式会社 3D image generation apparatus and 3D image generation program
EP2457627B1 (en) 2008-06-30 2014-06-25 Sony Computer Entertainment Inc. Portable type game device and method for controlling portable type game device
JP2010122879A (en) 2008-11-19 2010-06-03 Sony Ericsson Mobile Communications Ab Terminal device, display control method, and display control program
US8564502B2 (en) 2009-04-02 2013-10-22 GM Global Technology Operations LLC Distortion and perspective correction of vector projection display
US8704879B1 (en) * 2010-08-31 2014-04-22 Nintendo Co., Ltd. Eye tracking enabling 3D viewing on conventional 2D display

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6226008B1 (en) * 1997-09-04 2001-05-01 Kabushiki Kaisha Sega Enterprises Image processing device
CN100542645C (en) * 2004-03-31 2009-09-23 世嘉股份有限公司 Video generation device and method for displaying image
WO2007062478A1 (en) * 2005-11-30 2007-06-07 Seeing Machines Pty Ltd Visual tracking of eye glasses in visual head and eye tracking systems
CN101866214A (en) * 2009-04-14 2010-10-20 索尼公司 Messaging device, information processing method and message processing program

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108028964A (en) * 2015-09-14 2018-05-11 索尼公司 Information processor and information processing method
CN110651304A (en) * 2017-05-23 2020-01-03 索尼公司 Information processing apparatus, information processing method, and program
CN110691230A (en) * 2018-07-04 2020-01-14 佳能株式会社 Information processing apparatus, control method thereof, and computer-readable storage medium
CN110891168A (en) * 2018-09-07 2020-03-17 佳能株式会社 Information processing apparatus, information processing method, and storage medium
US11354849B2 (en) 2018-09-07 2022-06-07 Canon Kabushiki Kaisha Information processing apparatus, information processing method and storage medium
TWI736214B (en) * 2019-04-17 2021-08-11 日商樂天集團股份有限公司 Display control device, display control method, program and non-temporary computer readable information recording medium

Also Published As

Publication number Publication date
JP2013029958A (en) 2013-02-07
JP5839220B2 (en) 2016-01-06
CN103150009B (en) 2017-03-01
US20130027393A1 (en) 2013-01-31
US9342925B2 (en) 2016-05-17

Similar Documents

Publication Publication Date Title
CN103150009A (en) Information processing apparatus, information processing method, and program
US9830004B2 (en) Display control apparatus, display control method, and display control program
US10001844B2 (en) Information processing apparatus information processing method and storage medium
CN103139463B (en) Method, system and mobile device for augmenting reality
CN108469899B (en) Method of identifying an aiming point or area in a viewing space of a wearable display device
CN109040600B (en) Mobile device, system and method for shooting and browsing panoramic scene
TWI540461B (en) Gesture input method and system
CN102955568B (en) Input unit
US10037614B2 (en) Minimizing variations in camera height to estimate distance to objects
KR101340797B1 (en) Portable Apparatus and Method for Displaying 3D Object
JP5709440B2 (en) Information processing apparatus and information processing method
US10203837B2 (en) Multi-depth-interval refocusing method and apparatus and electronic device
CN106808473A (en) Information processor and information processing method
CN104508600A (en) Three-dimensional user-interface device, and three-dimensional operation method
JPWO2011080882A1 (en) Action space presentation device, action space presentation method, and program
JPWO2014141504A1 (en) 3D user interface device and 3D operation processing method
US9727229B2 (en) Stereoscopic display device, method for accepting instruction, and non-transitory computer-readable medium for recording program
US20120268493A1 (en) Information processing system for augmented reality
CN103608761A (en) Input device, input method and recording medium
US9122346B2 (en) Methods for input-output calibration and image rendering
US20130187852A1 (en) Three-dimensional image processing apparatus, three-dimensional image processing method, and program
CN105867597B (en) 3D interaction method and 3D display equipment
KR20120055434A (en) Display system and display method thereof
WO2014033722A1 (en) Computer vision stereoscopic tracking of a hand
US9465483B2 (en) Methods for input-output calibration and image rendering

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant