CN105334951A - Three-dimensional image control method and apparatus and electronic device - Google Patents


Info

Publication number
CN105334951A
CN105334951A (application CN201410301004.3A; granted publication CN105334951B)
Authority
CN
China
Prior art keywords
user
gestures
images
image
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410301004.3A
Other languages
Chinese (zh)
Other versions
CN105334951B (en
Inventor
李凡智
安岩
杨大业
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201410301004.3A priority Critical patent/CN105334951B/en
Priority claimed from CN201410301004.3A external-priority patent/CN105334951B/en
Publication of CN105334951A publication Critical patent/CN105334951A/en
Application granted granted Critical
Publication of CN105334951B publication Critical patent/CN105334951B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the invention provide a three-dimensional image control method and apparatus and an electronic device. The method comprises: playing a three-dimensional projection image; acquiring a user gesture image through a first image acquisition apparatus; identifying a control instruction corresponding to the user gesture image; and controlling the three-dimensional projection image to be subjected to a corresponding operation according to the control instruction. An embodiment of the invention provides a mechanism capable of realizing interaction between users and three-dimensional images, so that the control on the displayed three-dimensional images is realized.

Description

Three-dimensional image control method, apparatus, and electronic device
Technical field
The present invention relates to the technical field of image processing, and in particular to a three-dimensional image control method, apparatus, and electronic device.
Background art
Three-dimensional image projection (e.g., holographic projection), also called virtual imaging technology, is a technique that uses the principles of interference and diffraction to record and reconstruct a real three-dimensional image of an object.
Three-dimensional image projection is mainly realized by three-dimensional projection equipment, which currently has two main structures. The first comprises an electronic device (such as a tablet computer, mobile phone, or notebook, mainly used to play the three-dimensional projection image), a light source (which can be the camera of the electronic device), and a mirror plane; the three-dimensional projection image played by the electronic device is emitted as a corresponding light signal by the light source, and the light signal, after being processed by the mirror plane, generates the three-dimensional projection image. The second comprises an electronic device, a light source, and a diffraction film (generally laid on the surface of the electronic device); here the light signal emitted by the light source, after being processed by the diffraction film, generates the three-dimensional projection image.
The present inventors found through research that current three-dimensional projection only displays an image in three dimensions; it cannot give the user control over the displayed three-dimensional image, and lacks an interaction mechanism between the user and the three-dimensional image.
Summary of the invention
In view of this, embodiments of the present invention provide a three-dimensional image control method, apparatus, and electronic device, to solve the prior-art problem that the user cannot control the displayed three-dimensional image.
To achieve the above object, the embodiments of the present invention provide the following technical solutions:
A three-dimensional image control method, applied to an electronic device provided with a first image acquisition apparatus, the electronic device being used to play a three-dimensional projection image, the method comprising:
playing a three-dimensional projection image;
acquiring a user gesture image through the first image acquisition apparatus;
identifying a control instruction corresponding to the user gesture image;
controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction.
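The four claimed steps form a capture–identify–control loop. A minimal sketch follows; the gesture classifier, the `GESTURE_TO_INSTRUCTION` table, and the projection backend are all stand-in assumptions for illustration, since the patent does not specify an implementation:

```python
# Hypothetical sketch of the claimed capture -> identify -> control loop.

GESTURE_TO_INSTRUCTION = {          # preset gesture -> control instruction map
    "pinch": "shrink",
    "spread": "enlarge",
    "swipe": "rotate",
}

def classify_gesture(frame):
    # Placeholder classifier: a real system would run gesture recognition
    # on the camera frame; here the "frame" is already a gesture label.
    return frame

def control_loop(frames, apply_operation):
    """For each captured frame, identify the instruction and apply it."""
    applied = []
    for frame in frames:
        gesture = classify_gesture(frame)
        instruction = GESTURE_TO_INSTRUCTION.get(gesture)
        if instruction is not None:       # unrecognized gestures are ignored
            apply_operation(instruction)
            applied.append(instruction)
    return applied

ops = []
print(control_loop(["pinch", "unknown", "swipe"], ops.append))
# ['shrink', 'rotate']
```

The loop deliberately ignores gestures with no preset instruction, which matches the claim's structure: only gestures covered by the predetermined correspondence produce an operation.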
Wherein, controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction comprises:
determining the three-dimensional image region corresponding to the gesture in the user gesture image;
controlling the three-dimensional image region to perform the corresponding operation according to the control instruction.
Wherein, determining the three-dimensional image region corresponding to the gesture in the user gesture image comprises:
determining a first 3D coordinate corresponding to the gesture in the user gesture image, the first 3D coordinate being a coordinate in a three-dimensional coordinate system established with the first image acquisition apparatus as the origin;
determining, according to a preset correspondence between the three-dimensional coordinate system and the 3D interface in which the three-dimensional image is projected, a second 3D coordinate corresponding to the first 3D coordinate, the second 3D coordinate being a coordinate in the 3D interface;
determining the three-dimensional image region corresponding to the second 3D coordinate as the three-dimensional image region corresponding to the gesture.
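The first-to-second coordinate mapping above only requires some preset correspondence between the camera-origin coordinate system and the projection's 3D interface. A per-axis scale-and-offset transform is one simple choice; the transform parameters, region names, and bounding boxes below are all illustrative assumptions:

```python
# Hypothetical mapping of a gesture coordinate from the camera's frame
# (origin at the first image acquisition apparatus) into the 3D interface,
# followed by a bounding-box lookup of the image region it falls in.

def map_to_interface(p_camera, scale, offset):
    """Apply a preset per-axis correspondence: p_interface = s*p + t."""
    return tuple(s * c + t for c, s, t in zip(p_camera, scale, offset))

def region_for(p_interface, regions):
    """Return the named region whose axis-aligned box contains the point."""
    for name, (lo, hi) in regions.items():
        if all(l <= v <= h for v, l, h in zip(p_interface, lo, hi)):
            return name
    return None

regions = {"hand": ((0, 0, 0), (5, 5, 5)), "foot": ((0, -10, 0), (5, -5, 5))}
p = map_to_interface((1.0, 2.0, 3.0), scale=(1, 1, 1), offset=(1, 1, 1))
print(p, region_for(p, regions))  # (2.0, 3.0, 4.0) hand
```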
Wherein, identifying the control instruction corresponding to the user gesture image comprises:
obtaining an image acquired by a second image acquisition apparatus arranged symmetrically to the first image acquisition apparatus;
determining, from the image acquired by the second image acquisition apparatus and the user gesture image acquired by the first image acquisition apparatus, the relative position between the screen of the electronic device and the mirror plane that processes the light signal corresponding to the three-dimensional projection image played by the electronic device;
correcting the user gesture image acquired by the first image acquisition apparatus according to the relative position;
identifying the user gesture corresponding to the corrected user gesture image, and determining the control instruction corresponding to the identified user gesture.
Wherein, identifying the control instruction corresponding to the user gesture image comprises:
identifying the user gesture corresponding to the user gesture image;
determining, according to a predetermined correspondence between user gestures and control instructions, the control instruction corresponding to the identified user gesture.
Wherein, acquiring the user gesture image through the first image acquisition apparatus comprises:
obtaining a first user gesture image acquired by the first image acquisition apparatus;
reducing the first user gesture image according to a predetermined ratio to form a second user gesture image;
processing the second user gesture image with a local sharpness complexity (LDC) algorithm to crop out a third user gesture image;
determining the third user gesture image as the finally acquired user gesture image.
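The downscale-then-crop pipeline above can be sketched as below. The patent names a "local sharpness complexity (LDC)" algorithm without defining it, so the local-variance score here is a stand-in assumption, and the image is a plain nested list rather than a camera frame:

```python
# Hypothetical downscale-then-crop pipeline. "LDC" is approximated by a
# simple local-variance score; the real algorithm is not specified.

def downscale(img, ratio):
    """Keep every `ratio`-th pixel in both dimensions (nearest neighbour)."""
    return [row[::ratio] for row in img[::ratio]]

def variance(block):
    vals = [v for row in block for v in row]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def crop_sharpest(img, size):
    """Crop the size x size window with the highest local variance."""
    best, best_score = None, -1.0
    for i in range(len(img) - size + 1):
        for j in range(len(img[0]) - size + 1):
            block = [row[j:j + size] for row in img[i:i + size]]
            score = variance(block)
            if score > best_score:
                best, best_score = block, score
    return best

img = [[0, 0, 0, 0], [0, 9, 1, 0], [0, 1, 9, 0], [0, 0, 0, 0]]
small = downscale(img, 1)          # ratio 1 keeps the toy image unchanged
print(crop_sharpest(small, 2))     # [[9, 1], [1, 9]] - the detailed centre
```

The design point the claim makes is ordering: shrinking first means the (presumably expensive) sharpness scan runs over fewer pixels.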
Wherein, acquiring the user gesture image through the first image acquisition apparatus comprises:
obtaining a first user gesture image acquired by the first image acquisition apparatus;
determining a selected region in the first user gesture image with a sliding window, and forming a fourth user gesture image corresponding to the selected region;
processing the fourth user gesture image with the local sharpness complexity (LDC) algorithm to crop out a fifth user gesture image;
determining the fifth user gesture image as the finally acquired user gesture image.
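The sliding-window variant can be sketched as follows. The patent states only that a sliding window determines the selected region; the window score used here (count of non-background pixels) is an assumption for illustration:

```python
# Hypothetical sliding-window region selection. The score - the count of
# non-background pixels in the window - is an illustrative assumption.

def select_window(img, h, w):
    """Return (top, left) of the h x w window containing most foreground."""
    best, best_count = (0, 0), -1
    for i in range(len(img) - h + 1):
        for j in range(len(img[0]) - w + 1):
            count = sum(1 for row in img[i:i + h]
                          for v in row[j:j + w] if v != 0)
            if count > best_count:
                best, best_count = (i, j), count
    return best

img = [
    [0, 0, 0, 0, 0],
    [0, 0, 3, 4, 0],
    [0, 0, 5, 6, 0],
    [0, 0, 0, 0, 0],
]
top, left = select_window(img, 2, 2)
fourth = [row[left:left + 2] for row in img[top:top + 2]]
print(top, left, fourth)  # 1 2 [[3, 4], [5, 6]]
```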
Wherein, the method further comprises:
acquiring a user face image through the first image acquisition apparatus;
calculating the position of the user's face relative to the screen of the electronic device according to a face recognition algorithm;
controlling the projection mode of the three-dimensional image according to the position.
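The patent claims only that face position controls the projection mode. Steering the projection's yaw toward the viewer is one plausible reading, sketched below; the geometry and all numbers are illustrative assumptions:

```python
# Hypothetical adjustment of the projection from the user's face position:
# point the projection yaw at the viewer. Purely an illustrative reading.

import math

def projection_yaw(face_x, face_z):
    """Yaw angle (degrees) that points the projection at the face.

    face_x: lateral offset of the face from the screen centre
    face_z: distance of the face from the screen plane
    """
    return math.degrees(math.atan2(face_x, face_z))

print(round(projection_yaw(0.0, 1.0), 1))   # 0.0  - viewer dead ahead
print(round(projection_yaw(0.5, 0.5), 1))   # 45.0 - viewer off to the side
```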
The embodiments of the present invention also provide a three-dimensional image control apparatus, applied to an electronic device provided with a first image acquisition apparatus, the electronic device being used to play a three-dimensional projection image, the apparatus comprising:
a playing module, for playing a three-dimensional projection image;
an acquisition module, for acquiring a user gesture image through the first image acquisition apparatus;
an identification module, for identifying a control instruction corresponding to the user gesture image;
a control module, for controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction.
Wherein, the control module comprises:
a region determination unit, for determining the three-dimensional image region corresponding to the gesture in the user gesture image;
an operation control unit, for controlling the three-dimensional image region to perform the corresponding operation according to the control instruction.
Wherein, the region determination unit comprises:
a first 3D coordinate determination subunit, for determining a first 3D coordinate corresponding to the gesture in the user gesture image, the first 3D coordinate being a coordinate in the three-dimensional coordinate system established with the first image acquisition apparatus as the origin;
a second 3D coordinate determination subunit, for determining, according to the preset correspondence between the three-dimensional coordinate system and the 3D interface in which the three-dimensional image is projected, a second 3D coordinate corresponding to the first 3D coordinate, the second 3D coordinate being a coordinate in the 3D interface;
a region result determination subunit, for determining the three-dimensional image region corresponding to the second 3D coordinate as the three-dimensional image region corresponding to the gesture.
Wherein, the identification module comprises:
an image obtaining unit, for obtaining an image acquired by a second image acquisition apparatus arranged symmetrically to the first image acquisition apparatus;
a relative position determination unit, for determining, from the image acquired by the second image acquisition apparatus and the user gesture image acquired by the first image acquisition apparatus, the relative position between the screen of the electronic device and the mirror plane that processes the light signal corresponding to the three-dimensional projection image played by the electronic device;
a correction unit, for correcting the user gesture image acquired by the first image acquisition apparatus according to the relative position;
a recognition unit, for identifying the user gesture corresponding to the corrected user gesture image and determining the control instruction corresponding to the identified user gesture.
Wherein, the identification module comprises:
a gesture recognition unit, for identifying the user gesture corresponding to the user gesture image;
an instruction determination unit, for determining, according to a predetermined correspondence between user gestures and control instructions, the control instruction corresponding to the identified user gesture.
Wherein, the acquisition module comprises:
a first obtaining unit, for obtaining a first user gesture image acquired by the first image acquisition apparatus;
a reduction unit, for reducing the first user gesture image according to a predetermined ratio to form a second user gesture image;
a first processing unit, for processing the second user gesture image with the local sharpness complexity (LDC) algorithm to crop out a third user gesture image;
a first final determination unit, for determining the third user gesture image as the finally acquired user gesture image.
Wherein, the acquisition module comprises:
a second obtaining unit, for obtaining a first user gesture image acquired by the first image acquisition apparatus;
a selection unit, for determining a selected region in the first user gesture image with a sliding window and forming a fourth user gesture image corresponding to the selected region;
a second processing unit, for processing the fourth user gesture image with the local sharpness complexity (LDC) algorithm to crop out a fifth user gesture image;
a second final determination unit, for determining the fifth user gesture image as the finally acquired user gesture image.
Wherein, the apparatus further comprises:
a face image acquisition module, for acquiring a user face image through the first image acquisition apparatus;
a position calculation module, for calculating the position of the user's face relative to the screen of the electronic device according to a face recognition algorithm;
a projection control module, for controlling the projection mode of the three-dimensional image according to the position.
The embodiments of the present invention also provide an electronic device comprising the three-dimensional image control apparatus described above.
Based on the above technical solution, in the display control method provided by the embodiments of the present invention, when the electronic device plays a three-dimensional projection image, a user gesture image is acquired through the first image acquisition apparatus arranged on the electronic device, the control instruction corresponding to the user gesture image is identified, and the three-dimensional projection image is then controlled to perform the corresponding operation according to the control instruction, realizing control over the displayed three-dimensional image. The three-dimensional image control method provided by the embodiments of the present invention thus provides a mechanism for interaction between the user and the three-dimensional image.
Brief description of the drawings
To more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the three-dimensional image control method provided by an embodiment of the present invention;
Fig. 2 is another flowchart of the three-dimensional image control method provided by an embodiment of the present invention;
Fig. 3 is a further flowchart of the three-dimensional image control method provided by an embodiment of the present invention;
Fig. 4 is yet another flowchart of the three-dimensional image control method provided by an embodiment of the present invention;
Fig. 5 is a schematic diagram of the 3D coordinate corresponding to a gesture in the three-dimensional coordinate system, provided by an embodiment of the present invention;
Fig. 6 is a flowchart of the method for correcting a user gesture image provided by an embodiment of the present invention;
Fig. 7 is a flowchart of the method for acquiring a user gesture image provided by an embodiment of the present invention;
Fig. 8 is a schematic diagram of the image correction process provided by an embodiment of the present invention;
Fig. 9 is a flowchart of another method for acquiring a user gesture image provided by an embodiment of the present invention;
Fig. 10 is another schematic diagram of the image correction process provided by an embodiment of the present invention;
Fig. 11 is a flowchart of the method for adjusting the projection mode of the three-dimensional image provided by an embodiment of the present invention;
Fig. 12 is a structural block diagram of the three-dimensional image control apparatus provided by an embodiment of the present invention;
Fig. 13 is a structural block diagram of the control module provided by an embodiment of the present invention;
Fig. 14 is a structural block diagram of the region determination unit provided by an embodiment of the present invention;
Fig. 15 is a structural block diagram of the identification module provided by an embodiment of the present invention;
Fig. 16 is another structural block diagram of the identification module provided by an embodiment of the present invention;
Fig. 17 is a structural block diagram of the acquisition module provided by an embodiment of the present invention;
Fig. 18 is another structural block diagram of the acquisition module provided by an embodiment of the present invention;
Fig. 19 is another structural block diagram of the three-dimensional image control apparatus provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flowchart of the three-dimensional image control method provided by an embodiment of the present invention. The method can be applied to an electronic device that plays a three-dimensional projection image; the electronic device can be provided with at least one image acquisition apparatus (such as a camera), including a first image acquisition apparatus. Referring to Fig. 1, the method can comprise:
Step S100: playing a three-dimensional projection image;
When three-dimensional image projection is performed, the light signal corresponding to the three-dimensional projection image played by the electronic device is processed by a mirror plane or a diffraction film to generate the three-dimensional image.
Step S110: acquiring a user gesture image through the first image acquisition apparatus;
Optionally, the first image acquisition apparatus can be arranged at an edge of the electronic device (such as the upper, lower, left, or right edge), where it can detect the gesture operation of the user and acquire the user gesture image. The embodiment of the present invention does not restrict the installation position of the first image acquisition apparatus, as long as it can detect the gesture operation of the user and acquire the user gesture image.
When the light signal corresponding to the three-dimensional projection image played by the electronic device is processed by a mirror plane, the first image acquisition apparatus is preferably oriented toward a face of the mirror, so that the user gesture image is acquired when the user's finger acts on the mirror plane or on the screen of the electronic device.
Step S120: identifying a control instruction corresponding to the user gesture image;
Optionally, the embodiment of the present invention may preset a correspondence between user gestures and control instructions, so that when the electronic device acquires a user gesture image, the user gesture corresponding to the image can be identified and the control instruction corresponding to that gesture determined from the correspondence.
Optionally, the embodiment of the present invention may preset various control instructions, such as shrinking the projected three-dimensional image, enlarging it, or rotating it. The specific preset control instructions can be set by the user and changed according to the actual situation; the embodiment of the present invention does not restrict them. Likewise, the user gestures corresponding to the control instructions can be set by the user and changed according to the actual situation, without restriction.
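Instructions of the kind named above (shrink, enlarge, rotate) can be modelled as operations on a small transform state. The sketch below is a hypothetical executor; the scale factors and rotation step are illustrative, and a real player would drive the render pipeline instead:

```python
# Hypothetical executor for preset instructions. The projection is modelled
# as a simple transform state; all numeric factors are illustrative.

class ProjectionState:
    def __init__(self):
        self.scale = 1.0
        self.rotation_deg = 0.0

    def execute(self, instruction):
        if instruction == "shrink":
            self.scale *= 0.9
        elif instruction == "enlarge":
            self.scale *= 1.1
        elif instruction == "rotate":
            self.rotation_deg = (self.rotation_deg + 15.0) % 360.0
        else:
            raise ValueError(f"unknown instruction: {instruction}")

state = ProjectionState()
for ins in ["enlarge", "enlarge", "rotate"]:
    state.execute(ins)
print(round(state.scale, 2), state.rotation_deg)  # 1.21 15.0
```

Keeping the instruction set open-ended in this way matches the passage's point that the concrete instructions and their gestures are user-configurable.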
Step S130: controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction.
After the control instruction corresponding to the user gesture image is determined, the embodiment of the present invention can execute the determined control instruction, controlling the three-dimensional projection image to perform the operation corresponding to it.
In the display control method provided by the embodiment of the present invention, when the electronic device plays a three-dimensional projection image, a user gesture image is acquired through the first image acquisition apparatus arranged on the electronic device, the control instruction corresponding to the user gesture image is identified, and the three-dimensional projection image is then controlled to perform the corresponding operation according to that instruction, realizing control over the displayed three-dimensional image. The method thus provides a mechanism for interaction between the user and the three-dimensional image.
Optionally, when controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction, the embodiment of the present invention can control the projection image as a whole, for example rotating or shrinking the whole image; it can also control a particular three-dimensional image region of the projection, for example making a figure's hand or foot lift. Optionally, the control instruction for lifting a hand and for lifting a foot can be the same; only the three-dimensional image region the instruction controls differs.
When global control is performed on the three-dimensional projection image, the embodiment of the present invention acquires a user gesture image through the first image acquisition apparatus, determines the corresponding control instruction, and then controls the projection image to perform the operation corresponding to that instruction, realizing control over the displayed three-dimensional image.
Compared with global control, controlling a particular three-dimensional image region of the projection additionally requires locating that region. Correspondingly, Fig. 2 shows another flowchart of the three-dimensional image control method provided by an embodiment of the present invention. Referring to Fig. 2, the method can comprise:
Step S200: playing a three-dimensional projection image;
Step S210: acquiring a user gesture image through the first image acquisition apparatus;
Step S220: identifying a control instruction corresponding to the user gesture image;
Step S230: determining the three-dimensional image region corresponding to the gesture in the user gesture image;
A three-dimensional image region can be a part of the three-dimensional projection image. For example, if the projection shows a dancing person, the regions can be the person's hands, feet, head, and so on. This example is given only to clarify the relation between a three-dimensional image region and the projection image, and should not limit the scope.
In the embodiment of the present invention, the three-dimensional image region corresponding to a gesture can be understood as the region the gesture points at.
Optionally, the embodiment of the present invention determines the three-dimensional image region through a 3D coordinate in the 3D interface in which the three-dimensional image is projected.
Step S240: controlling the three-dimensional image region to perform the corresponding operation according to the control instruction.
Optionally, in the scheme for controlling a three-dimensional image region, after acquiring the user gesture image through the first image acquisition apparatus, besides identifying the corresponding control instruction, the embodiment of the present invention also needs to determine the three-dimensional image region corresponding to the gesture, so that that region can be controlled to perform the operation corresponding to the determined instruction.
Optionally, different three-dimensional image regions of the projection can share the same control instruction: the instruction may correspond only to the user's gesture, independent of the region the gesture points at. For example, if the projection shows a dancing person, the hand and the foot can both correspond to the same lift instruction; if the region pointed at by the gesture is the foot, the foot is lifted, and if it is the hand, the hand is lifted. This example is given only to clarify that different regions can share an instruction, and should not limit the scope.
Obviously, different three-dimensional image regions of the projection can also have different control instructions.
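The idea that one instruction can act on different regions amounts to dispatching on the (region, instruction) pair. A minimal sketch, with figure and region names invented for illustration:

```python
# Hypothetical dispatch: one control instruction ("lift") applied to
# whichever three-dimensional image region the gesture points at.

def apply_instruction(figure, region, instruction):
    """Apply the instruction to the given region of the figure state."""
    if instruction == "lift":
        figure[region] = "raised"
    elif instruction == "lower":
        figure[region] = "lowered"
    return figure

dancer = {"hand": "resting", "foot": "resting"}
apply_instruction(dancer, "foot", "lift")   # gesture pointed at the foot
apply_instruction(dancer, "hand", "lift")   # same instruction, other region
print(dancer)  # {'hand': 'raised', 'foot': 'raised'}
```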
Optionally, the embodiment of the present invention can map the absolute coordinate of the user gesture to a 3D coordinate in the 3D interface in which the three-dimensional image is projected, thereby determining the three-dimensional image region corresponding to the gesture. Correspondingly, Fig. 3 shows a further flowchart of the three-dimensional image control method provided by an embodiment of the present invention. Referring to Fig. 3, the method can comprise:
Step S300: playing a three-dimensional projection image;
Step S310: acquiring a user gesture image through the first image acquisition apparatus;
Step S320: identifying a control instruction corresponding to the user gesture image;
Step S330: determining the absolute coordinate corresponding to the gesture in the user gesture image;
The absolute coordinate can be the real spatial three-dimensional coordinate of the gesture. The embodiment of the present invention can establish a three-dimensional coordinate system with a chosen reference object as the origin and determine the absolute coordinate of the gesture in that system; the reference object can be the screen of the electronic device, the first image acquisition apparatus, etc., as determined by the actual situation.
Step S340: converting the absolute coordinate into a 3D coordinate in the 3D interface in which the three-dimensional image is projected;
Optionally, the embodiment of the present invention can predetermine a correspondence between the three-dimensional coordinate system established with the reference object as the origin and the 3D interface in which the three-dimensional image is projected, so that after the absolute coordinate corresponding to the gesture is determined, the 3D coordinate in the 3D interface corresponding to that absolute coordinate is determined from the correspondence, and the three-dimensional image region is then determined from that 3D coordinate.
Step S350: determining the three-dimensional image region corresponding to the 3D coordinate;
Step S360: controlling the three-dimensional image region to perform the corresponding operation according to the control instruction.
In an optional implementation, the embodiment of the present invention can establish the three-dimensional coordinate system with the first image acquisition apparatus as the origin, so that when the gesture operation of the user is detected, the absolute coordinate of the gesture in that system is determined and mapped to a 3D coordinate in the 3D interface in which the three-dimensional image is projected, realizing the determination of the three-dimensional image region corresponding to the gesture. Correspondingly, Fig. 4 shows yet another flowchart of the three-dimensional image control method provided by an embodiment of the present invention. Referring to Fig. 4, the method can comprise:
Step S400: playing a three-dimensional projection image;
Step S410: acquiring a user gesture image through the first image acquisition apparatus;
Step S420: identifying a control instruction corresponding to the user gesture image;
Step S430: determining a first 3D coordinate corresponding to the gesture in the user gesture image, the first 3D coordinate being a coordinate in the three-dimensional coordinate system established with the first image acquisition apparatus as the origin;
For ease of understanding, Fig. 5 shows a schematic diagram of the 3D coordinate corresponding to a gesture in the three-dimensional coordinate system, which can be referred to.
Step S440: determining, according to the preset correspondence between the three-dimensional coordinate system and the 3D interface in which the three-dimensional image is projected, a second 3D coordinate corresponding to the first 3D coordinate, the second 3D coordinate being a coordinate in the 3D interface;
Step S450: determining the three-dimensional image region corresponding to the second 3D coordinate;
Step S460: controlling the three-dimensional image region to perform the corresponding operation according to the control instruction.
The three-dimensional image control method provided by the embodiments of the present invention can realize control over the three-dimensional projection image as a whole, and also control over a particular three-dimensional image region of it; it provides a mechanism for interaction between the user and the three-dimensional image, realizing control over the displayed three-dimensional image.
Optionally, in order to carry out the gesture identification of user's images of gestures accurately, when the light signal corresponding to the tripleplane's image play electronic equipment by mirror plane processes, the embodiment of the present invention, by after the first image acquisition device to user's images of gestures, can carry out correction process to gathered user's images of gestures; A kind of optional correcting mode is arrange second image collecting device symmetrical with the first image collecting device on an electronic device, thus corrected user's images of gestures by the second image collecting device.Corresponding, Fig. 6 shows a kind of method flow diagram of correcting user images of gestures, and with reference to Fig. 6, the method can comprise:
Step S500: obtaining an image acquired by a second image acquisition apparatus arranged symmetrically to the first image acquisition apparatus;
Step S510: determining, from the image acquired by the second image acquisition apparatus and the user gesture image acquired by the first image acquisition apparatus, the position of the screen of the electronic device relative to the mirror plane that processes the light signal corresponding to the three-dimensional projection image played by the electronic device;
Step S520: correcting the user gesture image acquired by the first image acquisition apparatus by means of the relative position.
By performing image correction with the second image acquisition apparatus arranged symmetrically to the first, the embodiments of the present invention obtain the position of the screen of the electronic device relative to the mirror plane; this relative position can be used to correct gesture recognition and so derive a more accurate control instruction. That is, after the corrected user gesture image is obtained, the embodiments of the present invention can recognize the user gesture corresponding to the corrected user gesture image and determine the control instruction corresponding to the recognized gesture.
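For ease of understanding, once the relative position of the screen and the mirror plane is known, a mirrored gesture coordinate can be corrected by reflecting it across the plane. The sketch below assumes the plane is represented by a point on it and a normal vector, a representation the patent does not prescribe.

```python
import numpy as np

def reflect_across_plane(p, plane_point, plane_normal):
    """Reflect point p across the mirror plane given by a point on the plane
    and a normal vector (normalized internally). This undoes the mirroring
    that the mirror plane applies to the observed gesture position."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = np.dot(np.asarray(p, dtype=float) - plane_point, n)  # signed distance to plane
    return np.asarray(p, dtype=float) - 2.0 * d * n

# A point 1 unit in front of the plane z = 0 maps to 1 unit behind it.
print(reflect_across_plane([0, 0, 1], [0, 0, 0], [0, 0, 1]))
```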
Because the original image acquired by the first image acquisition apparatus is usually distorted (as when the first image acquisition apparatus is a wide-angle camera) or of high resolution, the embodiments of the present invention may preprocess it in order to reduce the amount of computation needed to determine the three-dimensional image region or to recognize the gesture. The preprocessed result is then treated as the user gesture image acquired by the first image acquisition apparatus, so that the control instruction or three-dimensional image region corresponding to the user gesture can subsequently be determined accurately with less computation. Correspondingly, Fig. 7 shows a flow chart of a method for acquiring the user gesture image provided by an embodiment of the present invention; with reference to Fig. 7, the method may comprise:
Step S600: obtaining a first user gesture image acquired by the first image acquisition apparatus;
Step S610: reducing the first user gesture image by a predetermined ratio to form a second user gesture image;
Step S620: processing the second user gesture image with an LDC (Local Definition Complexity, i.e. local sharpness complexity) algorithm and cropping out a third user gesture image;
Step S630: determining the third user gesture image to be the final acquired user gesture image.
The final user gesture image obtained in this way can be used for gesture recognition to derive the control instruction, and also for determining the three-dimensional image region; specifically, the absolute coordinate corresponding to the gesture is determined from the final user gesture image, the 3D coordinate in the 3D interface is determined from it, and the three-dimensional image region is determined from that 3D coordinate.
For ease of understanding, Fig. 8 shows a schematic diagram of the corresponding image correction process when the first image acquisition apparatus is a wide-angle lens, which may be consulted for reference.
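For ease of understanding, steps S600-S630 can be sketched as follows. The patent names the LDC (local sharpness complexity) algorithm but gives no formula, so a local-variance crop stands in for it here as a crude sharpness proxy; the block-average downscaling is likewise only a stand-in for whatever resampling the device uses.

```python
import numpy as np

def downscale(img, ratio):
    """Step S610: shrink the first user gesture image by an integer ratio via
    block averaging, producing the second user gesture image."""
    h = img.shape[0] // ratio * ratio
    w = img.shape[1] // ratio * ratio
    return img[:h, :w].reshape(h // ratio, ratio, w // ratio, ratio).mean(axis=(1, 3))

def ldc_crop(img, size):
    """Step S620 stand-in: crop the size x size window with the highest local
    variance (a crude sharpness/complexity proxy; the real LDC algorithm is
    not specified in the patent)."""
    best, best_v = (0, 0), -1.0
    for y in range(img.shape[0] - size + 1):
        for x in range(img.shape[1] - size + 1):
            v = img[y:y + size, x:x + size].var()
            if v > best_v:
                best_v, best = v, (y, x)
    y, x = best
    return img[y:y + size, x:x + size]

first = np.zeros((8, 8))
first[4:8, 4:8] = np.arange(16).reshape(4, 4)  # synthetic "hand" detail, bottom-right
second = downscale(first, 2)   # second user gesture image (S610)
third = ldc_crop(second, 2)    # third, final user gesture image (S620-S630)
print(third.shape)
```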
The embodiments of the present invention also provide another scheme for preprocessing the original image acquired by the first image acquisition apparatus. Correspondingly, Fig. 9 shows a flow chart of another method for acquiring the user gesture image provided by an embodiment of the present invention; with reference to Fig. 9, the method may comprise:
Step S700: obtaining a first user gesture image acquired by the first image acquisition apparatus;
Step S710: determining a selected region in the first user gesture image by means of a moving window, and forming a fourth user gesture image corresponding to the selected region;
The moving window may be predefined, or may be chosen and determined by the user.
Step S720: processing the fourth user gesture image with the LDC algorithm and cropping out a fifth user gesture image;
Step S730: determining the fifth user gesture image to be the final acquired user gesture image.
As before, the final user gesture image can be used for gesture recognition to derive the control instruction, and also for determining the three-dimensional image region: the absolute coordinate corresponding to the gesture is determined from the final user gesture image, the 3D coordinate in the 3D interface is determined from it, and the three-dimensional image region is determined from that 3D coordinate.
For ease of understanding, Fig. 10 shows another schematic diagram of the image correction process, which may be consulted for reference.
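For ease of understanding, step S710's moving-window selection can be sketched as follows. The window rectangle here is a hypothetical predefined one (per the text, it could equally be chosen by the user); the resulting fourth user gesture image would then go through the same LDC cropping as in the Fig. 7 scheme.

```python
import numpy as np

def window_crop(img, window):
    """Step S710: the moving window is a rectangle (y, x, h, w); the fourth
    user gesture image is simply its contents."""
    y, x, h, w = window
    return img[y:y + h, x:x + w]

first = np.arange(64).reshape(8, 8)          # first user gesture image (synthetic)
fourth = window_crop(first, (2, 2, 4, 4))    # fourth user gesture image (S710)
print(fourth.shape)
```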
Optionally, the embodiments of the present invention may also adjust the projection mode of the three-dimensional image according to the user's position, for example rotating the three-dimensional image so that it faces the user. Correspondingly, Fig. 11 shows a flow chart of a method for adjusting the projection mode of the three-dimensional image provided by an embodiment of the present invention; with reference to Fig. 11, the method may comprise:
Step S800: acquiring a user face image by means of the first image acquisition apparatus;
Step S810: computing the position of the user's face relative to the screen of the electronic device by means of a face recognition algorithm;
Step S820: controlling the projection mode of the three-dimensional projection image by means of the position.
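For ease of understanding, steps S800-S820 can be sketched as follows. The pinhole-camera estimate of the user's horizontal angle, and the use of that angle as a yaw for the projection, are assumptions: the patent does not specify how the face position controls the projection mode.

```python
import math

def projection_yaw(face_px, screen_size, fov_deg=60.0):
    """From the detected face centre (pixels) and an assumed horizontal field
    of view, estimate the user's horizontal angle relative to the screen
    centre; yawing the projected image by this angle would make it face the
    user (steps S810-S820, pinhole approximation)."""
    w, _ = screen_size
    f = (w / 2) / math.tan(math.radians(fov_deg / 2))  # focal length in pixels
    dx = face_px[0] - w / 2                            # horizontal offset of the face
    return math.degrees(math.atan2(dx, f))

# A face at the screen centre requires no yaw; a face at the right edge
# requires a yaw of half the field of view.
print(projection_yaw((960, 540), (1920, 1080)))
```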
The three-dimensional image control method provided by the embodiments of the present invention thus provides a mechanism for interaction between the user and the three-dimensional image, achieving control of the displayed three-dimensional image.
The three-dimensional image control apparatus provided by the embodiments of the present invention is introduced below; the apparatus described below and the method described above may be referred to in correspondence with each other.
Fig. 12 is a structural block diagram of the three-dimensional image control apparatus provided by an embodiment of the present invention. The apparatus is applicable to an electronic device that plays a three-dimensional projection image and is equipped with at least one image acquisition apparatus, the installed image acquisition apparatuses including a first image acquisition apparatus. With reference to Fig. 12, the apparatus may comprise:
a playing module 100 for playing the three-dimensional projection image;
an acquisition module 200 for acquiring a user gesture image by means of the first image acquisition apparatus;
a recognition module 300 for recognizing the control instruction corresponding to the user gesture image; and
a control module 400 for controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction.
Optionally, the embodiments of the present invention can control the three-dimensional projection image as a whole, and can also control an individual three-dimensional image region within it. For controlling a three-dimensional image region, Fig. 13 shows an optional structure of the control module 400 provided by an embodiment of the present invention; with reference to Fig. 13, the control module 400 may comprise:
a region determination unit 410 for determining the three-dimensional image region corresponding to the gesture in the user gesture image; and
an operation control unit 411 for controlling the three-dimensional image region to perform a corresponding operation according to the control instruction.
Optionally, Fig. 14 shows an optional structure of the region determination unit 410 provided by an embodiment of the present invention; with reference to Fig. 14, the region determination unit 410 may comprise:
a first 3D coordinate determination subunit 4101 for determining a first 3D coordinate corresponding to the gesture in the user gesture image, the first 3D coordinate being a coordinate in a three-dimensional coordinate system established with the first image acquisition apparatus as its origin;
a second 3D coordinate determination subunit 4102 for determining, according to a preset correspondence between the three-dimensional coordinate system and a 3D interface in which the three-dimensional image is projected, a second 3D coordinate corresponding to the first 3D coordinate, the second 3D coordinate being a coordinate in the 3D interface; and
a region result determination subunit 4103 for determining the three-dimensional image region corresponding to the second 3D coordinate, i.e. the three-dimensional image region corresponding to the gesture.
Optionally, in order to recognize gestures from the user gesture image accurately, the embodiments of the present invention may correct the user gesture image after it is acquired by the first image acquisition apparatus. Correspondingly, Fig. 15 shows an optional structure of the recognition module 300 provided by an embodiment of the present invention; with reference to Fig. 15, the recognition module 300 may comprise:
an image obtaining unit 310 for obtaining an image acquired by a second image acquisition apparatus arranged symmetrically to the first image acquisition apparatus;
a relative position determination unit 311 for determining, from the image acquired by the second image acquisition apparatus and the user gesture image acquired by the first image acquisition apparatus, the position of the screen of the electronic device relative to the mirror plane that processes the light signal corresponding to the three-dimensional projection image played by the electronic device;
a correction unit 312 for correcting the user gesture image acquired by the first image acquisition apparatus by means of the relative position; and
a recognition unit 313 for recognizing the user gesture corresponding to the corrected user gesture image and determining the control instruction corresponding to the recognized gesture.
Optionally, Fig. 16 shows another optional structure of the recognition module 300 provided by an embodiment of the present invention; with reference to Fig. 16, the recognition module 300 may comprise:
a gesture recognition unit 320 for recognizing the user gesture corresponding to the user gesture image; and
an instruction determination unit 321 for determining, according to a predetermined correspondence between user gestures and control instructions, the control instruction corresponding to the recognized gesture.
Optionally, the embodiments of the present invention may preprocess the original image acquired by the first image acquisition apparatus and treat the result as the user gesture image acquired by the first image acquisition apparatus, so that the control instruction or three-dimensional image region corresponding to the user gesture can subsequently be determined accurately with less computation. Correspondingly, Fig. 17 shows an optional structure of the acquisition module 200 provided by an embodiment of the present invention; with reference to Fig. 17, the acquisition module 200 may comprise:
a first obtaining unit 110 for obtaining a first user gesture image acquired by the first image acquisition apparatus;
a reduction unit 111 for reducing the first user gesture image by a predetermined ratio to form a second user gesture image;
a first processing unit 112 for processing the second user gesture image with the LDC algorithm and cropping out a third user gesture image; and
a first final determination unit 113 for determining the third user gesture image to be the final acquired user gesture image.
Optionally, Fig. 18 shows another optional structure of the acquisition module 200 provided by an embodiment of the present invention; with reference to Fig. 18, the acquisition module 200 may comprise:
a second obtaining unit 120 for obtaining a first user gesture image acquired by the first image acquisition apparatus;
a selection unit 121 for determining a selected region in the first user gesture image by means of a moving window, and forming a fourth user gesture image corresponding to the selected region;
a second processing unit 122 for processing the fourth user gesture image with the LDC algorithm and cropping out a fifth user gesture image; and
a second final determination unit 123 for determining the fifth user gesture image to be the final acquired user gesture image.
Optionally, Fig. 19 shows another structural block diagram of the three-dimensional image control apparatus provided by an embodiment of the present invention; as shown in Fig. 12 and Fig. 19, the apparatus may further comprise:
a face image acquisition module 500 for acquiring a user face image by means of the first image acquisition apparatus;
a position computation module 600 for computing the position of the user's face relative to the screen of the electronic device by means of a face recognition algorithm; and
a projection control module 700 for controlling the projection mode of the three-dimensional projection image by means of the position.
The three-dimensional image control apparatus provided by the embodiments of the present invention provides a mechanism for interaction between the user and the three-dimensional image, achieving control of the displayed three-dimensional image.
The embodiments of the present invention also provide an electronic device comprising the three-dimensional image control apparatus described above; for the details of the apparatus, see the description of the corresponding parts above, which is not repeated here.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the identical or similar parts of the embodiments may be referred to mutually. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief, and the relevant points can be found in the description of the method.
Those skilled in the art will further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of the examples have been described above generally in terms of their functions. Whether these functions are performed in hardware or in software depends on the specific application and the design constraints of the technical solution. Skilled artisans may implement the described functions in different ways for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The above description of the disclosed embodiments enables those skilled in the art to implement or use the present invention. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Accordingly, the present invention is not to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (17)

1. A three-dimensional image control method, characterized in that it is applied to an electronic device provided with a first image acquisition apparatus, the electronic device being configured to play a three-dimensional projection image, the method comprising:
playing a three-dimensional projection image;
acquiring a user gesture image by means of the first image acquisition apparatus;
recognizing a control instruction corresponding to the user gesture image; and
controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction.
2. The three-dimensional image control method according to claim 1, characterized in that controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction comprises:
determining a three-dimensional image region corresponding to the gesture in the user gesture image; and
controlling the three-dimensional image region to perform a corresponding operation according to the control instruction.
3. The three-dimensional image control method according to claim 2, characterized in that determining the three-dimensional image region corresponding to the gesture in the user gesture image comprises:
determining a first 3D coordinate corresponding to the gesture in the user gesture image, the first 3D coordinate being a coordinate in a three-dimensional coordinate system established with the first image acquisition apparatus as its origin;
determining, according to a preset correspondence between the three-dimensional coordinate system and a 3D interface in which the three-dimensional image is projected, a second 3D coordinate corresponding to the first 3D coordinate, the second 3D coordinate being a coordinate in the 3D interface; and
determining the three-dimensional image region corresponding to the second 3D coordinate, i.e. the three-dimensional image region corresponding to the gesture.
4. The three-dimensional image control method according to claim 1, characterized in that recognizing the control instruction corresponding to the user gesture image comprises:
obtaining an image acquired by a second image acquisition apparatus arranged symmetrically to the first image acquisition apparatus;
determining, from the image acquired by the second image acquisition apparatus and the user gesture image acquired by the first image acquisition apparatus, the position of the screen of the electronic device relative to the mirror plane that processes the light signal corresponding to the three-dimensional projection image played by the electronic device;
correcting the user gesture image acquired by the first image acquisition apparatus by means of the relative position; and
recognizing the user gesture corresponding to the corrected user gesture image, and determining the control instruction corresponding to the recognized gesture.
5. The three-dimensional image control method according to claim 1, characterized in that recognizing the control instruction corresponding to the user gesture image comprises:
recognizing the user gesture corresponding to the user gesture image; and
determining, according to a predetermined correspondence between user gestures and control instructions, the control instruction corresponding to the recognized gesture.
6. The three-dimensional image control method according to any one of claims 1-5, characterized in that acquiring the user gesture image by means of the first image acquisition apparatus comprises:
obtaining a first user gesture image acquired by the first image acquisition apparatus;
reducing the first user gesture image by a predetermined ratio to form a second user gesture image;
processing the second user gesture image with a local sharpness complexity (LDC) algorithm and cropping out a third user gesture image; and
determining the third user gesture image to be the final acquired user gesture image.
7. The three-dimensional image control method according to any one of claims 1-5, characterized in that acquiring the user gesture image by means of the first image acquisition apparatus comprises:
obtaining a first user gesture image acquired by the first image acquisition apparatus;
determining a selected region in the first user gesture image by means of a moving window, and forming a fourth user gesture image corresponding to the selected region;
processing the fourth user gesture image with a local sharpness complexity (LDC) algorithm and cropping out a fifth user gesture image; and
determining the fifth user gesture image to be the final acquired user gesture image.
8. The three-dimensional image control method according to claim 1, characterized in that the method further comprises:
acquiring a user face image by means of the first image acquisition apparatus;
computing the position of the user's face relative to the screen of the electronic device by means of a face recognition algorithm; and
controlling the projection mode of the three-dimensional image by means of the position.
9. A three-dimensional image control apparatus, characterized in that it is applied to an electronic device provided with a first image acquisition apparatus, the electronic device being configured to play a three-dimensional projection image, the apparatus comprising:
a playing module for playing the three-dimensional projection image;
an acquisition module for acquiring a user gesture image by means of the first image acquisition apparatus;
a recognition module for recognizing a control instruction corresponding to the user gesture image; and
a control module for controlling the three-dimensional projection image to perform a corresponding operation according to the control instruction.
10. The three-dimensional image control apparatus according to claim 9, characterized in that the control module comprises:
a region determination unit for determining the three-dimensional image region corresponding to the gesture in the user gesture image; and
an operation control unit for controlling the three-dimensional image region to perform a corresponding operation according to the control instruction.
11. The three-dimensional image control apparatus according to claim 10, characterized in that the region determination unit comprises:
a first 3D coordinate determination subunit for determining a first 3D coordinate corresponding to the gesture in the user gesture image, the first 3D coordinate being a coordinate in a three-dimensional coordinate system established with the first image acquisition apparatus as its origin;
a second 3D coordinate determination subunit for determining, according to a preset correspondence between the three-dimensional coordinate system and a 3D interface in which the three-dimensional image is projected, a second 3D coordinate corresponding to the first 3D coordinate, the second 3D coordinate being a coordinate in the 3D interface; and
a region result determination subunit for determining the three-dimensional image region corresponding to the second 3D coordinate, i.e. the three-dimensional image region corresponding to the gesture.
12. The three-dimensional image control apparatus according to claim 9, characterized in that the recognition module comprises:
an image obtaining unit for obtaining an image acquired by a second image acquisition apparatus arranged symmetrically to the first image acquisition apparatus;
a relative position determination unit for determining, from the image acquired by the second image acquisition apparatus and the user gesture image acquired by the first image acquisition apparatus, the position of the screen of the electronic device relative to the mirror plane that processes the light signal corresponding to the three-dimensional projection image played by the electronic device;
a correction unit for correcting the user gesture image acquired by the first image acquisition apparatus by means of the relative position; and
a recognition unit for recognizing the user gesture corresponding to the corrected user gesture image and determining the control instruction corresponding to the recognized gesture.
13. The three-dimensional image control apparatus according to claim 9, characterized in that the recognition module comprises:
a gesture recognition unit for recognizing the user gesture corresponding to the user gesture image; and
an instruction determination unit for determining, according to a predetermined correspondence between user gestures and control instructions, the control instruction corresponding to the recognized gesture.
14. The three-dimensional image control apparatus according to any one of claims 9-13, characterized in that the acquisition module comprises:
a first obtaining unit for obtaining a first user gesture image acquired by the first image acquisition apparatus;
a reduction unit for reducing the first user gesture image by a predetermined ratio to form a second user gesture image;
a first processing unit for processing the second user gesture image with a local sharpness complexity (LDC) algorithm and cropping out a third user gesture image; and
a first final determination unit for determining the third user gesture image to be the final acquired user gesture image.
15. The three-dimensional image control apparatus according to any one of claims 9-13, characterized in that the acquisition module comprises:
a second obtaining unit for obtaining a first user gesture image acquired by the first image acquisition apparatus;
a selection unit for determining a selected region in the first user gesture image by means of a moving window, and forming a fourth user gesture image corresponding to the selected region;
a second processing unit for processing the fourth user gesture image with a local sharpness complexity (LDC) algorithm and cropping out a fifth user gesture image; and
a second final determination unit for determining the fifth user gesture image to be the final acquired user gesture image.
16. The three-dimensional image control apparatus according to claim 9, characterized in that the apparatus further comprises:
a face image acquisition module for acquiring a user face image by means of the first image acquisition apparatus;
a position computation module for computing the position of the user's face relative to the screen of the electronic device by means of a face recognition algorithm; and
a projection control module for controlling the projection mode of the three-dimensional projection image by means of the position.
17. An electronic device, characterized in that it comprises the three-dimensional image control apparatus according to any one of claims 9-16.
CN201410301004.3A 2014-06-27 A kind of 3-D view control method, device and electronic equipment Active CN105334951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410301004.3A CN105334951B (en) 2014-06-27 A kind of 3-D view control method, device and electronic equipment


Publications (2)

Publication Number Publication Date
CN105334951A true CN105334951A (en) 2016-02-17
CN105334951B CN105334951B (en) 2018-08-31


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108732969A (en) * 2018-05-21 2018-11-02 哈尔滨拓博科技有限公司 A kind of SCM Based automobile gesture control device and its control method
CN109857260A (en) * 2019-02-27 2019-06-07 百度在线网络技术(北京)有限公司 Control method, the device and system of three-dimensional interactive image
CN110235440A (en) * 2017-01-31 2019-09-13 株式会社木村技研 Optical projection system and projecting method
CN110989835A (en) * 2017-09-11 2020-04-10 大连海事大学 Working method of holographic projection device based on gesture recognition

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102508546A (en) * 2011-10-31 2012-06-20 冠捷显示科技(厦门)有限公司 Three-dimensional (3D) virtual projection and virtual touch user interface and achieving method
CN102736728A (en) * 2011-04-11 2012-10-17 宏碁股份有限公司 Control method and system for three-dimensional virtual object and processing device for three-dimensional virtual object
CN103176605A (en) * 2013-03-27 2013-06-26 刘仁俊 Control device of gesture recognition and control method of gesture recognition
CN103793061A (en) * 2014-03-03 2014-05-14 联想(北京)有限公司 Control method and electronic equipment


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110235440A (en) * 2017-01-31 2019-09-13 株式会社木村技研 Optical projection system and projecting method
CN110235440B (en) * 2017-01-31 2022-08-23 株式会社木村技研 Projection system and projection method
CN110989835A (en) * 2017-09-11 2020-04-10 大连海事大学 Working method of holographic projection device based on gesture recognition
CN110989835B (en) * 2017-09-11 2023-04-28 大连海事大学 Working method of holographic projection device based on gesture recognition
CN108732969A (en) * 2018-05-21 2018-11-02 哈尔滨拓博科技有限公司 A kind of SCM Based automobile gesture control device and its control method
CN108732969B (en) * 2018-05-21 2019-04-05 哈尔滨拓博科技有限公司 A kind of SCM Based automobile gesture control device and its control method
CN109857260A (en) * 2019-02-27 2019-06-07 百度在线网络技术(北京)有限公司 Control method, the device and system of three-dimensional interactive image

Similar Documents

Publication Publication Date Title
EP3864491B1 (en) Method for hmd camera calibration using synchronised image rendered on external display
CN103929603B (en) Image projecting equipment, image projection system and control method
US20170006375A1 (en) Directivity control apparatus, directivity control method, storage medium and directivity control system
JP6089722B2 (en) Image processing apparatus, image processing method, and image processing program
CN107798715B (en) Alignment adsorption method and device for three-dimensional graph, computer equipment and storage medium
US20110227827A1 (en) Interactive Display System
WO2019128109A1 (en) Face tracking based dynamic projection method, device and electronic equipment
JP6723814B2 (en) Information processing apparatus, control method thereof, program, and storage medium
US9703371B1 (en) Obtaining input from a virtual user interface
US9336602B1 (en) Estimating features of occluded objects
US20160334884A1 (en) Remote Sensitivity Adjustment in an Interactive Display System
JP6028589B2 (en) Input program, input device, and input method
WO2021035891A1 (en) Augmented reality technology-based projection method and projection device
KR101330531B1 (en) Method of virtual touch using 3D camera and apparatus thereof
JP2007207056A (en) Information input system
KR20160026482A (en) Movile device and projecing method for the same
JP2013174738A (en) Display device, method of controlling display device, control program, and computer readable recording medium recording the control program
WO2018014517A1 (en) Information processing method, device and storage medium
US11443719B2 (en) Information processing apparatus and information processing method
CN111176425A (en) Multi-screen operation method and electronic system using same
CN105334951A (en) Three-dimensional image control method and apparatus and electronic device
TWI489352B (en) Optical touch positioning method, system and optical touch positioner
JP6452658B2 (en) Information processing apparatus, control method thereof, and program
CN105204725A (en) Method and device for controlling three-dimensional image, electronic device and three-dimensional projecting device
TW202018486A (en) Operation method for multi-monitor and electronic system using the same

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant