CN105302295B - Virtual reality interactive device with 3D camera assembly - Google Patents

Virtual reality interactive device with 3D camera assembly

Info

Publication number
CN105302295B
Authority
CN
China
Prior art keywords
hand
user
sequence
feature point
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510563539.2A
Other languages
Chinese (zh)
Other versions
CN105302295A (en)
Inventor
杨晓光
李建英
朱磊
韩琦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Yishe Technology Co Ltd
Original Assignee
Harbin Yishe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Yishe Technology Co Ltd
Priority to CN201510563539.2A
Publication of CN105302295A
Application granted
Publication of CN105302295B
Status: Expired - Fee Related
Anticipated expiration

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The present invention provides a virtual reality interactive device with a 3D camera assembly. The device includes a 3D camera assembly, a helmet-type virtual reality display, a signal processing component, and a mobile device interface, the 3D camera assembly being connected to the signal processing component. The 3D camera assembly captures an image sequence to be measured of the user's hands containing depth information and sends it to the signal processing component. The signal processing component obtains the user's gesture from the image sequence and determines the corresponding operating instruction according to the gesture, so as to execute that instruction on the mobile device connected to the mobile device interface. The helmet-type virtual reality display receives the screen display signal of the mobile device through the mobile device interface and presents the mobile device's screen in a predetermined display area in a virtual reality display mode. The above technique enables human-computer interaction through gesture recognition, enriching the available input modes while keeping operation relatively simple.

Description

Virtual reality interactive device with 3D camera assembly
Technical field
The present invention relates to human-computer interaction technology, and more particularly to a virtual reality interactive device with a 3D camera assembly.
Background technology
As mobile computing devices have evolved from laptops to mobile phones and tablets, their control modes have likewise evolved from keyboard and mouse to phone keys and handwriting pads, and then to touch screens and virtual keyboards. Clearly, the control of mobile devices is evolving in a direction that is ever more intuitive and convenient and better matched to people's natural habits.
The touch-screen control mode now widely used on mobile computing devices technically consists of a transparent touch-sensitive panel laminated onto a display screen. The touch panel is essentially a positioning device: it captures touch actions on the screen and obtains their positions, and, combined with timeline information, recognizes each action as a tap, long press, slide, and so on. The position and action information is then passed to the mobile computing device as an instruction, and the device reacts accordingly. Because the touch panel and the display are superimposed, the user gets a "what you touch is what you intend" feel; compared with positioning devices such as mice and trackpads, whose input must be fed back through a cursor, screen touch control delivers a better usage experience.
Compared with keyboard plus mouse, screen touch control better matches people's intuitive reactions and is easier to learn. However, it ultimately captures only the actions of human fingers. In settings that require richer body information as input, such as motion gaming, simulation training, complex manipulation, and remote control, screen touch control shows the limitation of capturing an overly narrow slice of body information.
At present, existing virtual reality interaction techniques typically interact with the device through conventional input modes such as mouse and buttons. The input modes are thus overly limited, which makes operation cumbersome when the user selects or executes functions, and the user experience is poor.
Summary of the invention
A brief overview of the present invention is given below in order to provide a basic understanding of certain aspects of the invention. It should be appreciated that this overview is not an exhaustive overview of the invention. It is not intended to identify key or critical parts of the invention, nor to limit its scope. Its sole purpose is to present certain concepts in simplified form as a prelude to the more detailed description discussed later.
In view of this, the present invention provides a virtual reality interactive device with a 3D camera assembly, so as to at least solve the problems that the input modes of existing virtual reality interaction techniques are limited and that operation is relatively cumbersome when the user selects or executes functions.
According to an aspect of the invention, there is provided a virtual reality interactive device with a 3D camera assembly. The virtual reality interactive device includes a 3D camera assembly, a helmet-type virtual reality display, a signal processing component, and a mobile device interface. The 3D camera assembly is connected to the signal processing component; the signal processing component is connected to the mobile device interface; and the mobile device interface is connected to the helmet-type virtual reality display. The 3D camera assembly captures the image sequence to be measured of the user's hands containing depth information and sends it to the signal processing component. The signal processing component obtains the user's gesture based on the image sequence to be measured and determines the corresponding operating instruction according to the gesture, so as to execute that instruction on the mobile device connected to the mobile device interface. The helmet-type virtual reality display receives the screen display signal of the mobile device through the mobile device interface and presents the mobile device's screen in a predetermined display area in a virtual reality display mode.
Further, the helmet-type virtual reality display includes: a wearing portion, wearable on the user's head; and an acquisition imaging portion, which is provided on the wearing portion and connected to the mobile device interface so as to acquire the screen display signal of the mobile device and present the screen in the predetermined display area in a virtual reality display mode.
Further, the acquisition imaging portion includes a display screen and two lens groups. The display screen is of transparent material, and the two lens groups are configured such that, when the user wears the virtual reality interactive device on the head, the two lens groups lie directly in front of the user's corresponding lines of sight.
Further, the signal processing component includes: a contour detection unit for detecting the user's hand contour in every frame of the image sequence to be measured according to image depth information and image color information; a feature point sequence determination unit for determining, for each of the user's hands, the feature point sequence to be measured of that hand in every frame of the image sequence to be measured, using a preset hand structure template; an action recognition unit for determining, for each of the user's hands, the matching sequence of that hand's feature point sequence to be measured among multiple preset feature point sequences, so as to determine the action name and position of that hand from the matching sequence; a gesture recognition unit for selecting, in a preset gesture table, the gesture matching the action names and positions of the user's two hands, as the recognized gesture; an instruction determination unit for determining, according to a preset operation instruction table, the operating instruction corresponding to the recognized gesture; and an execution unit for performing, on the device related to the determined operating instruction, the operation corresponding to that instruction.
Further, the feature point sequence determination unit includes: a template storage subunit for storing the preset hand structure template; a template matching subunit for determining, for each of the user's hands, the predetermined number of feature points of that hand in the hand contour of every frame of the image sequence to be measured, using the preset hand structure template; and a sequence generation subunit for obtaining, for each of the user's hands, the feature point sequence to be measured of that hand from the predetermined number of feature points corresponding to that hand in each frame of the image sequence to be measured.
Further, the template matching subunit includes: a positioning base determination module for finding, for every frame of the image sequence to be measured, the fingertip points and finger-root joint points on the contour line according to the curvature of the contour in the image, taking the fingertip points as positioning bases; a scaling reference determination module for matching, for every frame processed by the positioning base determination module, the finger-root joint points of each single finger based on the positioning bases found in that frame, obtaining the length of each single finger as the scaling reference; and a scaling and deformation module for scaling and deforming, for every frame processed by the scaling reference determination module, the corresponding hand structure template based on the positions of the found fingertip points and finger-root joint points and on the length of each single finger, obtaining the knuckle feature points and the wrist midpoint feature point of each hand by matching. The hand structure template stored by the template storage subunit includes a left-hand structure template and a right-hand structure template, each of which includes the fingertip feature points of the fingers, the knuckle feature points, the finger-root joint feature points, the wrist midpoint feature point, and the topological relations among the feature points.
Further, the action recognition unit includes: a segmentation subunit for dividing, for the feature point sequence to be measured of each hand, that sequence into multiple subsequences according to a predetermined time window, and obtaining the mean position corresponding to each subsequence; a matching sequence determination subunit for matching, for each subsequence corresponding to each hand, that subsequence against each of the multiple preset feature point sequences, and selecting, among the multiple preset feature point sequences, the one whose matching degree with the subsequence is above a preset matching threshold and is the largest, as the matching sequence of that subsequence; an association subunit for associating the mean position corresponding to each subsequence with the action name corresponding to that subsequence's matching sequence; and an action name determination subunit for taking, for each hand, the matching sequences of that hand's subsequences as that hand's matching sequences, and the action names corresponding to those matching sequences as that hand's action names.
Further, the gesture recognition unit includes: a gesture table storage subunit for storing the following mapping list as the preset gesture table, where the left end of each mapping in the list is an action name pair set together with the position of each action name pair, and the right end of each mapping is a gesture; and a gesture table matching subunit for matching the left end of each mapping in the preset gesture table against the action names and positions of the user's two hands, where action names are matched strictly, and positions are matched by computing relative position information from the mean positions of the user's two hands and then computing the similarity between that relative position information and the positions at the mapping's left end.
Further, the signal processing component is also operable to obtain a simulated figure of the user's hands based on the position of each of the user's hands, and to display the simulated figure on the screen of the mobile device through the mobile device interface.
Further, the signal processing component is operable to: obtain, for each of the user's hands, the outer contour figure of that hand as its simulated figure by connecting the bones and extending outward according to the feature point sequence to be measured corresponding to that hand; determine the display position of each of the user's hands on the screen by applying translation calibration and proportional scaling to the relative positions of the user's two hands; and display the simulated figure of the user's hands on the screen of the mobile device based on the simulated figure and display position of each hand.
The above virtual reality interactive device with a 3D camera assembly according to the embodiments of the present invention uses the 3D camera assembly to capture the image sequence to be measured of the user's hands, recognizes the user's gesture from it, and then manipulates the mobile device according to the recognized gesture. The virtual reality interactive device acquires the screen display signal of the mobile device through the mobile device interface, and thus presents the device's screen in the predetermined display area in a virtual reality display mode. When the user wears the virtual reality interactive device, the virtual image of the mobile device's screen can be seen in the predetermined display area within the field of view, and the user interacts with and controls the mobile device by way of gesture recognition. Unlike the prior art, the virtual reality interactive device of the present invention can perform human-computer interaction not only through traditional input modes such as mouse and buttons, but also through the above gesture recognition technique, which enriches the available input modes and keeps operation relatively simple.
In addition, during gesture recognition the virtual reality interactive device of the present invention uses action template matching and action pair-to-gesture matching, so the recognition processing is both precise and fast.
The above virtual reality interactive device of the present invention uses a hierarchically designed algorithm of low complexity that is easy to implement.
In addition, when the definitions of actions and/or gestures need to be changed (for example modified, added, or removed), applying the above virtual reality interactive device of the present invention only requires adjusting the templates (that is, modifying the action name corresponding to a preset feature point sequence to change an action's definition, or adding or removing preset feature point sequences and their action names to add or remove actions) and the preset gesture table (that is, modifying the actions corresponding to a gesture in the preset gesture table to change that gesture's definition, or adding or removing gestures and their corresponding actions in the preset gesture table), without changing the algorithm or retraining a classifier, which greatly improves the adaptability of the algorithm.
In addition, the above virtual reality interactive device of the present invention operates in real time and is suitable for applications with real-time interaction requirements.
These and other advantages of the present invention will become more apparent from the following detailed description of the preferred embodiments of the present invention in conjunction with the accompanying drawings.
Description of the drawings
The present invention may be better understood by reference to the description given below in conjunction with the accompanying drawings, in which the same or similar reference numerals are used throughout the figures to denote the same or similar components. The drawings, together with the following detailed description, are included in and form part of this specification, and serve to further illustrate the preferred embodiments of the present invention and to explain its principles and advantages. In the drawings:
Fig. 1A is a three-dimensional structural diagram of an example of the virtual reality interactive device with a 3D camera assembly of the present invention, and Figs. 1B-1F are respectively the front view, top view, bottom view, left view, and right view of the virtual reality interactive device shown in Fig. 1A;
Figs. 2A and 2B are schematic diagrams showing the virtual reality interactive device of Fig. 1A worn on the user's head;
Fig. 3 is a structural diagram of an example of the signal processing component 130;
Fig. 4 is a structural diagram of an example of the feature point sequence determination unit 320 in Fig. 3;
Fig. 5 is a structural diagram of an example of the template matching subunit 420 in Fig. 4;
Fig. 6 is a structural diagram of an example of the action recognition unit 330 in Fig. 3;
Fig. 7 is a structural diagram of an example of the gesture recognition unit 340 in Fig. 3.
Those skilled in the art will appreciate that the elements in the drawings are shown merely for simplicity and clarity and are not necessarily drawn to scale. For example, the sizes of certain elements may be exaggerated relative to other elements in the drawings to help improve understanding of the embodiments of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present invention are described below in conjunction with the accompanying drawings. For clarity and conciseness, not all features of an actual implementation are described in this specification. It should be understood, however, that in developing any such actual embodiment, many implementation-specific decisions must be made in order to achieve the developer's specific goals, such as compliance with system-related and business-related constraints, and these constraints may vary from one implementation to another. Moreover, it should be appreciated that, although such development work may be complex and time-consuming, it is merely a routine task for those skilled in the art having the benefit of this disclosure.
It should also be noted that, in order to avoid obscuring the present invention with unnecessary detail, the drawings show only the device structures and/or processing steps closely related to the solution according to the present invention, while other details of little relevance to the present invention are omitted.
An embodiment of the present invention provides a virtual reality interactive device with a 3D camera assembly. The virtual reality interactive device includes a 3D camera assembly, a helmet-type virtual reality display, a signal processing component, and a mobile device interface. The 3D camera assembly is connected to the signal processing component; the signal processing component is connected to the mobile device interface; and the mobile device interface is connected to the helmet-type virtual reality display. The 3D camera assembly captures the image sequence to be measured of the user's hands containing depth information and sends it to the signal processing component. The signal processing component obtains the user's gesture based on the image sequence to be measured and determines the corresponding operating instruction according to the gesture, so as to execute that instruction on the mobile device connected to the mobile device interface. The helmet-type virtual reality display captures the screen of the mobile device and presents the virtual image of the screen in a predetermined imaging region.
Figs. 1A-1F show an example structure of the virtual reality interactive device with a 3D camera assembly of the present invention. As shown in Figs. 1A-1F, the virtual reality interactive device 100 with a 3D camera assembly includes a 3D camera assembly 110, a helmet-type virtual reality display 120 (including, for example, the wearing portion 210 and acquisition imaging portion 220 described below), a signal processing component 130, and a mobile device interface 140. The 3D camera assembly 110 is connected (here, electrically) to the signal processing component 130, the signal processing component 130 is connected (here, electrically) to the mobile device interface 140, and the mobile device interface 140 is connected (here, electrically) to the helmet-type virtual reality display 120. It should be noted that, in this example, the signal processing component 130 is provided inside the helmet-type virtual reality display 120. In addition, Figs. 2A and 2B show schematic diagrams of the virtual reality interactive device of Fig. 1A worn on the user's head.
The 3D camera assembly 110 captures the image sequence to be measured of the user's hands containing depth information, and sends this sequence to the signal processing component 130. The 3D camera assembly 110 may, for example, include two 3D cameras. A 3D camera is a depth camera containing a visible-light image sensor and an infrared image sensor: the visible-light image sensor obtains the visible-light image sequence {I_i^C(x, y)}, and the infrared image sensor obtains the infrared image sequence {I_i^I(x, y)}.
According to one implementation, the signal processing component 130 is arranged inside the helmet-type virtual reality display 120, and the 3D camera assembly 110 may be arranged on a connector that is connected to the helmet-type virtual reality display 120 and can rotate around it (see Figs. 2A and 2B). By rotating this connector, the user can turn the direction the 3D camera assembly 110 faces (that is, the direction of the optical axes of the 3D cameras) toward the user's gesture. After adjusting the connector's direction, the user only needs to make gestures in a comfortable position, and the connector's direction can be re-adjusted for the comfortable position suited to each occasion.
According to one implementation, the 3D camera assembly 110 may be used to capture images of the user's hands in the predetermined imaging region, obtaining (for example by means of the visible-light image sensor and infrared image sensor in the depth cameras) the visible-light image sequence {I_i^C(x, y)} and the infrared image sequence {I_i^I(x, y)}, where I_i^C(x, y) is the pixel value at coordinate (x, y) of the i-th frame of the visible-light image sequence, and I_i^I(x, y) is the pixel value at coordinate (x, y) of the i-th frame of the infrared image sequence. From these two sequences, an image sequence in which the information of the user's two hands has been extracted can be obtained pixel by pixel.
Here α, β, and λ are preset parameter thresholds used in the extraction. They can be set from empirical values, or determined experimentally (for example by training on sample images collected with depth cameras of the specific model actually used); the details are not repeated here. The resulting image sequence of the user's two hands containing depth information serves as the above-mentioned image sequence to be measured. In addition, i = 1, 2, ..., M, where M is the number of image frames included in the image sequence to be measured.
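Since the extraction formula itself is not reproduced in this text, the following Python sketch only illustrates the shape such a pixel-wise rule could take: it assumes a pixel is kept when its infrared response lies within [α, β] and its visible-light intensity exceeds λ. Both the gating rule and the sample threshold values are assumptions made for illustration, not the patent's formula.

```python
import numpy as np

def extract_hand_region(visible: np.ndarray, infrared: np.ndarray,
                        alpha: float, beta: float, lam: float) -> np.ndarray:
    # Hypothetical rule: keep a pixel when its infrared response lies in
    # [alpha, beta] and its visible-light intensity exceeds lam; all other
    # pixels are zeroed out as background.
    mask = (infrared >= alpha) & (infrared <= beta) & (visible > lam)
    return np.where(mask, visible, 0)

# Demo on synthetic frames standing in for the captured sequences.
rng = np.random.default_rng(0)
visible_seq = [rng.integers(0, 256, (240, 320)) for _ in range(3)]
infrared_seq = [rng.integers(0, 256, (240, 320)) for _ in range(3)]
hand_seq = [extract_hand_region(v, ir, alpha=40, beta=200, lam=30)
            for v, ir in zip(visible_seq, infrared_seq)]
```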
It should be noted that, depending on the number of hands involved in the user's gesture (one hand or two), the image captured in the predetermined imaging region may include both of the user's hands, or only a single hand. In addition, the image sequence to be measured may be acquired over a period of time; this period can be set in advance from empirical values and may be, for example, 10 seconds.
The signal processing component 130 obtains the user's gesture based on the above image sequence to be measured, and determines the corresponding operating instruction according to that gesture, so as to execute the instruction on the mobile device connected to the mobile device interface 140. The mobile device connected to the mobile device interface 140 is, for example, a mobile phone, and the mobile device interface 140 may connect to the mobile device by wire (for example USB or another interface) or wirelessly (for example Bluetooth or WiFi).
The helmet-type virtual reality display 120 receives, through the mobile device interface 140, the screen display signal of the mobile device connected to that interface, and presents the mobile device's screen in the predetermined display area in a virtual reality display mode.
In this way, with the 3D camera assembly mounted on the helmet-type virtual reality display, device operation and scene operation based on two-hand gestures can be achieved without any handheld device.
The above virtual reality interactive device with a 3D camera assembly according to the embodiments of the present invention uses the 3D camera assembly to capture the image sequence to be measured of the user's hands, recognizes the user's gesture from it, and then manipulates the mobile device according to the recognized gesture. The virtual reality interactive device acquires the screen display signal of the mobile device through the mobile device interface, and thus presents the device's screen in the predetermined display area in a virtual reality display mode. When the user wears the virtual reality interactive device, the virtual image of the mobile device's screen can be seen in the predetermined display area within the field of view, and the user interacts with and controls the mobile device by way of gesture recognition. Unlike the prior art, the virtual reality interactive device of the present invention can perform human-computer interaction not only through traditional input modes such as mouse and buttons, but also through the above gesture recognition technique, which enriches the available input modes and keeps operation relatively simple.
According to one implementation, the helmet-type virtual reality display 120 may include a wearing portion 210 and an acquisition imaging portion 220 (as shown in Fig. 1C).
The wearing portion 210 is wearable on the user's head and carries the acquisition imaging portion 220. The acquisition imaging portion 220 is connected (here, electrically) to the mobile device interface 140 so as to acquire the screen display signal of the mobile device connected to that interface, and presents the mobile device's screen in the predetermined imaging region in a virtual reality display mode.
The acquisition imaging portion 220 includes a display screen and two lens groups. The two lens groups are configured such that, when the user wears the virtual reality interactive device 100 on the head, the two lens groups lie directly in front of the user's corresponding lines of sight: the left lens group directly in front of the user's left-eye line of sight, and the right lens group directly in front of the right-eye line of sight. In this case, the predetermined display area is, for example, the virtual-image forming region of the two lens groups.
The acquisition imaging portion 220 is connected to the external mobile device through the mobile device interface 140 and acquires the device's screen display signal, that is, the signal of the content displayed on the device's screen, similar to the display signal received by a desktop computer's monitor. After receiving the screen display signal, the acquisition imaging portion 220 shows the mobile device's screen content on its internal display screen according to that signal, and forms a virtual image of that picture through the two lens groups. When the user wears the above virtual reality interactive device, what is seen through the two lens groups is this virtual image. It should be noted that those skilled in the art can determine the number and parameters of the lenses in each lens group from common knowledge in the art and publicly available information; this is not repeated here.
According to one implementation, the display screen inside the acquisition imaging portion 220 may, for example, be a display screen of transparent material. After wearing the virtual reality interactive device, the user can then see his or her own gestures through the display screen, so as to accurately control the gestures being made and the hand positions.
According to other implementations, the helmet-type virtual reality display 120 may optionally include a fixing bracket. The fixing bracket is fixedly or movably connected to the wearing portion 210 and fixedly holds the mobile device connected to the mobile device interface 140. For example, a card slot can be provided on the fixing bracket for fixing a mobile device such as a mobile phone; the size of the slot can be preset according to the size of the mobile device, or the slot can be made adjustable (for example by providing elastic parts on both sides of the slot).
Fig. 3 schematically shows an example structure of the signal processing component 130. As shown in Fig. 3, the signal processing component 130 may include a contour detection unit 310, a feature point sequence determination unit 320, an action recognition unit 330, a gesture recognition unit 340, an instruction determination unit 350, and an execution unit 360.
The contour detection unit 310 detects the user's hand contour in every frame of the image sequence to be measured according to image depth information and image color information. The detected hand contour may be a two-hand contour or a single-hand contour.
The feature point sequence determination unit 320 determines, for each of the user's hands, the feature point sequence to be measured of that hand in every frame of the image sequence to be measured, using the preset hand structure template.
The action recognition unit 330 determines, for each of the user's hands, the matching sequence of that hand's feature point sequence to be measured among multiple preset feature point sequences, so as to determine the action name and position of that hand from the matching sequence.
The gesture recognition unit 340 selects, in the preset gesture table, the gesture matching the action names and positions of the user's two hands, as the recognized gesture.
The instruction determination unit 350 determines, according to the preset operation instruction table, the operating instruction corresponding to the recognized gesture.
The execution unit 360 performs, on the device related to the determined operating instruction, the operation corresponding to that instruction. The determined operating instruction is thereby sent to the relevant device, enabling personified, naturalized, contactless operation and control of a device such as a mobile computing device.
As can be seen from the above description, the virtual reality interactive device of the present invention uses action template matching and action pair-to-gesture matching during gesture recognition, so the recognition processing is both precise and fast.
According to one implementation, the contour detection unit 310 may operate as follows: for every frame I_i(x, y) of the image sequence to be measured, it deletes the noise points and non-skin-color regions in the frame using color information, and applies an edge detection operator E(·) to the image obtained after that deletion, giving the edge image E_i(x, y).
The edge image E_i(x, y) is then an image containing only the contour of the user's hands.
In the processing step of "deleting the noise points and non-skin-color regions in the frame using color information", the noise points in the image can be deleted with existing denoising methods, and the skin-color region can be obtained by computing the mean of the image I_i(x, y); the region outside the skin-color region is the non-skin-color region, which can thus be deleted. For example, after the mean of the image is obtained, a range is floated above and below the mean, giving a color range containing the mean; if the color value of some point in the image falls within this range, the point is determined to be a skin-color point, otherwise it is considered not to be one. All the skin-color points form the skin-color region, and the rest is the non-skin-color region.
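A compact sketch of this flow (mean-based skin mask, denoising, then edge extraction) is given below; the band width, the median filter chosen as the denoiser, and OpenCV's Canny standing in for the generic edge operator E(·) are all assumptions for illustration.

```python
import numpy as np
import cv2  # OpenCV; cv2.Canny stands in for the generic edge operator E(.)

def detect_hand_contour(frame_bgr: np.ndarray, band: float = 35.0) -> np.ndarray:
    # Skin-color region: points whose color lies within a band floated
    # around the image mean, as described above; the band width is assumed.
    mean = frame_bgr.reshape(-1, 3).mean(axis=0)
    dist = np.abs(frame_bgr.astype(np.float32) - mean).max(axis=2)
    skin = np.where(dist <= band, 255, 0).astype(np.uint8)
    skin = cv2.medianBlur(skin, 5)        # delete isolated noise points
    return cv2.Canny(skin, 50, 150)       # edge image containing the hand contour
```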
Through the processing of the contour detection unit 310, the user's hand contour can thus be detected quickly, which improves the speed and efficiency of the entire processing.
According to one implementation, the feature point sequence determination unit 320 may include a template storage subunit 410, a template matching subunit 420, and a sequence generation subunit 430, as shown in Fig. 4.
The template storage subunit 410 may be used to store the preset hand structure template.
According to one implementation, the hand structure template may include a left-hand structure template and a right-hand structure template, each of which includes a predetermined number of feature points and the topological relations among the feature points.
In one example, the left-hand structure template and the right-hand structure template may each include the following 20 feature points (20 being an example of the predetermined number, which is not limited to this value and may also be, for example, 19 or 21): the fingertip feature points of the fingers (5), the knuckle feature points (9), the finger-root joint feature points (5), and the wrist midpoint feature point (1).
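For illustration, the stored template's point inventory and topology could be encoded as follows; the point names and the wrist-to-root-to-knuckle-to-tip chain are assumptions, the patent fixing only the counts (5 fingertips, 9 knuckles, 5 finger roots, 1 wrist midpoint).

```python
FINGERS = ["thumb", "index", "middle", "ring", "little"]

def make_hand_template() -> tuple[list[str], dict[str, str]]:
    points, parent = ["wrist"], {}                    # 1 wrist midpoint
    for f in FINGERS:
        root = f + "_root"
        points.append(root); parent[root] = "wrist"   # 5 finger-root joints
        knuckles = [f + "_k1"] if f == "thumb" else [f + "_k1", f + "_k2"]
        prev = root
        for k in knuckles:                            # 9 knuckles in total
            points.append(k); parent[k] = prev; prev = k
        tip = f + "_tip"
        points.append(tip); parent[tip] = prev        # 5 fingertips
    assert len(points) == 20
    return points, parent

# The left- and right-hand templates share this topology and differ only in
# the mirrored geometry attached to each point.
left_points, left_topology = make_hand_template()
```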
As shown in Fig. 4, the template matching subunit 420 may, for each of the user's hands, match and align the hand contour in every frame of the image sequence to be measured with the above preset hand structure templates (the left-hand structure template and the right-hand structure template), obtaining the predetermined number (for example 20) of feature points in the hand contour of that frame.
Then, for each of the user's hands, the sequence generation subunit 430 can use the predetermined number of feature points (that is, the feature point set) corresponding to that hand in each frame of the image sequence to be measured to obtain the feature point sequence to be measured of that hand.
In this way, by matching the hand structure template against each previously obtained hand contour (that is, the hand contour in every frame of the image sequence to be measured), the predetermined number of feature points in each hand contour can be obtained quickly and accurately. Subsequent processing can then use the predetermined number of feature points in these contours to further realize gesture recognition, which improves the speed and accuracy of the entire human-computer interaction process compared with the prior art.
In the prior art, when the definition of an action needs to be changed (for example modified, added, or removed) for a different application scenario, the algorithm must be modified and the classifier retrained; in the present invention, the change to the action definitions can be realized merely by adjusting the action templates (that is, the preset feature point sequences), which greatly improves the adaptability of the gesture recognition technique.
In one example, the template matching subunit 420 may include a positioning base determination module 510, a scaling reference determination module 520, and a scaling and deformation module 530, as shown in Fig. 5.
According to the physiological structure of the human hands, the positioning base determination module 510, the scaling reference determination module 520, and the scaling and deformation module 530 can determine the 20 (an example of the predetermined number) feature points of each hand.
For every frame of the image sequence to be measured, the following processing is performed. First, the positioning base determination module 510 finds the fingertip points and finger-root joint points on the contour line according to the curvature of the contour in the image. Next, the scaling reference determination module 520, based on the positioning bases already found by the positioning base determination module 510 on the contour line of the frame, matches the finger-root joint points of each single finger and obtains the length of each single finger as the scaling reference. Finally, the scaling and deformation module 530 scales and deforms the corresponding hand structure template based on both the positions of the found fingertip points and finger-root joint points and the obtained length of each single finger, obtaining by matching the remaining 10 feature points of each hand, namely the knuckle feature points and the wrist midpoint feature point of each hand.
For example, in finding the fingertip points and finger-root joint points on the contour line, the convex point of maximum curvature can be taken as a fingertip point and the concave point of maximum curvature as a finger-web minimum point, and the distance between each fingertip point and its adjacent finger-web minimum point can be defined as the unit length corresponding to that fingertip point. For each pair of adjacent finger-web minimum points, the point obtained by extending the midpoint of the two points toward the palm by one third of a unit length (here the unit length corresponding to the fingertip point between the two points) is defined as the finger-root joint point corresponding to that fingertip point; the three middle finger-root joint points of each hand can be obtained in this way. Beyond these, the first and last finger-root joint points of each hand can be obtained during the subsequent scaling and deformation; alternatively, the distance between two adjacent finger-web minimum points of the hand (for example any two) can be taken as the finger reference width, and the first and last finger-web minimum points of the hand can each be extended outward along the tangent direction by half a finger reference width, the resulting points serving as the first and last finger-root joint points of the hand.
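The curvature search and the one-third extension can be sketched as follows. This is a rough illustration under stated assumptions: the discrete curvature proxy, the contour orientation convention, and the palm-centre direction used for the extension are filled in here and are not fixed by the patent.

```python
import numpy as np

def signed_curvature(contour: np.ndarray, k: int = 5) -> np.ndarray:
    # Discrete curvature proxy at each contour point: cross product of the
    # vectors toward the k-th previous and k-th next points. With a counter-
    # clockwise contour, large positive values are sharp convex points
    # (fingertip candidates) and large negative values are concave points
    # (finger-web minimum candidates).
    prev = np.roll(contour, k, axis=0) - contour
    nxt = np.roll(contour, -k, axis=0) - contour
    return prev[:, 0] * nxt[:, 1] - prev[:, 1] * nxt[:, 0]

def middle_finger_roots(tips: np.ndarray, webs: np.ndarray,
                        palm: np.ndarray) -> np.ndarray:
    # For each pair of adjacent web minima, push their midpoint one third of
    # a unit length toward the palm centre; the unit length is the distance
    # from the enclosed fingertip to its adjacent web minimum.
    roots = []
    for tip, a, b in zip(tips[1:-1], webs[:-1], webs[1:]):
        mid = (a + b) / 2.0
        unit = np.linalg.norm(tip - a)
        d = palm - mid
        roots.append(mid + d / (np.linalg.norm(d) + 1e-9) * unit / 3.0)
    return np.array(roots)

# Tiny demo on a synthetic closed contour, just to exercise the functions.
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
curv = signed_curvature(np.stack([np.cos(t), np.sin(t)], axis=1))
```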
It should be noted that if more than five convex points are found for a single hand, the extra convex points can be removed in the process of matching and aligning with the hand structure template.
Through the positioning base determination module 510, the scaling reference determination module 520, and the scaling and deformation module 530, the 20 feature points of the left hand, Pl = {pl_1, pl_2, ..., pl_20}, and the 20 feature points of the right hand, Pr = {pr_1, pr_2, ..., pr_20}, corresponding to each frame can thus be obtained by matching. It should be noted that if the user's gesture involves only a single hand, the above matching yields the 20 feature points (called the feature point set) of that single hand in every frame, that is, Pl = {pl_1, pl_2, ..., pl_20} or Pr = {pr_1, pr_2, ..., pr_20}. Here pl_1, pl_2, ..., pl_20 are the positions of the 20 feature points of the left hand, and pr_1, pr_2, ..., pr_20 are the positions of the 20 feature points of the right hand.
If the user's gesture involves both hands, the above processing yields the feature point sequence to be measured of the left hand, {Pl_i, i = 1, 2, ..., M}, and that of the right hand, {Pr_i, i = 1, 2, ..., M}, where Pl_i is the set of 20 (an example of the predetermined number) feature points of the user's left hand in the i-th frame of the image sequence to be measured, and Pr_i is the corresponding set of 20 feature points of the user's right hand in the i-th frame.
If the user's gesture involves only a single hand, every frame of the captured image sequence to be measured is an image containing only that hand, and the above processing yields the feature point sequence to be measured of that single hand, that is, {Pl_i, i = 1, 2, ..., M} or {Pr_i, i = 1, 2, ..., M}.
According to one implementation, the action recognition unit 330 may include a segmentation subunit 610, a matching sequence determination subunit 620, an association subunit 630, and an action name determination subunit 640, as shown in Fig. 6.
As shown in Fig. 6, the segmentation subunit 610 can, for the feature point sequence to be measured of each hand, divide that sequence into multiple subsequences according to a predetermined time window, and obtain the mean position corresponding to each subsequence. The mean position corresponding to each subsequence can be taken as the mean position, within that subsequence, of a specific feature point (for example the wrist midpoint, or some other feature point). The predetermined time window is roughly the time taken by a single-hand elementary action (for example a single-hand grip or grab) from start to end; it can be set from empirical values or determined experimentally, and may be, for example, 2.5 seconds.
In one example, suppose the feature point sequence to be measured was acquired over 10 seconds and the segmentation subunit 610 uses a 2.5-second time window; the feature point sequences to be measured of the left hand and of the right hand can then each be divided into 4 subsequences. Take the left hand's sequence {Pl_i, i = 1, 2, ..., M} as an example (the right hand's sequence {Pr_i, i = 1, 2, ..., M} is treated similarly and is not detailed here), and assume 10 frames are acquired per second, so the sequence corresponds to 100 frames, that is, M = 100 and {Pl_i, i = 1, 2, ..., M} includes the 100 feature point sets Pl_1, Pl_2, ..., Pl_100. With the 2.5-second window, {Pl_i, i = 1, 2, ..., M} can be divided into the 4 subsequences {Pl_i, i = 1, 2, ..., 25}, {Pl_i, i = 26, 27, ..., 50}, {Pl_i, i = 51, 52, ..., 75}, and {Pl_i, i = 76, 77, ..., 100}, each corresponding to 25 frames, that is, each containing 25 feature point sets. Suppose the specific feature point is the wrist midpoint, and take the subsequence {Pl_i, i = 1, 2, ..., 25} as an example (the other three subsequences are treated similarly and are not detailed here): if the positions of the wrist midpoint in the 25 feature point sets of this subsequence are p_1, p_2, ..., p_25, then the mean position of the wrist midpoint in this subsequence is (p_1 + p_2 + ... + p_25)/25, which serves as the mean position corresponding to the subsequence.
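As a concrete illustration of this segmentation, the sketch below splits a feature point sequence into fixed time windows and reports each window's mean wrist position; the 10 fps frame rate and the wrist midpoint being stored at index 0 are assumptions made for the demo.

```python
import numpy as np

def split_by_window(seq: np.ndarray, fps: int = 10,
                    window_s: float = 2.5, wrist: int = 0):
    # seq has shape (frames, 20 points, 2 coordinates).
    step = int(fps * window_s)                       # 25 frames per window
    subs = [seq[i:i + step] for i in range(0, len(seq), step)]
    means = [s[:, wrist, :].mean(axis=0) for s in subs]
    return subs, means

left_seq = np.random.rand(100, 20, 2)                # 10 s at 10 fps
subs, mean_positions = split_by_window(left_seq)     # 4 windows of 25 frames each
```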
Then, for each subsequence corresponding to each hand, the matching sequence determination subunit 620 can match that subsequence against each of the multiple preset feature point sequences, and select, among the multiple preset feature point sequences, the one whose matching degree with the subsequence is above a preset matching threshold (which can be set from empirical values or determined experimentally) and is the largest, as the matching sequence of that subsequence. The matching sequence determination subunit 620 can compute the similarity between the subsequence and a preset feature point sequence as the matching degree between the two.
The multiple preset feature point sequences can be set in advance in a hand action name list, which covers elementary hand actions such as wave, push, pull, open, close, and turn, each action having a unique name identifier and a template represented by a normalized hand feature point sequence (that is, a preset feature point sequence). It should be noted that each of the user's two hands has such a hand action name list. That is, for the left hand, each action included in the left hand's action name list (the left-hand action name list for short) has, in addition to its own name, a left-hand template (a preset feature point sequence for the left hand); for the right hand, each action included in the right hand's action name list (the right-hand action name list) has, in addition to its own name, a right-hand template (a preset feature point sequence for the right hand).
For example, denote the multiple preset feature point sequences of a single hand as sequence A_1, sequence A_2, ..., sequence A_H, where H is the number of preset feature point sequences included for that single hand. Then, in the hand action name list of that single hand: the name identifier of action 1 is "wave" and its corresponding template (preset feature point sequence) is sequence A_1; the name identifier of action 2 is "push" and its corresponding template is sequence A_2; ...; the name identifier of action H is "turn" and its corresponding template is sequence A_H.
It should be noted that a matching sequence will not necessarily be found among the multiple preset feature point sequences for every subsequence. When no matching sequence is found for some subsequence of a single hand, the matching sequence of that subsequence is recorded as "empty", although the mean position of the subsequence need not be "empty". According to one implementation, if the matching sequence of a subsequence is "empty", the mean position of the subsequence is set to "empty"; according to another implementation, if the matching sequence is "empty", the mean position of the subsequence is the actual mean position of the specified feature point in the subsequence; according to yet another implementation, if the matching sequence is "empty", the mean position of the subsequence is set to "+∞".
In addition, according to one implementation, if the specific feature point does not exist in a subsequence (that is, there is no actual mean position of that feature point), the mean position of the subsequence can be set to "+∞".
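A minimal sketch of this per-subsequence template matching, including the "empty" case, might look as follows; the similarity measure (an inverse mean point distance) and the threshold value are assumptions, since the patent fixes only that the best match above a preset threshold is kept.

```python
import numpy as np

def match_action(sub: np.ndarray, templates: dict, thresh: float = 0.8):
    # Compare the subsequence with every preset feature point sequence and
    # keep the best match above the threshold; None plays the role of the
    # "empty" matching sequence described above.
    best_name, best_sim = None, thresh
    for name, tpl in templates.items():
        n = min(len(sub), len(tpl))
        d = np.linalg.norm(sub[:n] - tpl[:n], axis=(1, 2)).mean()
        sim = 1.0 / (1.0 + d)                 # map distance into (0, 1]
        if sim >= best_sim:
            best_name, best_sim = name, sim
    return best_name

templates = {"wave": np.random.rand(25, 20, 2),   # normalized action templates
             "push": np.random.rand(25, 20, 2)}   # (stand-ins for A_1 ... A_H)
name = match_action(np.random.rand(25, 20, 2), templates)
```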
Then, as shown in Fig. 6, the association subunit 630 can associate the mean position corresponding to each subsequence with the action name corresponding to that subsequence's matching sequence.
In this way, the action name determination subunit 640 can, for each hand, take the matching sequences of that hand's subsequences as the hand's matching sequences, and take the action names corresponding to those matching sequences (sorted in time order) as the hand's action names.
For example, suppose the subsequences of the left hand's feature point sequence to be measured are {Pl_i, i = 1, 2, ..., 25}, {Pl_i, i = 26, 27, ..., 50}, {Pl_i, i = 51, 52, ..., 75}, and {Pl_i, i = 76, 77, ..., 100}; that among the left hand's multiple preset feature point sequences the matching sequences found for the first three are Pl_1', Pl_2', and Pl_3' in turn; and that no matching sequence is found for {Pl_i, i = 76, 77, ..., 100}. Suppose further that the action names of Pl_1', Pl_2', and Pl_3' in the left-hand action name list are "wave", "push", and "pull" respectively, and that the respective mean positions of the four subsequences are pm_1, pm_2, pm_3, and pm_4. The action names and positions of the left hand thus obtained are: "wave" (position pm_1); "push" (position pm_2); "pull" (position pm_3); "empty" (position pm_4). Note that in different embodiments pm_4 may be an actual position value, or "empty", or "+∞", and so on.
Through the processing of the segmentation subunit 610, the matching sequence determination subunit 620, the association subunit 630, and the action name determination subunit 640, multiple action names corresponding to each of the user's hands (that hand's action names) can thus be obtained, with each action name associated with a mean position (the "position of the hand" therefore comprising one or more mean positions, equal in number to the action names). Compared with recognition techniques that identify only an individual action as the gesture, recognizing multiple actions and positions of each hand with the structure of Fig. 6 offers more flexible combinations, making gesture recognition more accurate on the one hand, and the recognizable gestures more varied and richer on the other.
In addition, according to one implementation, the processing of the gesture recognition unit 340 can be realized by the structure shown in Fig. 7. As shown in Fig. 7, the gesture recognition unit 340 may include a gesture table storage subunit 710 and a gesture table matching subunit 720.
As shown in Fig. 7, the gesture table storage subunit 710 can store, as the preset gesture table, a predefined mapping list from the two elements, action names and positions, to gestures: the left end of each mapping is an action name pair set together with the position of each action name pair, and the right end of each mapping is a gesture HandSignal.
Here an "action name pair set" includes multiple action name pairs, each action name pair consisting of a left-hand action name ActName_left and a right-hand action name ActName_right, and the position of each action name pair is the relative position of the two hands.
For example, the preset gesture table may contain mapping 1, from {("pull", "empty"), ("pull", "pull"), ("empty", "close"), ("empty", "empty")} (element one) and {(x_1, y_1), (x_2, y_2), (x_3, y_3), (x_4, y_4)} (the relative positions, element two) to the gesture "switch"; mapping 2, from {("pull", "pull"), ("open", "open"), ("empty", "empty"), ("empty", "empty")} and {(x_5, y_5), (x_6, y_6), (x_7, y_7), (x_8, y_8)} to the gesture "explosion"; and so on. In each action pair (for example ("pull", "empty")), the left action name corresponds to the left hand's action and the right action name to the right hand's action.
Taking mapping 1 as an example, (x_1, y_1) represents the relative position between the left hand's first action "pull" and the right hand's first action "empty" (that is, the relative position of the two hands for the action pair ("pull", "empty")); (x_2, y_2) represents the relative position between the left hand's second action "pull" and the right hand's second action "pull"; (x_3, y_3) represents the relative position between the left hand's third action "empty" and the right hand's third action "close"; and (x_4, y_4) represents the relative position between the left hand's fourth action "empty" and the right hand's fourth action "empty". The meanings in the other mappings are similar and are not repeated.
In this way, the gesture table matching subunit 720 can match the left end of each mapping in the preset gesture table against the action names and positions of the user's two hands, and take the gesture matched by the two hands' action names and positions as the recognized gesture.
Action names are matched strictly: two action names are judged matched only when they are in exact agreement. Positions are matched by computing relative position information from the mean positions of the user's two hands, and then computing the similarity between that relative position information and the positions at the mapping's left end (for example, a similarity threshold can be set, and the positions judged matched when the computed similarity is greater than or equal to that threshold).
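Under the same caveat that the patent does not fix the similarity function, the two-stage matching (strict on action names, thresholded similarity on relative positions) can be sketched like this; the worked prose example below then traces the same steps. All table values are hypothetical.

```python
import numpy as np

def match_gesture(names, left_means, right_means, gesture_table,
                  sim_thresh: float = 0.8):
    # names: per-window (left, right) action-name pairs. Positions are turned
    # into relative positions (right minus left) before comparison; the
    # inverse mean distance similarity is an assumption.
    rel = [np.asarray(r) - np.asarray(l)
           for l, r in zip(left_means, right_means)]
    for tpl_names, tpl_rel, gesture in gesture_table:
        if tuple(names) != tuple(tpl_names):
            continue                                  # strict name matching
        d = np.mean([np.linalg.norm(a - np.asarray(b))
                     for a, b in zip(rel, tpl_rel)])
        if 1.0 / (1.0 + d) >= sim_thresh:
            return gesture
    return None

# An entry in the spirit of "mapping 2" above (all values hypothetical):
table = [((("pull", "pull"), ("open", "open"),
           ("empty", "empty"), ("empty", "empty")),
          [(0.1, 0.0), (0.2, 0.0), (0.0, 0.0), (0.0, 0.0)],
          "explosion")]
g = match_gesture([("pull", "pull"), ("open", "open"),
                   ("empty", "empty"), ("empty", "empty")],
                  left_means=[(0.0, 0.0)] * 4,
                  right_means=[(0.1, 0.0), (0.2, 0.0), (0.0, 0.0), (0.0, 0.0)],
                  gesture_table=table)               # -> "explosion"
```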
For example, suppose the action names of the user's two hands obtained by action recognition unit 330 are {("drawing", "drawing"), ("opening", "opening"), ("sky", "sky"), ("sky", "sky")}, and the positions are {(x11, y12), (x21, y22), (x31, y32), (x41, y42)} (for the left hand) and {(x'11, y'12), (x'21, y'22), (x'31, y'32), (x'41, y'42)} (for the right hand).
The gesture table matching subunit 720 then matches the action names of the user's two hands against the left end of each mapping in the default gesture table.
When matching against mapping one, it can be seen that the action names of the user's two hands do not match the action names at the left end of mapping one, so mapping one is ignored and matching continues with mapping two.
When matching against mapping two, it can be seen that the action names of the user's two hands exactly match the action names at the left end of mapping two, so the positions of the user's two hands are then matched against the relative positions at the left end of mapping two.
In matching the positions of the user's two hands against the relative positions at the left end of mapping two, the relative positions of the user's two hands are first calculated as {(x'11-x11, y'12-y12), (x'21-x21, y'22-y22), (x'31-x31, y'32-y32), (x'41-x41, y'42-y42)}. These calculated relative positions are then matched against the relative positions {(x5, y5), (x6, y6), (x7, y7), (x8, y8)} at the left end of mapping two; that is, the similarity between {(x'11-x11, y'12-y12), (x'21-x21, y'22-y22), (x'31-x31, y'32-y32), (x'41-x41, y'42-y42)} and {(x5, y5), (x6, y6), (x7, y7), (x8, y8)} is calculated. Suppose the calculated similarity is 95%. In this example, if the similarity threshold is 80%, the calculated relative positions of the user's two hands are judged to match the relative positions at the left end of mapping two. Thus, in this example, the result of the human-computer interaction is "explosion".
Thus, with the gesture table matching subunit 720, the user's gesture is determined by matching the multiple actions and positions of the two hands against the predetermined gesture table, so recognition precision is higher. Moreover, when the gesture definitions need to be changed (for example, modified, added or removed) for different application scenarios, there is no need to modify the algorithm or retrain a classifier; the change can be realized simply by adjusting the gesture names or the action names corresponding to the gestures in the predetermined gesture table, which greatly improves the adaptability of the algorithm.
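Under the illustrative encoding above, for instance, redefining a gesture is a data edit rather than a code change:

```python
# Hypothetical redefinition: rename the gesture of mapping two without
# touching the matching algorithm or any trained classifier.
DEFAULT_GESTURE_TABLE[1]["gesture"] = "fireworks"
```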
According to one implementation, instruction determination unit 350 can establish a mapping table between gesture names and operational instructions as the above-mentioned preset operation instruction table. The preset operation instruction table includes multiple mappings; the left side of each mapping is the name of a preset gesture, and the right side is the operational instruction corresponding to that preset gesture (for example, the basic operation instructions for the graphical interface of a mobile computing device, such as focus movement, click, double-click, click-and-drag, zoom in, zoom out, rotate, long press, and so on). The operational instruction OptCom corresponding to the recognized gesture HandSignal can thus be obtained by a table lookup.
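As an illustration, such a preset operation instruction table reduces to a simple dictionary lookup; the gesture-to-instruction pairings shown are placeholders, not mappings specified in the patent.

```python
# Placeholder pairings; the patent lists example instructions (focus move,
# click, double-click, drag, zoom, rotate, long press) but not this mapping.
PRESET_OPERATION_INSTRUCTION_TABLE = {
    "switch":    "CLICK",
    "explosion": "ZOOM_IN",
}

def lookup_instruction(hand_signal):
    # Returns the operational instruction OptCom for the recognized
    # gesture HandSignal, or None if the gesture is not in the table.
    return PRESET_OPERATION_INSTRUCTION_TABLE.get(hand_signal)
```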
In addition, according to another implementation, signal processing component 130 can obtain the simulation figure of the user's hands based on the position of each hand of the user, and display the simulation figure, through mobile equipment interface 140, on the screen of the mobile equipment connected to that interface.
For example, signal processing component 130 can be used to: obtain, for each hand of the user, the outer contour figure of this hand, as the simulation figure of this hand, by connecting the bones according to the characteristic point sequence to be measured corresponding to this hand in every frame image of the testing image sequence (for example, 20 characteristic points per hand in every frame image) and then expanding outward; determine the display location of each hand of the user on the screen by applying translation calibration and proportional scaling to the relative position of the user's two hands; and display the simulation figure of the user's hands on the screen of the mobile equipment based on the simulation figure and display location of each hand.
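A minimal sketch of the translation calibration and proportional scaling step is given below; the calibration offset, scale factor, screen size and single-hand fallback position are assumed parameters, not values specified in the patent.

```python
DEFAULT_SINGLE_HAND_POS = (960, 540)  # assumed fallback when only one hand is tracked

def to_screen(position, offset=(0.0, 0.0), scale=1000.0,
              screen_w=1920, screen_h=1080):
    # Translation calibration: add a calibrated offset to the hand position.
    # Proportional scaling: multiply by a calibrated scale factor.
    x = (position[0] + offset[0]) * scale
    y = (position[1] + offset[1]) * scale
    # Clamp so the simulation figure of the hand stays within the screen.
    return (min(max(int(x), 0), screen_w - 1),
            min(max(int(y), 0), screen_h - 1))
```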
In this way, the helmet-type virtual reality display 120 presents the screen of the mobile equipment in the predetermined display area in virtual reality display mode, so that the user can see, in the predetermined display area, the screen content (a virtual image) that includes the simulation figure of the hands, and can therefore judge from the hand simulation figure whether the gesture is accurate, so as to continue the gesture operation or adjust the gesture.
Thus, visual feedback can be provided to the user by displaying a translucent hand figure on the screen of the mobile equipment, helping the user adjust hand position and operation. It should be noted that, when performing the processing of "applying translation calibration and proportional scaling to the relative position of the user's two hands", if the recognized gesture involves only a single hand of the user, no relative position exists (or the relative position is recorded as infinity); in that case, the corresponding single hand can be displayed at a specified initial position. In addition, when performing the processing of "displaying the simulation figure of the user's hands on the screen based on the simulation figure and display location of each hand", if the recognized gesture involves both hands, the simulation figures of both hands are displayed; if the recognized gesture involves only a single hand, only the simulation figure of that hand is displayed.
For example, in practical applications, the 3D camera assembly is mounted on the helmet-type virtual reality display with its field of view facing downward, so that the natural position in which the user raises both hands lies at the center of the field of view. The user raises both hands and performs the relevant gesture operations, whereby: 1. device operations such as menu selection can be performed on the virtual reality device; 2. in games or related software, operations such as scene navigation and the scaling, rotation and translation of objects can be performed by gestures.
Although the present invention has been described in terms of a limited number of embodiments, those skilled in the art, benefiting from the above description, will appreciate that other embodiments can be envisaged within the scope of the invention thus described. Additionally, it should be noted that the language used in this specification has been selected primarily for readability and instructional purposes, not to explain or limit the subject matter of the invention. Therefore, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. As for the scope of the present invention, the disclosure made herein is illustrative and not restrictive, and the scope of the invention is defined by the appended claims.

Claims (8)

1. A kind of virtual reality interactive device with 3D camera assemblies, characterized in that the virtual reality interactive device includes 3D camera assemblies, a helmet-type virtual reality display, a signal processing component and a mobile equipment interface; the 3D camera assemblies are connected to the signal processing component, the signal processing component is connected to the mobile equipment interface, and the mobile equipment interface is connected to the helmet-type virtual reality display;
    The 3D camera assemblies are used to capture a testing image sequence of the user's hands containing depth information, and to send the testing image sequence to the signal processing component;
    The signal processing component is used to obtain the user's gesture based on the testing image sequence and to determine the corresponding operational instruction according to the gesture, so as to perform the operational instruction on the mobile equipment connected to the mobile equipment interface;
    The helmet-type virtual reality display is used to receive the screen display signal of the mobile equipment through the mobile equipment interface and to present the screen of the mobile equipment in a predetermined display area in virtual reality display mode;
    The signal processing component includes: a contour detecting unit for detecting the user's hand contours in every frame image of the testing image sequence according to image depth information and image color information; a characteristic point sequence determination unit for determining, for each hand of the user, the characteristic point sequence to be measured of this hand in every frame image of the testing image sequence by using a preset hand structure template; an action recognition unit for determining, for each hand of the user, the matching sequences of the characteristic point sequence to be measured of this hand among multiple preset characteristic point sequences, so as to determine the action names and position of this hand according to the matching sequences; a gesture identification unit for selecting, in a default gesture table, the gesture that matches the action names and positions of the user's two hands, as the recognized gesture; an instruction determination unit for determining, according to a preset operation instruction table, the operational instruction corresponding to the recognized gesture; and an execution unit for performing, on the equipment related to the determined operational instruction, the operation corresponding to the operational instruction;
    The gesture identification unit includes: a gesture table storage subunit for storing the following mapping list as the default gesture table: the left end of each mapping in the mapping list is a set of action name pairs and the position of each action name pair, and the right end of each mapping in the mapping list is a gesture; and a gesture table matching subunit for matching the left end of each mapping in the default gesture table against the action names and positions of the user's two hands, wherein action names are matched strictly, while positions are matched by calculating relative position information from the respective mean positions of the user's two hands and then calculating the similarity between the relative position information and the positions at the left end of the mapping.
2. The virtual reality interactive device with 3D camera assemblies according to claim 1, characterized in that the helmet-type virtual reality display includes:
    a wearing portion wearable on the user's head;
    an acquisition imaging portion provided on the wearing portion and connected to the mobile equipment interface, so as to acquire the screen display signal of the mobile equipment and present the screen in the predetermined display area in virtual reality display mode.
3. The virtual reality interactive device with 3D camera assemblies according to claim 2, characterized in that the acquisition imaging portion includes a display screen and two lens groups, the display screen is of transparent material, and the two lens groups are configured such that, when the virtual reality interactive device is worn on the head by the user, the two lens groups are located respectively in front of the user's corresponding lines of sight.
4. The virtual reality interactive device with 3D camera assemblies according to any one of claims 1-3, characterized in that the characteristic point sequence determination unit includes:
    a template storage subunit for storing the preset hand structure template;
    a template matching subunit for determining, for each hand of the user, a predetermined number of characteristic points of this hand in the hand contour of every frame image of the testing image sequence by using the preset hand structure template;
    a sequence generation subunit for obtaining, for each hand of the user, the characteristic point sequence to be measured of this hand by using the predetermined number of characteristic points corresponding to this hand in each frame image of the testing image sequence.
5. The virtual reality interactive device with 3D camera assemblies according to claim 4, characterized in that the template matching subunit includes:
    a positioning reference determination module for finding, for every frame image of the testing image sequence, the fingertip points and finger-root joint points in the contour line according to the curvature of the contour line in the image, the fingertip points serving as positioning references;
    a scaling reference determination module for matching, for every frame image processed by the positioning reference determination module, the finger-root joint points of each individual finger based on the positioning references found in that frame image, so as to obtain the length of each individual finger as the reference for scaling;
    a scaling and deformation module for scaling and deforming the corresponding hand structure template, for every frame image processed by the scaling reference determination module, based on the positions of the found fingertip points and finger-root joint points and on the length of each individual finger, so as to obtain each knuckle characteristic point and the wrist midpoint characteristic point of each hand by matching;
    wherein the hand structure template stored by the template storage subunit includes a left-hand structure template and a right-hand structure template, each of which includes the fingertip characteristic point of each finger, each knuckle characteristic point, each finger-root joint characteristic point, the wrist midpoint characteristic point, and the topological relations among the characteristic points.
6. The virtual reality interactive device with 3D camera assemblies according to any one of claims 1-3, characterized in that the action recognition unit includes:
    a division subunit for dividing, for the characteristic point sequence to be measured of each hand, the characteristic point sequence to be measured into multiple subsequences according to a predetermined time window, and obtaining the mean position corresponding to each subsequence;
    a matching sequence determination subunit for matching, for each subsequence corresponding to each hand, the subsequence against each of the multiple preset characteristic point sequences, and selecting, among the multiple preset characteristic point sequences, the preset characteristic point sequence whose matching degree with the subsequence is both higher than a preset matching threshold and the largest, as the matching sequence of the subsequence;
    an association subunit for associating the mean position corresponding to each subsequence with the action name corresponding to the matching sequence of the subsequence;
    an action name determination subunit for taking, for each hand, the matching sequences of the subsequences corresponding to this hand as the multiple matching sequences corresponding to this hand, and taking the action names corresponding to the multiple matching sequences as the multiple action names of this hand.
7. The virtual reality interactive device with 3D camera assemblies according to any one of claims 1-3, characterized in that the signal processing component is further used to:
    obtain the simulation figure of the user's hands based on the position of each hand of the user, so as to display the simulation figure on the screen of the mobile equipment through the mobile equipment interface.
8. The virtual reality interactive device with 3D camera assemblies according to claim 7, characterized in that the signal processing component is used to: obtain the outer contour figure of each hand of the user, as the simulation figure of this hand, by connecting bones according to the characteristic point sequence to be measured corresponding to this hand and then expanding outward; determine the display location of each hand of the user on the screen by applying translation calibration and proportional scaling to the relative position of the user's two hands; and display the simulation figure of the user's hands on the screen of the mobile equipment based on the simulation figure and display location of each hand of the user.
CN201510563539.2A 2015-09-07 2015-09-07 A kind of virtual reality interactive device with 3D camera assemblies Expired - Fee Related CN105302295B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510563539.2A CN105302295B (en) 2015-09-07 2015-09-07 A kind of virtual reality interactive device with 3D camera assemblies

Publications (2)

Publication Number Publication Date
CN105302295A CN105302295A (en) 2016-02-03
CN105302295B true CN105302295B (en) 2018-06-26

Family

ID=55199647

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510563539.2A Expired - Fee Related CN105302295B (en) 2015-09-07 2015-09-07 A kind of virtual reality interactive device with 3D camera assemblies

Country Status (1)

Country Link
CN (1) CN105302295B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106293395A (en) * 2016-08-03 2017-01-04 深圳市金立通信设备有限公司 A kind of virtual reality glasses and interface alternation method thereof
CN106325509A (en) * 2016-08-19 2017-01-11 北京暴风魔镜科技有限公司 Three-dimensional gesture recognition method and system
CN106293099A (en) * 2016-08-19 2017-01-04 北京暴风魔镜科技有限公司 Gesture identification method and system
CN106527696A (en) * 2016-10-31 2017-03-22 宇龙计算机通信科技(深圳)有限公司 Method for implementing virtual operation and wearable device
CN107479715A (en) * 2017-09-29 2017-12-15 广州云友网络科技有限公司 The method and apparatus that virtual reality interaction is realized using gesture control
CN108401452B (en) * 2018-02-23 2021-05-07 香港应用科技研究院有限公司 Apparatus and method for performing real target detection and control using virtual reality head mounted display system
CN109460150A (en) * 2018-11-12 2019-03-12 北京特种机械研究所 A kind of virtual reality human-computer interaction system and method
CN116360603A (en) * 2023-05-29 2023-06-30 中数元宇数字科技(上海)有限公司 Interaction method, device, medium and program product based on time sequence signal matching

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102629155A (en) * 2011-11-08 2012-08-08 北京新岸线网络技术有限公司 Method and device for implementing non-contact operation
CN103777748A (en) * 2012-10-26 2014-05-07 华为技术有限公司 Motion sensing input method and device
US9024842B1 (en) * 2011-07-08 2015-05-05 Google Inc. Hand gestures to signify what is important
CN104598915A (en) * 2014-01-24 2015-05-06 深圳奥比中光科技有限公司 Gesture recognition method and gesture recognition device
CN104750397A (en) * 2015-04-09 2015-07-01 重庆邮电大学 Somatosensory-based natural interaction method for virtual mine
CN205080498U (en) * 2015-09-07 2016-03-09 哈尔滨市一舍科技有限公司 Mutual equipment of virtual reality with 3D subassembly of making a video recording

Similar Documents

Publication Publication Date Title
CN105045398B (en) A kind of virtual reality interactive device based on gesture identification
CN105302295B (en) A kind of virtual reality interactive device with 3D camera assemblies
CN105045399B (en) A kind of electronic equipment with 3D camera assemblies
CN105302294B (en) A kind of interactive virtual reality apparatus for demonstrating
KR102097190B1 (en) Method for analyzing and displaying a realtime exercise motion using a smart mirror and smart mirror for the same
CN113238650B (en) Gesture recognition and control method and device and virtual reality equipment
CN105487673B (en) A kind of man-machine interactive system, method and device
CN105068662B (en) A kind of electronic equipment for man-machine interaction
CN105160323B (en) A kind of gesture identification method
KR101844390B1 (en) Systems and techniques for user interface control
US20150084859A1 (en) System and Method for Recognition and Response to Gesture Based Input
CN102915111A (en) Wrist gesture control system and method
JP6165485B2 (en) AR gesture user interface system for mobile terminals
CN205080499U (en) Mutual equipment of virtual reality based on gesture recognition
CN105046249B (en) A kind of man-machine interaction method
CN105980965A (en) Systems, devices, and methods for touch-free typing
CN105068646B (en) The control method and system of terminal
CN107357428A (en) Man-machine interaction method and device based on gesture identification, system
KR20170086024A (en) Information processing device, information processing method, and program
CN104793731A (en) Information input method for wearable device and wearable device
CN105069444B (en) A kind of gesture identifying device
CN108829239A (en) Control method, device and the terminal of terminal
Aditya et al. Recent trends in HCI: A survey on data glove, LEAP motion and microsoft kinect
KR20110097504A (en) User motion perception method and apparatus
US20240249640A1 (en) Virtual tutorials for musical instruments with finger tracking in augmented reality

Legal Events

C06, PB01: Publication
C10, SE01: Entry into force of request for substantive examination
CB02: Change of applicant information
    Address after: 150016 Heilongjiang Province, Harbin Economic Development Zone, Haping Road District, junction of Dalian Road and Xingkai Road
    Applicant after: HARBIN YISHE TECHNOLOGY Co.,Ltd.
    Address before: 150016 Heilongjiang Province, Harbin, Daoli District, Anjing Street, No. 54, Unit 2, Floor 4, No. 3
    Applicant before: HARBIN YISHE TECHNOLOGY Co.,Ltd.
GR01: Patent grant
CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 20180626)