CN105425945A - Unlocking processing method and system, unlocking control system and display equipment - Google Patents

Unlocking processing method and system, unlocking control system and display equipment

Info

Publication number
CN105425945A
Authority
CN
China
Prior art keywords: interactive system, lock, state, gesture action
Application number
CN201510733867.2A
Other languages
Chinese (zh)
Inventor
黄源浩
肖振中
钟亮洪
许宏淮
Original Assignee
深圳奥比中光科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 深圳奥比中光科技有限公司
Priority to CN201510733867.2A
Publication of CN105425945A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures

Abstract

The invention relates to an unlocking processing method comprising the following steps: when an interactive system is in a locked state, detecting a gesture action made by a user; judging whether the detected gesture action corresponds to a preset unlocking action; if it does, converting the interactive system to an unlocked state; otherwise, keeping the interactive system in the locked state. The method allows unlocking in a convenient and quick way. The invention further provides an unlocking processing system, an unlocking control system, and a display device in the unlocking control system.

Description

Unlocking processing method and system, unlocking control system and display equipment

Technical field

The present invention relates to the field of computer technology, and in particular to an unlocking processing method and system, an unlocking control system, and display equipment in the unlocking control system.

Background art

Systems and devices that use a liquid crystal display or the like usually need to be unlocked with the help of auxiliary devices such as a mouse and keyboard. If there is some distance between the mouse and keyboard and the user, the user has to stand up to operate them, which is inconvenient. How to unlock in a convenient and quick way is therefore a technical problem that currently needs to be solved.

Summary of the invention

In view of this, it is necessary to provide, for the above technical problem, an unlocking processing method, an unlocking processing system, an unlocking control system, and display equipment in the unlocking control system that allow unlocking in a convenient and quick way.

An unlocking processing method, the method comprising:

when an interactive system is in a locked state, detecting a gesture action made by a user;

judging whether the detected gesture action corresponds to a predetermined unlocking action;

if so, converting the interactive system to an unlocked state;

otherwise, keeping the interactive system in the locked state.

In one embodiment, the step of detecting the gesture action made by the user comprises:

acquiring a depth image and/or a color image with a camera device;

recognizing the gesture action made by the user from the depth image and/or the color image.

In one embodiment, the predetermined unlocking action comprises raising a hand toward the camera device.

In one embodiment, the method further comprises:

when the interactive system is in the locked state, displaying prompt information corresponding to the predetermined unlocking action.

In one embodiment, the method further comprises:

when the interactive system is in the locked state, displaying a video window;

acquiring video information of the gesture action the user makes according to the prompt information;

displaying the video information in the video window.

In one embodiment, the method further comprises:

if a predetermined locking action is detected within a preset time, converting the interactive system to the locked state; or, if no hand is detected within the preset time, converting the interactive system to the locked state.

An unlocking processing system, the system comprising:

a detection module, configured to detect the gesture action made by the user when the interactive system is in the locked state;

a judging module, configured to judge whether the detected gesture action corresponds to a predetermined unlocking action;

an unlocking module, configured to convert the interactive system to the unlocked state if the detected gesture action corresponds to the predetermined unlocking action;

a locking module, configured to keep the interactive system in the locked state if the detected gesture action does not correspond to the predetermined unlocking action.

In one embodiment, the detection module comprises:

an acquisition module, configured to acquire a depth image and/or a color image with the camera device;

a recognition module, configured to recognize the gesture action made by the user from the depth image and/or the color image.

In one embodiment, the predetermined unlocking action comprises raising the hand toward the camera device.

In one embodiment, the system further comprises:

a display module, configured to display prompt information corresponding to the predetermined unlocking action when the interactive system is in the locked state.

In one embodiment, the display module is further configured to display a video window when the interactive system is in the locked state; the acquisition module is further configured to acquire video information of the gesture action the user makes according to the prompt information; and the display module is further configured to display the video information in the video window.

In one embodiment, the locking module is further configured to convert the interactive system to the locked state if a predetermined locking action is detected within a preset time, or if no hand is detected within the preset time.

An unlocking control system, comprising:

a display device, configured to show whether the interactive system is in the locked state or the unlocked state;

a data processing device, configured to detect, when the interactive system is in the locked state, the gesture action the user makes toward the interactive system; to convert the interactive system to the unlocked state if the detected gesture action corresponds to the predetermined unlocking action; and to keep the interactive system in the locked state if it does not.

In one embodiment, the unlocking control system further comprises:

a camera device, configured to acquire a depth image and/or a color image of the user;

the data processing device is further configured to recognize the gesture action made by the user from the depth image and/or the color image.

In one embodiment, the predetermined unlocking action comprises raising the hand toward the camera device.

In one embodiment, when the interactive system is in the locked state, the display device is further configured to display prompt information corresponding to the predetermined unlocking action.

In one embodiment, when the interactive system is in the locked state, the display device is further configured to display a video window and to show, in the video window, video information of the gesture action the user makes according to the prompt information.

In one embodiment, the data processing device is further configured to convert the interactive system to the locked state if a predetermined locking action is detected within a preset time, or if no hand is detected within the preset time.

Display equipment in an unlocking control system, the display equipment comprising:

a display device, configured to show whether the interactive system is in the locked state or the unlocked state;

a control device, configured to respond, when the interactive system is in the locked state, to the detected gesture action the user makes toward the display equipment;

the display device is further configured to show that the interactive system is in the unlocked state if the detected gesture action corresponds to the predetermined unlocking action, and to show that the interactive system remains in the locked state if it does not.

In one embodiment, the predetermined unlocking action comprises raising the hand toward the camera device.

In one embodiment, when the interactive system is in the locked state, the display device is further configured to display prompt information corresponding to the predetermined unlocking action.

In one embodiment, when the interactive system is in the locked state, the display device is further configured to display a video window and to show, in the video window, video information of the gesture action the user makes according to the prompt information.

In one embodiment, the control device is further configured to respond to a predetermined locking action detected within a preset time, or to no hand being detected within the preset time, and the display device is further configured to show that the interactive system is in the locked state.

With the above unlocking processing method and system, unlocking control system, and display equipment in the unlocking control system, the gesture action the user makes toward the interactive system is detected; if it corresponds to the predetermined unlocking action, the interactive system can be converted to the unlocked state without delay, which is convenient for the user and allows the interactive system to be unlocked quickly and easily.

Brief description of the drawings

Fig. 1 is a schematic diagram of an interactive system using the unlocking processing method in one embodiment;

Fig. 2 is a flowchart of the unlocking processing method of the interactive system in one embodiment;

Fig. 3 is a schematic diagram of the interactive system in the locked state in one embodiment;

Fig. 4-1 is a schematic diagram of a hand hanging naturally in one embodiment;

Fig. 4-2 is a schematic diagram of the hand being raised in one embodiment;

Fig. 4-3 is a schematic diagram of the palm facing the camera device after the hand is raised in one embodiment;

Fig. 5 is a schematic diagram of the user interface shown after the interactive system is converted to the unlocked state in one embodiment;

Fig. 6-1 is a schematic diagram of an animation prompting the hand to hang naturally in one embodiment;

Fig. 6-2 is a schematic diagram of an animation prompting the hand to be raised in one embodiment;

Fig. 6-3 is a schematic diagram of an animation prompting the palm to face the camera device after the hand is raised in one embodiment;

Fig. 7 is a schematic diagram of displaying video information in a video window in one embodiment;

Fig. 8 is a structural diagram of the unlocking processing system in one embodiment;

Fig. 9 is a structural diagram of the detection module in one embodiment;

Fig. 10 is a structural diagram of the unlocking processing system in another embodiment;

Fig. 11 is a structural diagram of the unlocking control system in one embodiment;

Fig. 12 is a structural diagram of the unlocking control system in another embodiment;

Fig. 13 is a structural diagram of the display equipment in the unlocking control system in one embodiment.

Detailed description of the embodiments

The unlocking processing method provided in the embodiments of the present invention can be applied to the interactive system shown in Fig. 1. The interactive system comprises a display screen, a camera device, a processor, a storage medium, a memory, and an I/O (input/output) subsystem. The processor, storage medium, and memory are connected by a system bus, and the display screen and camera device are each connected to the processor through an I/O interface. The storage medium stores an operating system and a data processing device; the operating system may be Android, Linux, Windows, or the like. The processor provides computing and control capability and supports the operation of the whole interactive system. The memory provides an environment for running the data processing device stored in the storage medium. The camera device acquires depth images and/or color images of the scene. The processor computes on the depth images and/or color images and recognizes the gesture action the user makes accordingly. If the user's gesture action corresponds to the predetermined unlocking action, the interactive system is converted to the unlocked state, thereby unlocking the interactive system conveniently and quickly.

In one embodiment, as shown in Fig. 2, an unlocking processing method is provided. The method is applied to the interactive system in Fig. 1 and specifically comprises:

Step 202: when the interactive system is in the locked state, detect the gesture action made by the user.

The interactive system can display a user interface through which the operation screen of the interactive system is shown. When the interactive system is in the locked state, the user interface can still display the operation screen normally, but the interactive system cannot be operated through the predetermined operation gestures. Fig. 3 is a schematic diagram of the interactive system in the locked state. When the interactive system is in the locked state, the gesture action the user makes toward the camera device in the interactive system can be detected. Specifically, the camera device can be used to acquire depth images and/or color images of the user in the scene, and the gesture action made by the user is recognized from the depth images and/or color images. Detection of the user's gesture action can be performed in real time.

Step 204: judge whether the detected gesture action corresponds to the predetermined unlocking action; if so, go to step 206; otherwise, go to step 208.

There can be multiple predetermined unlocking actions, including waving the hand, raising the hand, lowering the hand, and so on. For example: waving the hand from lower left to upper right, waving the hand from lower right to upper left, raising the hand, or raising and then lowering the hand. While the hand is raised and/or after it is raised, the palm can be open or clenched.

In one embodiment, the predetermined unlocking action comprises raising the hand toward the camera device. Specifically, the palm may be open while the hand is raised and face the camera device after the hand is raised; or the palm may open after the hand is raised and then face the camera device. This predetermined unlocking action can effectively prevent accidental unlocking. Figs. 4-1 to 4-3 are schematic diagrams of raising the hand with the palm open: Fig. 4-1 shows the hand hanging naturally, Fig. 4-2 shows the hand raised, and Fig. 4-3 shows the palm facing the camera device after the hand is raised.

The predetermined unlocking action can be a continuous action completed by one hand or by both hands. A two-handed continuous action can be completed by both hands simultaneously or by each hand in turn, and the two hands can perform the same continuous action or different ones.
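As an illustration of how such a continuous action might be detected, the sketch below classifies a "hand raised" action from a per-frame sequence of palm positions. The patent does not prescribe an algorithm; the row-coordinate input and the `min_rise` threshold are assumptions made for the sake of the example.

```python
# Hypothetical sketch: recognize a "hand raised" continuous action from the
# palm's image-row coordinate over successive frames (rows decrease upward).
# The input format and the min_rise threshold are illustrative assumptions.

def is_lift(palm_rows, min_rise=100):
    """Return True if the palm rose by at least min_rise pixels over the clip."""
    if len(palm_rows) < 2:
        return False                      # need at least two frames
    return palm_rows[0] - palm_rows[-1] >= min_rise

assert is_lift([400, 350, 300, 250]) is True   # hand moved steadily upward
assert is_lift([400, 398, 401]) is False       # hand roughly stationary
```

A two-handed variant could apply the same check to each hand's track, either simultaneously or in turn.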

Step 206: convert the interactive system to the unlocked state.

The detected gesture action is compared with the predetermined unlocking action. If they correspond, an unlock command is considered received and the interactive system is converted to the unlocked state, so the user interface is converted to the unlocked state as well. After that, the user can operate the user interface. Fig. 5 shows the user interface displayed after the interactive system is converted to the unlocked state.

Step 208: keep the interactive system in the locked state.

The detected gesture action is compared with the predetermined unlocking action. If they do not correspond, no unlock command is considered received and the interactive system remains in the locked state.

In this embodiment, when the interactive system is in the locked state, it is judged whether the detected gesture action corresponds to the predetermined unlocking action; if so, the interactive system is converted to the unlocked state; otherwise, it remains in the locked state. By detecting the gesture action made by the user, the interactive system can be converted to the unlocked state without delay whenever the gesture action corresponds to the predetermined unlocking action, which is convenient for the user and allows the interactive system to be unlocked quickly and easily.
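Steps 202 through 208 amount to a small state machine. A minimal sketch follows; the state names and the gesture label are assumptions chosen for illustration, not the patent's terminology.

```python
# Minimal sketch of steps 202-208 as a state machine. The state names and the
# gesture label "raise_palm_to_camera" are illustrative assumptions.

LOCKED, UNLOCKED = "locked", "unlocked"

class InteractiveSystem:
    def __init__(self, unlock_action="raise_palm_to_camera"):
        self.state = LOCKED
        self.unlock_action = unlock_action

    def handle_gesture(self, gesture):
        """Step 204: judge the gesture; step 206 or 208: switch or keep state."""
        if self.state == LOCKED and gesture == self.unlock_action:
            self.state = UNLOCKED      # step 206: convert to the unlocked state
        return self.state              # step 208: otherwise remain as-is

system = InteractiveSystem()
assert system.handle_gesture("wave") == LOCKED          # not the unlock action
assert system.handle_gesture("raise_palm_to_camera") == UNLOCKED
```

In a real system the gesture label would come from the recognition pipeline rather than a string literal, but the unlock decision reduces to this comparison.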

In one embodiment, the step of detecting the gesture action made by the user comprises: acquiring a depth image and/or a color image with the camera device; and recognizing the gesture action made by the user from the depth image and/or the color image.
In this embodiment, there can be one camera device or multiple, and the camera device can be a monocular or binocular camera. The camera device can first be used to acquire depth images and/or color images of the user in the scene in order to recognize the gesture action made by the user.

The camera device can first be used to acquire a depth image of the user in the scene. A depth image, also called a range image, is an image or image channel whose information relates to the distance from the viewpoint to object surfaces in the scene. In a depth image, the gray value of each pixel corresponds to the depth value of the corresponding point in the scene. The three-dimensional spatial information of the user's palm is computed from the depth image, and the gesture action made by the user is recognized from this three-dimensional information. Further, the camera device can acquire depth images and/or color images of the user in the scene in real time, so that the user's gesture action is recognized in real time. The user's gesture action can thus be recognized quickly.

Alternatively, the camera device can first be used to acquire a color image of the user in the scene, and the contour of the user's palm is computed from the color image. Specifically, a pattern recognition algorithm can be used to match the acquired color image against palm features; if the match succeeds, the color image contains a palm contour. The gesture action made by the user is then recognized from the movement of the contour. The user's gesture action can thus be recognized quickly.

The camera device can also be used to acquire both a depth image and a color image of the user in the scene; the contour of the user's palm is computed from the color image, the three-dimensional spatial information of the palm is computed from the depth image, and the gesture action made by the user is recognized from both, which further improves the accuracy of gesture recognition.
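As a concrete, purely illustrative example of extracting palm information from a depth image, the sketch below segments an assumed near-range depth band and returns its centroid as simple "3-D spatial information". The depth thresholds and the plain thresholding approach are assumptions, not the patent's method.

```python
import numpy as np

# Purely illustrative: segment a hand from a depth image by keeping pixels in
# an assumed near-range depth band (millimetres) and return the band's
# centroid. The near/far thresholds are assumptions for this example.

def hand_centroid(depth, near=400, far=800):
    """Return (row, col, mean_depth) of the near-band pixels, or None."""
    mask = (depth > near) & (depth < far)
    if not mask.any():
        return None                       # no hand found in the depth band
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean(), depth[mask].mean()

# Synthetic 8x8 depth map: background at 2000 mm, a "hand" patch at 600 mm.
depth = np.full((8, 8), 2000)
depth[2:5, 3:6] = 600
assert hand_centroid(depth) == (3.0, 4.0, 600.0)
```

Tracking such a centroid across frames would give the movement data a gesture recognizer needs; a production pipeline would also use the palm contour from the color image, as the paragraph above describes.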

In one embodiment, after the step of acquiring depth images with the camera devices, the method further comprises: computing a weighted average of multiple depth images to obtain a final depth image; and recognizing, from the final depth image, the gesture action the user makes toward the interactive system.

In this embodiment, to further improve the accuracy of the computed three-dimensional spatial information of the user's palm, multiple camera devices can be used, for example three or more. The multiple camera devices acquire multiple depth images; a weighted average of the depth images yields the final depth image; and the gesture action the user makes toward the interactive system is recognized from the final depth image. Before acquiring the depth images, the multiple camera devices need to be calibrated to obtain their positional relationships to one another; usually the imaging planes of the camera devices all lie in one plane. The parameters describing these positional relationships can be used to compute the three-dimensional spatial information of the user's palm. When shooting, the camera devices should capture depth images of the same objects (including the user) in the scene, and they trigger simultaneously to ensure the same objects are captured in the same state at the same moment. All acquired depth images are then weighted and averaged to obtain the final depth image. In particular, the corresponding pixels of each depth image are weighted and averaged, and the weighted mean is taken as the depth of the corresponding point in the final depth image; the three-dimensional spatial information of the user's palm is then computed from the final depth image and used to recognize the gesture action the user makes toward the interactive system. This improves the accuracy of gesture recognition.
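The pixel-wise weighted averaging described above can be sketched as follows. The weights (e.g. per-camera confidence after calibration) are assumptions, and the depth images are assumed to be already aligned to a common viewpoint:

```python
import numpy as np

# Sketch of the weighted-average fusion described above: corresponding pixels
# of several calibrated, aligned depth images are averaged to produce the
# final depth image. The choice of weights is an assumption.

def fuse_depth(depth_maps, weights):
    """Pixel-wise weighted mean of N aligned depth maps (shape: N x H x W)."""
    maps = np.asarray(depth_maps, dtype=float)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (maps * w).sum(axis=0) / w.sum()

d1 = np.array([[500.0, 520.0]])
d2 = np.array([[510.0, 530.0]])
fused = fuse_depth([d1, d2], weights=[1.0, 1.0])   # equal weights -> plain mean
assert fused.tolist() == [[505.0, 525.0]]
```

Averaging corresponding pixels this way suppresses per-camera depth noise, which is why the fused map yields more accurate palm coordinates than any single camera's output.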

In one embodiment, the method further comprises: when the interactive system is in the locked state, displaying prompt information corresponding to the predetermined unlocking action.

In this embodiment, the prompt information includes text, pictures, animation, video, and so on. The prompt information can be displayed on the screen of the interactive system, at its original size or zoomed in or out. It can be displayed at one place on the screen or in full screen; when displayed at one place, it can be shown in the middle of the screen, at its edge, or in a corner. Figs. 6-1 to 6-3 are schematic diagrams of an animation used as the prompt information, located in the lower right corner of the screen: Fig. 6-1 shows the animation prompting the hand to hang naturally, Fig. 6-2 shows it prompting the hand to be raised, and Fig. 6-3 shows it prompting the palm to face the camera device after the hand is raised. Because the prompt information corresponding to the predetermined unlocking action is displayed, the user has a reference for making the corresponding gesture action, which makes unlocking the interactive system convenient.

In one embodiment, the method further comprises: when the interactive system is in the locked state, displaying a video window; acquiring the gesture action the user makes according to the prompt information; and displaying the gesture action in the video window.

In this embodiment, when the interactive system is in the locked state, the prompt information and a video window can be displayed on the screen of the interactive system. The user can make the gesture action toward the camera device according to the prompt information; the camera device captures video information of the gesture action, and the video information is displayed in the video window, as shown in Fig. 7. Through the video window the user can conveniently see whether the gesture action they make is consistent with the prompt information, which makes unlocking convenient.

In one embodiment, the method further comprises: if a predetermined locking action is detected within a preset time, converting the interactive system to the locked state; or, if no hand is detected within the preset time, converting the interactive system to the locked state.

In this embodiment, the interactive system can be converted back to the locked state. If a predetermined locking action is detected within the preset time, the interactive system is converted to the locked state. The predetermined locking action includes lowering the hand facing the camera device or moving it away, and can be a continuous action completed by one hand or by both hands. The detected gesture action is compared with the predetermined locking action; if they correspond, a lock command is considered received and the interactive system is converted to the locked state. This makes it convenient to lock the interactive system quickly.

If no hand is detected within the preset time, a lock command is generated and the interactive system is converted to the locked state. Further, detecting no hand can include detecting no person; if no person is detected, the interactive system is converted to the locked state. This also makes it convenient to lock the interactive system quickly.
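The two re-locking conditions (a predetermined locking action, or no hand within a preset time) can be sketched together. The timeout value and the action label below are illustrative assumptions:

```python
import time

# Sketch of the two re-locking rules: a predetermined locking action, or no
# hand detected within a preset time. The timeout value and the action label
# "lower_hand" are illustrative assumptions.

class AutoLock:
    def __init__(self, timeout=5.0, lock_action="lower_hand"):
        self.timeout = timeout
        self.lock_action = lock_action
        self.last_hand_seen = time.monotonic()
        self.locked = False

    def update(self, gesture=None, hand_visible=False, now=None):
        """Feed one detection result; return whether the system is now locked."""
        now = time.monotonic() if now is None else now
        if hand_visible:
            self.last_hand_seen = now
        if gesture == self.lock_action:
            self.locked = True        # predetermined locking action detected
        elif now - self.last_hand_seen > self.timeout:
            self.locked = True        # no hand within the preset time
        return self.locked

lock = AutoLock(timeout=5.0)
assert lock.update(hand_visible=True, now=0.0) is False
assert lock.update(hand_visible=False, now=6.0) is True   # hand timed out
```

Extending the "no hand" rule to "no person", as the paragraph above suggests, would only change what the detector feeds into `hand_visible`.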

As shown in Fig. 8, in one embodiment an unlocking processing system is provided, comprising a detection module 802, a judging module 804, an unlocking module 806, and a locking module 808, wherein:

the detection module 802 is configured to detect the gesture action made by the user when the interactive system is in the locked state;

the judging module 804 is configured to judge whether the detected gesture action corresponds to the predetermined unlocking action;

the unlocking module 806 is configured to convert the interactive system to the unlocked state if the detected gesture action corresponds to the predetermined unlocking action;

the locking module 808 is configured to keep the interactive system in the locked state if the detected gesture action does not correspond to the predetermined unlocking action.

In one embodiment, as shown in Fig. 9, the detection module 802 comprises an acquisition module 802a and a recognition module 802b, wherein:

the acquisition module 802a is configured to acquire depth images with the camera device;

the recognition module 802b is configured to recognize the gesture action made by the user from the depth images.

In one embodiment, the predetermined unlocking action comprises raising the hand toward the camera device.

In one embodiment, as shown in Fig. 10, the system further comprises a display module 810, configured to display the prompt information corresponding to the predetermined unlocking action when the interactive system is in the locked state.

In one embodiment, the display module 810 is further configured to display a video window when the interactive system is in the locked state; the acquisition module 802a is further configured to acquire video information of the gesture action the user makes according to the prompt information; and the display module 810 is further configured to display the video information in the video window.

In one embodiment, the locking module 808 is further configured to convert the interactive system to the locked state if a predetermined locking action is detected within a preset time, or if no hand is detected within the preset time.

Since the principle by which this unlocking processing system solves the problem is similar to that of the aforementioned unlocking processing method, the implementation of the system can refer to the implementation of the method, and repeated parts are not described again.

In one embodiment, as shown in Fig. 11, an unlocking control system is provided, comprising a display device 1102 and a data processing device 1104, wherein:

the display device 1102 is configured to show whether the interactive system is in the locked state or the unlocked state; and the data processing device 1104 is configured to detect, when the interactive system is in the locked state, the gesture action the user makes toward the interactive system, to convert the interactive system to the unlocked state if the detected gesture action corresponds to the predetermined unlocking action, and to keep the interactive system in the locked state if it does not.

In this embodiment, the display device includes an LCD (liquid crystal display) screen, an LED (light emitting diode) screen, a 3D (three-dimensional) projection screen, and the like. The display device displays the user interface, through which the operation screen of the interactive system can be shown. The states of the interactive system include the locked state and the unlocked state. When the interactive system is in the locked state, the user interface can still display the operation screen normally, but the interactive system cannot be operated through the predetermined operation gestures. When the interactive system is in the locked state, the data processing device is configured to detect the gesture action the user makes toward the camera device in the interactive system. Specifically, the camera device can first acquire depth images and/or color images of the user in the scene, and the gesture action made by the user is recognized from them. Further, the camera device can acquire depth images and/or color images of the user in the scene in real time, so that the gesture action made by the user is recognized in real time.

There can be multiple predetermined unlocking actions, including waving the hand, raising the hand, lowering the hand, and so on, completed as a continuous action by one hand or by both hands. If the detected gesture action corresponds to the predetermined unlocking action, the unlocking device is considered to have received an unlock command and converts the interactive system to the unlocked state. If it does not, the unlocking device has not received an unlock command and does not unlock, and the locking device keeps the interactive system in the locked state.

In one embodiment, the predetermined unlocking action comprises raising the hand toward the camera device. Specifically, the palm may be open while the hand is raised and face the camera device after the hand is raised; or the palm may open after the hand is raised and then face the camera device. This predetermined unlocking action can effectively prevent accidental unlocking.

By detecting the gesture action the user makes toward the camera device in the interactive system, the interactive system can be converted to the unlocked state without delay whenever the gesture action corresponds to the predetermined unlocking action, which is convenient for the user and allows the interactive system to be unlocked quickly and easily.

In one embodiment, as shown in Figure 12, the unlocking control system further comprises: a camera device 1106 for capturing a depth image and/or a color image of the user; the data processing device 1104 is further configured to recognize, from the depth image and/or color image, the gesture the user makes toward the interactive system.

In the present embodiment, there may be one camera device or several, and each may be a monocular or binocular camera. The camera device first captures a depth image and/or a color image of the user in the scene, from which the gesture the user makes is recognized. For example, the camera device may first capture a depth image of the user in the scene; the three-dimensional spatial information of the user's palm is computed from the depth image, and the gesture is recognized from that spatial information. Alternatively, the camera device may first capture a color image of the user in the scene; the contour of the user's palm is computed from the color image, and the gesture is recognized from the movement of the contour. The user's gesture can thus be recognized quickly.
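
As a rough illustration of the depth-image branch above, the following sketch thresholds a depth image to isolate the nearest valid pixels as the hand and returns the centroid as simple three-dimensional spatial information for the palm. The patent does not specify an algorithm; the threshold value and the centroid heuristic are assumptions made for the example:

```python
import numpy as np

def palm_position(depth, hand_max_depth=1.0):
    """Estimate a crude 3D palm position from a depth image (in meters):
    treat valid pixels nearer than `hand_max_depth` as the hand and
    return the centroid (x, y, mean depth). The 1.0 m threshold is an
    illustrative assumption, not a value from the patent."""
    mask = (depth > 0) & (depth < hand_max_depth)
    if not mask.any():
        return None  # no hand in range
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean(), depth[mask].mean())

# A 4x4 scene at 2.0 m with a 2x2 "hand" patch at 0.5 m:
scene = np.full((4, 4), 2.0)
scene[1:3, 1:3] = 0.5
assert palm_position(scene) == (1.5, 1.5, 0.5)
```

Tracking this position over successive frames would give the palm trajectory from which a gesture such as "hand raised toward the camera" could be recognized.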

Further, the camera device may capture both the depth image and the color image of the user in the scene. The contour of the user's palm is computed from the color image, the three-dimensional spatial information of the palm is computed from the depth image, and the gesture is recognized from the contour together with the spatial information, which further improves the accuracy of gesture recognition.

In one embodiment, the data processing device 1104 is further configured to compute a weighted average of multiple depth images to obtain a final depth image, and to recognize, from the final depth image, the gesture the user makes toward the interactive system.

In the present embodiment, to further improve the accuracy of the palm's three-dimensional spatial information, multiple camera devices may be used, for example three or more. Before the camera devices capture depth images, they must be calibrated to obtain their positional relationships to one another. During capture, the camera devices acquire depth images of the same object (including the user) in the scene, and they trigger simultaneously to ensure that all images record the state of the same object at the same moment. The data processing device computes a weighted average of all the captured depth images to obtain the final depth image, computes the three-dimensional spatial information of the user's palm from the final depth image, and uses that spatial information to recognize the gesture the user makes toward the interactive system. The accuracy of gesture recognition is thereby improved.
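
The weighted-averaging step can be sketched as follows. This is a minimal illustration that assumes the depth images are already calibrated, registered to a common view, and captured simultaneously, as the paragraph requires; the per-camera weights are an assumption, since the patent does not say how weights are chosen (per-camera confidence is one plausible choice):

```python
import numpy as np

def fuse_depth_images(depth_images, weights=None):
    """Weighted average of registered, synchronized depth images.
    `weights` (one per camera) defaults to a plain mean; they are
    normalized so the fused depths stay in the original scale."""
    stack = np.stack(depth_images).astype(float)   # (n_cams, H, W)
    if weights is None:
        weights = np.ones(len(depth_images))
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    # Contract the camera axis: sum_i w_i * depth_i
    return np.tensordot(weights, stack, axes=1)

near = np.zeros((2, 2))           # one camera reads 0.0 m everywhere
far = np.full((2, 2), 4.0)        # another reads 4.0 m everywhere
fused = fuse_depth_images([near, far], weights=[1, 3])
assert np.allclose(fused, 3.0)    # 0.25*0.0 + 0.75*4.0
```

The fused image would then feed the palm-position computation of the earlier embodiments in place of a single camera's depth image.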

In one embodiment, when the interactive system is in the locked state, the display device 1102 is further configured to show prompt information corresponding to the predetermined unlock action.

In the present embodiment, the prompt information may be text, pictures, animation, video, and so on, shown on the display device 1102. Because the prompt corresponding to the predetermined unlock action is displayed, the user has a reference for making the matching gesture, which makes unlocking the interactive system more convenient.

In one embodiment, when the interactive system is in the locked state, the display device 1102 is further configured to show a video window and to display in it the video of the user making the gesture according to the prompt information.

In the present embodiment, when the interactive system is in the locked state, the prompt information and a video window can be shown on the display device 1102. The user makes the gesture toward the interactive system according to the prompt, the camera device captures video of that gesture, and the video is shown in the video window. Through the video window the user can check whether the gesture they are making matches the prompt, which makes unlocking more convenient.

In one embodiment, the data processing device 1104 is further configured to switch the interactive system to the locked state if a predetermined lock action is detected within a preset time, or if no hand is detected within the preset time.

In the present embodiment, the interactive system can also be switched back to the locked state. If a predetermined lock action is detected within the preset time, the interactive system is switched to the locked state. Predetermined lock actions include lowering the hand facing the camera device or moving it away. A predetermined lock action may be a continuous action completed with one hand or with both hands. The detected gesture is compared with the predetermined lock action; if they correspond, a lock instruction has been received and the interactive system is switched to the locked state. Further, "no hand detected" includes no person detected: if no person is detected within the preset time, the interactive system is switched to the locked state. The interactive system can thus be locked simply and conveniently.
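
The two re-locking rules above (a predetermined lock gesture, or no hand seen for the preset time) can be sketched together. The gesture names and the 5-second default are illustrative assumptions; timestamps are passed in explicitly to keep the sketch deterministic:

```python
class LockController:
    """Sketch of the re-locking rule: lock when a predetermined lock
    gesture is seen, or when no hand is detected for `timeout` seconds."""

    LOCK_ACTIONS = {"lower_hand", "remove_hand"}  # illustrative names

    def __init__(self, timeout=5.0):
        self.timeout = timeout          # the "preset time", assumed 5 s
        self.state = "unlocked"
        self.last_hand_seen = 0.0

    def update(self, now, gesture=None, hand_visible=True):
        """Process one detection frame at time `now` (seconds)."""
        if hand_visible:
            self.last_hand_seen = now
        if gesture in self.LOCK_ACTIONS:
            self.state = "locked"       # explicit lock instruction
        elif not hand_visible and now - self.last_hand_seen >= self.timeout:
            self.state = "locked"       # no hand (or person) for too long
        return self.state
```

Treating "no person detected" as a special case of "no hand detected", as the embodiment does, needs no extra code here: the detector simply reports `hand_visible=False` in either situation.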

As shown in Figure 13, in one embodiment, display equipment in an unlocking control system is provided. The display equipment comprises a display device 1302 and a control device 1304, wherein:

The display device 1302 is configured to show whether the interactive system is in the locked state or the unlocked state.

The control device 1304 is configured, when the interactive system is in the locked state, to respond to the gesture the user is detected to make toward the display equipment.

The display device 1302 is further configured to show that the interactive system is in the unlocked state if the detected gesture corresponds to a predetermined unlock action, and to show that the interactive system remains in the locked state if it does not.

In the present embodiment, the display equipment is connected to the camera device and the data processing device. The display device shows whether the interactive system is in the locked state or the unlocked state. While the interactive system is locked, the user can make a gesture toward the camera device. The camera device captures a depth image and/or color image of the user in the scene, from which the gesture is recognized. The control device in the display equipment responds to the gesture the user makes. If the recognized gesture corresponds to a predetermined unlock action, the display device 1302 shows the unlocked state; if it does not, the display device 1302 shows that the interactive system remains in the locked state.

In one embodiment, the predetermined unlock action includes raising a hand toward the camera device. In response to detecting the continuous action of raising a hand toward the camera device, the control device 1304 has the display device 1302 show the unlocked state. Misoperation can thus be effectively prevented.

By detecting the gesture the user makes and switching the interactive system to the unlocked state when the gesture corresponds to a predetermined unlock action, unlocking happens without delay and is convenient for the user, so the interactive system can be unlocked quickly and easily.

In one embodiment, when the interactive system is in the locked state, the display device 1302 is further configured to show prompt information corresponding to the predetermined unlock action.

In the present embodiment, the prompt information may be text, pictures, animation, video, and so on, shown on the display device. Because the prompt corresponding to the predetermined unlock action is displayed, the user has a reference for making the matching gesture, which makes unlocking the interactive system more convenient.

In one embodiment, when the interactive system is in the locked state, the display device 1302 is further configured to show a video window and to display in it the video of the user making the gesture according to the prompt information.

In the present embodiment, when the interactive system is in the locked state, the prompt information and a video window can be shown on the display device. The user makes the gesture toward the camera device according to the prompt, the camera device captures video of that gesture, and the video is shown in the video window. Through the video window the user can check whether the gesture they are making matches the prompt, which makes unlocking more convenient.

In one embodiment, the control device 1304 is further configured to respond to a predetermined lock action detected within a preset time, or to no hand being detected within the preset time, and the display device 1302 is further configured to show that the interactive system is in the locked state.

In the present embodiment, the display device can show that the interactive system is in the locked state. When the control device 1304 detects a predetermined lock action within the preset time, the display device 1302 shows the locked state; likewise when no hand is detected within the preset time. Further, "no hand detected" includes no person detected: when the control device 1304 detects no person within the preset time, the display device 1302 shows the locked state. Locking the interactive system is thus simple and convenient.

The technical features of the above embodiments may be combined in any way. For brevity of description, not every possible combination of those features has been described; however, as long as a combination of technical features contains no contradiction, it should be considered within the scope of this specification.

The above embodiments express only several implementations of the invention, and their description is relatively concrete and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that, for a person of ordinary skill in the art, several variations and improvements can be made without departing from the inventive concept, and these all fall within the protection scope of the invention. The protection scope of this patent is therefore defined by the appended claims.

Claims (23)

1. An unlocking processing method, the method comprising:
when an interactive system is in a locked state, detecting a gesture made by a user;
judging whether the detected gesture corresponds to a predetermined unlock action;
if so, switching the interactive system to an unlocked state;
otherwise, keeping the interactive system in the locked state.
2. The method according to claim 1, characterized in that the step of detecting the gesture made by the user comprises:
capturing a depth image and/or a color image with a camera device;
recognizing the gesture made by the user from the depth image and/or color image.
3. The method according to claim 1, characterized in that the predetermined unlock action comprises raising a hand toward a camera device.
4. The method according to claim 1, characterized in that the method further comprises:
when the interactive system is in the locked state, showing prompt information corresponding to the predetermined unlock action.
5. The method according to claim 4, characterized in that the method further comprises:
when the interactive system is in the locked state, showing a video window;
obtaining video of the user making the gesture according to the prompt information;
displaying the video in the video window.
6. The method according to claim 1, characterized in that the method further comprises:
if a predetermined lock action is detected within a preset time, switching the interactive system to the locked state; or if no hand is detected within the preset time, switching the interactive system to the locked state.
7. An unlocking processing system, characterized in that the system comprises:
a detection module for detecting, when an interactive system is in a locked state, a gesture made by a user;
a judgment module for judging whether the detected gesture corresponds to a predetermined unlock action;
an unlocking module for switching the interactive system to an unlocked state if the detected gesture corresponds to the predetermined unlock action;
a locking module for keeping the interactive system in the locked state if the detected gesture does not correspond to the predetermined unlock action.
8. The system according to claim 7, characterized in that the detection module comprises:
an acquisition module for capturing a depth image and/or a color image with a camera device;
a recognition module for recognizing the gesture made by the user from the depth image and/or color image.
9. The system according to claim 7, characterized in that the predetermined unlock action comprises raising a hand toward a camera device.
10. The system according to claim 7, characterized in that the system further comprises:
a display module for showing, when the interactive system is in the locked state, prompt information corresponding to the predetermined unlock action.
11. The system according to claim 10, characterized in that the display module is further configured to show a video window when the interactive system is in the locked state; the acquisition module is further configured to obtain video of the user making the gesture according to the prompt information; and the display module is further configured to display the video in the video window.
12. The system according to claim 7, characterized in that the locking module is further configured to switch the interactive system to the locked state if a predetermined lock action is detected within a preset time, or if no hand is detected within the preset time.
13. An unlocking control system, characterized in that the unlocking control system comprises:
a display device for showing whether an interactive system is in a locked state or an unlocked state;
a data processing device for detecting, when the interactive system is in the locked state, a gesture made by a user toward the interactive system; switching the interactive system to the unlocked state if the detected gesture corresponds to a predetermined unlock action; and keeping the interactive system in the locked state if the detected gesture does not correspond to the predetermined unlock action.
14. The unlocking control system according to claim 13, characterized in that the unlocking control system further comprises:
a camera device for capturing a depth image and/or a color image of the user;
the data processing device is further configured to recognize the gesture made by the user from the depth image and/or color image.
15. The unlocking control system according to claim 13, characterized in that the predetermined unlock action comprises raising a hand toward a camera device.
16. The unlocking control system according to claim 13, characterized in that, when the interactive system is in the locked state, the display device is further configured to show prompt information corresponding to the predetermined unlock action.
17. The unlocking control system according to claim 16, characterized in that, when the interactive system is in the locked state, the display device is further configured to show a video window and to display in it the video of the user making the gesture according to the prompt information.
18. The unlocking control system according to claim 13, characterized in that the data processing device is further configured to switch the interactive system to the locked state if a predetermined lock action is detected within a preset time, or if no hand is detected within the preset time.
19. Display equipment in an unlocking control system, characterized in that the display equipment comprises:
a display device for showing whether an interactive system is in a locked state or an unlocked state;
a control device for responding, when the interactive system is in the locked state, to a gesture the user is detected to make toward the display equipment;
the display device is further configured to show that the interactive system is in the unlocked state if the detected gesture corresponds to a predetermined unlock action, and to show that the interactive system remains in the locked state if the detected gesture does not correspond to the predetermined unlock action.
20. The display equipment according to claim 19, characterized in that the predetermined unlock action comprises raising a hand toward a camera device.
21. The display equipment according to claim 19, characterized in that, when the interactive system is in the locked state, the display device is further configured to show prompt information corresponding to the predetermined unlock action.
22. The display equipment according to claim 20, characterized in that, when the interactive system is in the locked state, the display device is further configured to show a video window and to display in it the video of the user making the gesture according to the prompt information.
23. The display equipment according to claim 19, characterized in that the control device is further configured to respond to a predetermined lock action detected within a preset time, or to no hand being detected within the preset time, and the display device is further configured to show that the interactive system is in the locked state.
CN201510733867.2A 2015-11-02 2015-11-02 Unlocking processing method and system, unlocking control system and display equipment CN105425945A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510733867.2A CN105425945A (en) 2015-11-02 2015-11-02 Unlocking processing method and system, unlocking control system and display equipment

Publications (1)

Publication Number Publication Date
CN105425945A true CN105425945A (en) 2016-03-23

Family

ID=55504202

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510733867.2A CN105425945A (en) 2015-11-02 2015-11-02 Unlocking processing method and system, unlocking control system and display equipment

Country Status (1)

Country Link
CN (1) CN105425945A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650392A (en) * 2016-11-11 2017-05-10 捷开通讯(深圳)有限公司 VR headset device and unlock method
CN108668021A (en) * 2018-04-25 2018-10-16 维沃移动通信有限公司 A kind of unlocking method and mobile terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103336658A (en) * 2012-05-31 2013-10-02 腾讯科技(深圳)有限公司 Unlocking method and unlocking device for touch screen of terminal equipment
CN103365575A (en) * 2012-03-27 2013-10-23 百度在线网络技术(北京)有限公司 Mobile terminal and unlocking method thereof
CN104932697A (en) * 2015-06-30 2015-09-23 努比亚技术有限公司 Gesture unlocking method and device

Similar Documents

Publication Publication Date Title
JP6079832B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
EP2710554B1 (en) Head pose estimation using rgbd camera
US9489574B2 (en) Apparatus and method for enhancing user recognition
KR101724658B1 (en) Human detecting apparatus and method
US8467596B2 (en) Method and apparatus for object pose estimation
JP6013241B2 (en) Person recognition apparatus and method
JP5445460B2 (en) Impersonation detection system, impersonation detection method, and impersonation detection program
CN106022209B (en) A kind of method and device of range estimation and processing based on Face datection
JP4198602B2 (en) Operating method and system using video
CN103927016B (en) Real-time three-dimensional double-hand gesture recognition method and system based on binocular vision
KR101355974B1 (en) Method and devices for tracking multiple object
CN101793562B (en) Face detection and tracking algorithm of infrared thermal image sequence
KR101581954B1 (en) Apparatus and method for a real-time extraction of target's multiple hands information
US20130251215A1 (en) Electronic device configured to apply facial recognition based upon reflected infrared illumination and related methods
Liu et al. Hand gesture recognition using depth data
US7680295B2 (en) Hand-gesture based interface apparatus
US8564667B2 (en) Surveillance system
Doliotis et al. Hand shape and 3D pose estimation using depth data from a single cluttered frame
US7460705B2 (en) Head-top detecting method, head-top detecting system and a head-top detecting program for a human face
KR101877570B1 (en) Apparatus for setting parking position based on around view image and method thereof
TW505892B (en) System and method for promptly tracking multiple faces
CN106991377A (en) With reference to the face identification method, face identification device and electronic installation of depth information
DE112017000231T5 (en) Detection of liveliness for facial recognition with deception prevention
US20140369567A1 (en) Authorized Access Using Image Capture and Recognition System
CN103870802A (en) System and method for manipulating user interface in vehicle using finger valleys

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160323
