CN104571513A - Method and system for simulating touch instructions by shielding camera shooting area - Google Patents
- Publication number
- CN104571513A CN104571513A CN201410845233.1A CN201410845233A CN104571513A CN 104571513 A CN104571513 A CN 104571513A CN 201410845233 A CN201410845233 A CN 201410845233A CN 104571513 A CN104571513 A CN 104571513A
- Authority
- CN
- China
- Prior art keywords
- event
- simulated touch
- shooting area
- shield
- region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Studio Devices (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a method and a system for simulating touch instructions by shielding a camera shooting area. The method comprises: acquiring the image sequence of a shielding operation captured by a camera; recognizing the acquired image sequence so as to separate the shielded region from the unshielded region in every image, and converting every separated image into a binary image; analyzing how the shield changes across the images of the sequence to identify the operation events occurring in the image sequence; judging whether the identified operation events conform to preset events in a preset event sequence; if so, generating the corresponding simulated touch instructions; if not, generating no simulated touch instruction; and performing the corresponding simulated touch operations according to the generated simulated touch instructions. The invention also provides the system corresponding to the method. Compared with existing hand-touch recognition, the method and system greatly reduce the requirement on the computing capability of the system; compared with existing touch input methods, they save buttons or touch input elements.
Description
Technical field
The present invention relates to the field of user-instruction input and device control for Internet-of-Things terminals, and in particular to a method and system for simulating touch instructions by shielding a camera shooting area.
Background art
Camera systems are now used in a wide range of application scenarios, and a variety of Internet-of-Things terminal products with camera functions have been derived from them, such as network surveillance cameras for security, baby monitors, driving recorders and video doorbells. Besides being used remotely over the Internet by remote users, these IoT devices can also be operated directly (face to face) by a near-end user. For example, in the intercom function of a network surveillance camera, a near-end user can control the intercom volume; some baby monitors with a night-light function allow a near-end user to directly adjust the brightness and color of the light; driving recorders and video doorbells likewise mostly provide functions for directly controlling light or sound. It can be seen that near-end users need to operate some functional subsystems of such devices locally.
Existing camera systems generally possess a certain video-recognition computing capability, so the camera itself can serve as an input device. The existing approach is to recognize the user's instruction-input subject (a touching hand) in order to realize control, but this approach has the following defects: vision-based hand recognition involves recognition computations for skin color and hand shape, whose computational load is large, so the requirement on the computing capability of an embedded camera system is high; moreover, near-end input elements such as buttons or a touch screen must be embedded, which increases the complexity of the system.
Summary of the invention
The first technical problem to be solved by the present invention is to provide a method for simulating touch instructions by shielding a camera shooting area: different shielding operations on the shooting area are mapped to different simulated touch instructions, and the generated simulated touch instructions are used to control an electronic device, thereby realizing non-tactile operation and reducing the requirement on the computing capability of the system.
The first technical problem is solved as follows. A method for simulating touch instructions by shielding a camera shooting area comprises the following steps:
Step 10: acquiring the image sequence of one shielding operation captured by the camera;
Step 20: recognizing the acquired image sequence, separating the shielded region from the unshielded region in each image, and converting each separated image into a binary image;
Step 30: analyzing how the shield changes across the images of the sequence, and identifying the operation events occurring in the image sequence;
Step 40: judging whether the identified operation events conform to preset events in a preset event sequence; if so, generating the corresponding simulated touch instruction and proceeding to step 50; if not, generating no simulated touch instruction and returning to step 10 to wait for the next shielding operation;
Step 50: executing the corresponding simulated touch operation according to the generated simulated touch instruction.
Further, step 30 is specifically: if the shielded region in the image sequence changes from small to large, and the proportion of the whole shooting area finally occupied by the shielded region reaches a preset shielded-ratio upper limit, a shielding event is identified and step 40 follows; if the unshielded region in the image sequence changes from small to large, and the proportion of the whole shooting area finally occupied by the unshielded region reaches a preset unshielded-ratio upper limit, an unshielding event is identified and step 40 follows; if the proportion finally occupied by the shielded region does not reach the preset shielded-ratio upper limit, or the proportion finally occupied by the unshielded region does not reach the preset unshielded-ratio upper limit, no event is identified, and the method returns to step 10 to wait for the next shielding operation.
Further, each shielding event and each unshielding event has an event direction and an event speed. The event direction includes a linear movement direction parallel to the shooting area with an entry angle β, where 0° ≤ β ≤ 360°, a diverging direction perpendicular to the shooting area, or a shrinking direction perpendicular to the shooting area. The event speed is the set of stage-by-stage speeds during the event, or the average speed at which the event occurs.
Further, each preset event comprises an event type, an event direction and a preset speed, and each preset event is mapped to one simulated touch instruction; the preset speed is a value range with upper and lower bounds; the simulated touch instructions include directional slide instructions and click instructions.
Further, step 40 is specifically: judging whether the identified operation event conforms to a preset event in the preset event sequence; if the event type, event direction and speed of the identified operation event conform to those of a preset event, the simulated touch instruction corresponding to that operation event is generated and step 50 follows; if they do not conform, no simulated touch instruction is generated, processing of this shielding operation is terminated, and the method returns to step 10 to wait for the next shielding operation.
The second technical problem to be solved by the present invention is to provide a system for simulating touch instructions by shielding a camera shooting area: different shielding operations on the shooting area are mapped to different simulated touch instructions, and the generated simulated touch instructions are used to control an electronic device, thereby realizing non-tactile operation and reducing the requirement on the computing capability of the system.
The second technical problem is solved as follows. A system for simulating touch instructions by shielding a camera shooting area comprises a camera unit, an image preprocessing unit, a shielding recognition unit, a simulated-touch-instruction generation unit and a simulated-touch-instruction execution unit;
the camera unit is used for acquiring the image sequence of one shielding operation captured by the camera;
the image preprocessing unit is used for recognizing the acquired image sequence, separating the shielded region from the unshielded region in each image, and converting each separated image into a binary image;
the shielding recognition unit is used for analyzing how the shield changes across the images of the sequence and identifying the operation events occurring in the image sequence;
the simulated-touch-instruction generation unit is used for judging whether the identified operation events conform to preset events in a preset event sequence; if so, it generates the corresponding simulated touch instruction and passes it to the simulated-touch-instruction execution unit; if not, it generates no simulated touch instruction and returns control to the camera unit to wait for the next shielding operation;
the simulated-touch-instruction execution unit is used for executing the corresponding simulated touch operation according to the generated simulated touch instruction.
Further, the shielding recognition unit specifically operates as follows: if the shielded region in the image sequence changes from small to large, and the proportion of the whole shooting area finally occupied by the shielded region reaches a preset shielded-ratio upper limit, a shielding event is identified and passed to the simulated-touch-instruction generation unit; if the unshielded region in the image sequence changes from small to large, and the proportion of the whole shooting area finally occupied by the unshielded region reaches a preset unshielded-ratio upper limit, an unshielding event is identified and passed to the simulated-touch-instruction generation unit; if the proportion finally occupied by the shielded region does not reach the preset shielded-ratio upper limit, or the proportion finally occupied by the unshielded region does not reach the preset unshielded-ratio upper limit, no event is identified, and control returns to the camera unit to wait for the next shielding operation.
Further, each shielding event and each unshielding event has an event direction and an event speed. The event direction includes a linear movement direction parallel to the shooting area with an entry angle β, where 0° ≤ β ≤ 360°, a diverging direction perpendicular to the shooting area, or a shrinking direction perpendicular to the shooting area. The event speed is the set of stage-by-stage speeds during the event, or the average speed at which the event occurs.
Further, each preset event comprises an event type, an event direction and a preset speed, and each preset event is mapped to one simulated touch instruction; the preset speed is a value range with upper and lower bounds; the simulated touch instructions include directional slide instructions and click instructions.
Further, the simulated-touch-instruction generation unit specifically operates as follows: it judges whether the identified operation event conforms to a preset event in the preset event sequence; if the event type, event direction and speed of the identified operation event conform to those of a preset event, the simulated touch instruction corresponding to that operation event is generated and passed to the simulated-touch-instruction execution unit; if they do not conform, no simulated touch instruction is generated, processing of this shielding operation is terminated, and control returns to the camera unit to wait for the next shielding operation.
The present invention has the following advantages: different shielding operations on the shooting area are mapped to different simulated touch instructions, and the generated simulated touch instructions are used to control an electronic device. Compared with existing hand-touch recognition, the requirement on the computing capability of the system is greatly reduced; compared with existing touch input methods, buttons or touch input elements are saved.
Brief description of the drawings
The present invention is further described below with reference to the drawings and the embodiments.
Fig. 1 is a flowchart of the method of the present invention.
Fig. 2 is a schematic diagram of the shielding of the shooting area in the present invention.
Fig. 3 is a schematic diagram of a shielding event in a linear movement direction (entering at angle a) in the present invention.
Fig. 4 is a schematic diagram of an unshielding event in a linear movement direction (exiting at angle b) in the present invention.
Fig. 5 is a schematic diagram of a shielding event in the diverging direction in the present invention.
Fig. 6 is a schematic diagram of an unshielding event in the shrinking direction in the present invention.
Fig. 7 is a schematic diagram of the structure of the system of the present invention.
Embodiment
Referring to Fig. 1 to Fig. 6, a preferred embodiment of the method for simulating touch instructions by shielding a camera shooting area comprises:
Step 10: acquiring the image sequence of one shielding operation captured by the camera;
When a user wishes to operate an electronic device locally, the user holds a shield close to the camera lens and performs a shielding operation on the lens. The camera then captures the image sequence of this shielding operation, and the image sequence completely records the whole process of the operation. The shield covering the shooting area may be a finger or any opaque plate; as shown in Fig. 2, it suffices that the area the shield can cover is larger than the shooting area of the lens, whose diameter is D.
The shielding operation comprises:
I. the process in which the shield moves from not covering the shooting area to covering it;
II. the process in which the shield moves from covering the shooting area to being removed from it;
and the two processes I and II may occur in any direction.
Here, not covering the shooting area means that no shield appears in the image of the shooting area, and covering the shooting area means that the shield appears in the image of the shooting area.
Step 20: recognizing the acquired image sequence, separating the shielded region from the unshielded region in each image, and converting each separated image into a binary image;
For example, when recognizing the acquired image sequence, the shielded region may be recognized as the foreground region and the unshielded region as the background region, so that each recognized image is converted into a binary image comprising a foreground region and a background region.
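The foreground/background separation of step 20 can be sketched as follows. This is a minimal illustration and not the patent's implementation: it assumes grayscale frames given as 2-D lists of 0-255 intensities and a fixed darkness threshold, where a deployed system would more likely use adaptive thresholding or background subtraction.

```python
def binarize_frame(gray, threshold=60):
    """Convert a grayscale frame into a binary image: 1 marks the
    shielded (foreground) region, 0 the unshielded (background) region.
    The fixed threshold value is an assumption for illustration only."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]

def shield_ratio(binary):
    """Proportion of the whole shooting area occupied by the shield."""
    total = sum(len(row) for row in binary)
    covered = sum(sum(row) for row in binary)
    return covered / total
```

With a frame whose left half is dark, `binarize_frame` yields a half-foreground binary image and `shield_ratio` returns 0.5; tracking this ratio frame by frame is what the following steps build on.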
Step 30: analyzing how the shield changes across the images of the sequence, and identifying the operation events occurring in the image sequence. Step 30 is specifically:
If the shielded region (i.e. the foreground region) in the image sequence changes from small to large, and the proportion of the whole shooting area finally occupied by the shielded region reaches the preset shielded-ratio upper limit, a shielding event is identified and step 40 follows; if the unshielded region (i.e. the background region) changes from small to large, and the proportion of the whole shooting area finally occupied by the unshielded region reaches the preset unshielded-ratio upper limit, an unshielding event is identified and step 40 follows; if the proportion finally occupied by the shielded region does not reach the preset shielded-ratio upper limit, or the proportion finally occupied by the unshielded region does not reach the preset unshielded-ratio upper limit, no event is identified, and the method returns to step 10 to wait for the next shielding operation.
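The ratio test of step 30 can be sketched as below, operating on the per-frame shield ratios of one image sequence. The two limit values are assumptions for illustration; the patent only requires preset upper-limit ratios.

```python
COVER_LIMIT = 0.9     # preset shielded-ratio upper limit (assumed value)
UNCOVER_LIMIT = 0.9   # preset unshielded-ratio upper limit (assumed value)

def classify_event(ratios):
    """Classify one image sequence, given the shield ratio of each frame
    in capture order. Returns "cover", "uncover", or None when neither
    preset limit is finally reached (the method then waits for the next
    shielding operation)."""
    if len(ratios) < 2:
        return None
    start, end = ratios[0], ratios[-1]
    if end > start and end >= COVER_LIMIT:
        return "cover"        # shielded region grew from small to large
    if end < start and (1.0 - end) >= UNCOVER_LIMIT:
        return "uncover"      # unshielded region grew from small to large
    return None
```

For instance, a ratio sequence rising from 0.1 to 0.95 classifies as a shielding event, one falling from 0.9 to 0.05 as an unshielding event, and a sequence that never reaches either limit yields no event.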
Each shielding event and each unshielding event has an event direction, which reflects the shielding gesture the user performs from one side of the camera toward another. The event direction includes a linear movement direction parallel to the shooting area with an entry angle β, where 0° ≤ β ≤ 360°, a diverging direction perpendicular to the shooting area, or a shrinking direction perpendicular to the shooting area. As shown in Fig. 3 to Fig. 6: Fig. 3 shows the shielding process of a shielding event in a linear movement direction parallel to the shooting area (entering at angle a); Fig. 4 shows the unshielding process of an unshielding event in a linear movement direction parallel to the shooting area (exiting at angle b); Fig. 5 shows the shielding process of a shielding event in the diverging direction perpendicular to the shooting area, in which the shield moves perpendicular to the shooting area and toward it; Fig. 6 shows the unshielding process of an unshielding event in the shrinking direction perpendicular to the shooting area, in which the shield moves perpendicular to the shooting area and away from it.
Each shielding event and each unshielding event also has an event speed. The event speed is the set of stage-by-stage speeds during the event, or the average speed at which the event occurs, and it mainly describes the temporal characteristics of the event. In this embodiment a simple average speed is used to describe how fast a shielding event or an unshielding event occurs.
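One way to recover the event direction and the simple average speed used in this embodiment is from the foreground centroid trajectory and the ratio change over time. This is an illustrative sketch, not the patent's method: the angle convention (x to the right, y upward, degrees in [0, 360)) and the frame-rate parameter are assumptions. A diverging or shrinking (perpendicular) event would show a nearly stationary centroid while the foreground area grows or shrinks, which distinguishes it from the linear case.

```python
import math

def centroid(binary):
    """Centroid (x, y) of the shielded pixels of one binary frame,
    or None when the frame contains no shield."""
    pts = [(x, y) for y, row in enumerate(binary)
                  for x, v in enumerate(row) if v]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def entry_angle(c_start, c_end):
    """Entry angle of a linear movement parallel to the shooting area,
    in degrees within [0, 360)."""
    dx, dy = c_end[0] - c_start[0], c_end[1] - c_start[1]
    return math.degrees(math.atan2(dy, dx)) % 360

def average_speed(ratios, fps):
    """Simple average event speed: change of the shield ratio per
    second over the whole event."""
    duration = (len(ratios) - 1) / fps
    return abs(ratios[-1] - ratios[0]) / duration
```

A centroid moving purely rightward gives an entry angle of 0 degrees, purely upward gives 90 degrees, matching the 0/90/180/270-degree preset directions listed below.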
Step 40: judging whether the identified operation event conforms to a preset event in the preset event sequence; if the event type, event direction and speed of the identified operation event conform to those of a preset event, the simulated touch instruction corresponding to that operation event is generated and step 50 follows; if they do not conform, no simulated touch instruction is generated, processing of this shielding operation is terminated, and the method returns to step 10 to wait for the next shielding operation. The simulated touch instructions include directional slide instructions and click instructions.
The preset events of this embodiment comprise:
a shielding event in the 0-degree direction (the shield covers the shooting area moving horizontally to the right);
an unshielding event in the 0-degree direction (the shield uncovers the shooting area moving horizontally to the right);
a shielding event in the 90-degree direction (the shield covers the shooting area moving straight up);
an unshielding event in the 90-degree direction (the shield uncovers the shooting area moving straight up);
a shielding event in the 180-degree direction (the shield covers the shooting area moving horizontally to the left);
an unshielding event in the 180-degree direction (the shield uncovers the shooting area moving horizontally to the left);
a shielding event in the 270-degree direction (the shield covers the shooting area moving straight down);
an unshielding event in the 270-degree direction (the shield uncovers the shooting area moving straight down);
a shielding event in the diverging direction (the shield approaches the camera);
an unshielding event in the shrinking direction (the shield moves away from the camera).
The mapping relations between the preset events and the simulated touch instructions in this embodiment are:
a shielding event in the 0-degree direction followed by an unshielding event in the 0-degree direction, within the preset speed range: slide-right instruction;
a shielding event in the 90-degree direction followed by an unshielding event in the 90-degree direction, within the preset speed range: slide-up instruction;
a shielding event in the 180-degree direction followed by an unshielding event in the 180-degree direction, within the preset speed range: slide-left instruction;
a shielding event in the 270-degree direction followed by an unshielding event in the 270-degree direction, within the preset speed range: slide-down instruction;
a shielding event in the diverging direction, within the preset speed range: click instruction;
an unshielding event in the shrinking direction, within the preset speed range: release-click instruction.
The preset speed is a value range with upper and lower bounds, and a shielding or unshielding event "within the preset speed range" means that the speed of the shielding or unshielding event is higher than the lower bound of the preset speed and lower than its upper bound.
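The mapping table above, together with the preset speed range, can be sketched as a lookup. The instruction names and the speed bounds are illustrative assumptions, not values from the patent:

```python
SPEED_MIN, SPEED_MAX = 0.2, 3.0   # preset speed bounds (assumed values)

# Directional slide instructions keyed by the shared entry angle of the
# shielding/unshielding event pair.
LINEAR_MAP = {0: "slide_right", 90: "slide_up",
              180: "slide_left", 270: "slide_down"}

def generate_instruction(events, speed):
    """Map the events recognized from one shielding operation to a
    simulated touch instruction; return None when no preset matches.
    `events` is a list of (event type, event direction) pairs."""
    if not (SPEED_MIN < speed < SPEED_MAX):
        return None                          # outside the preset speed range
    if (len(events) == 2 and events[0][0] == "cover"
            and events[1][0] == "uncover"
            and events[0][1] == events[1][1]):
        return LINEAR_MAP.get(events[0][1])  # directional slide
    if events == [("cover", "diverge")]:
        return "click"
    if events == [("uncover", "shrink")]:
        return "release_click"
    return None
```

A cover/uncover pair at 0 degrees within the speed range maps to a slide-right instruction; the same pair performed too fast or too slow maps to nothing, and the system waits for the next operation.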
Step 50: executing the corresponding simulated touch operation according to the generated simulated touch instruction.
For example, in a camera system with a light function, the generated slide-up instruction may be used to increase the lamp brightness and the generated slide-down instruction to decrease it. As another example, in a camera system with a music playback function, the generated slide-up instruction may increase the volume, the slide-down instruction decrease the volume, the slide-right instruction switch playback to the previous song, the slide-left instruction switch playback to the next song, and the click and release-click instructions toggle between the playing and paused states.
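The lamp example above can be sketched as a small executor for step 50. The class name, step size and brightness range are assumptions for illustration:

```python
class LampExecutor:
    """Executes simulated touch instructions on a lamp: slide-up
    increases brightness, slide-down decreases it; instructions this
    device does not support are ignored."""
    def __init__(self, brightness=50, step=10):
        self.brightness = brightness
        self.step = step

    def execute(self, instruction):
        if instruction == "slide_up":
            self.brightness = min(100, self.brightness + self.step)
        elif instruction == "slide_down":
            self.brightness = max(0, self.brightness - self.step)
        return self.brightness
```

Each recognized slide gesture thus nudges the brightness by one step, clamped to the 0-100 range.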
Referring to Fig. 2 to Fig. 7, a preferred embodiment of the system for simulating touch instructions by shielding a camera shooting area comprises a camera unit, an image preprocessing unit, a shielding recognition unit, a simulated-touch-instruction generation unit and a simulated-touch-instruction execution unit;
the camera unit is used for acquiring the image sequence of one shielding operation captured by the camera.
When a user wishes to operate an electronic device locally, the user holds a shield close to the camera lens and performs a shielding operation on the lens. The camera then captures the image sequence of this shielding operation, and the image sequence completely records the whole process of the operation. The shield covering the shooting area may be a finger or any opaque plate; as shown in Fig. 2, it suffices that the area the shield can cover is larger than the shooting area of the lens, whose diameter is D.
The shielding operation comprises:
I. the process in which the shield moves from not covering the shooting area to covering it;
II. the process in which the shield moves from covering the shooting area to being removed from it;
and the two processes I and II may occur in any direction.
Here, not covering the shooting area means that no shield appears in the image of the shooting area, and covering the shooting area means that the shield appears in the image of the shooting area.
The image preprocessing unit is used for recognizing the acquired image sequence, separating the shielded region from the unshielded region in each image, and converting each separated image into a binary image.
For example, when recognizing the acquired image sequence, the shielded region may be recognized as the foreground region and the unshielded region as the background region, so that each recognized image is converted into a binary image comprising a foreground region and a background region.
The shielding recognition unit is used for analyzing how the shield changes across the images of the sequence and identifying the operation events occurring in the image sequence. The shielding recognition unit specifically operates as follows:
if the shielded region (i.e. the foreground region) in the image sequence changes from small to large, and the proportion of the whole shooting area finally occupied by the shielded region reaches the preset shielded-ratio upper limit, a shielding event is identified and passed to the simulated-touch-instruction generation unit; if the unshielded region (i.e. the background region) changes from small to large, and the proportion of the whole shooting area finally occupied by the unshielded region reaches the preset unshielded-ratio upper limit, an unshielding event is identified and passed to the simulated-touch-instruction generation unit; if the proportion finally occupied by the shielded region does not reach the preset shielded-ratio upper limit, or the proportion finally occupied by the unshielded region does not reach the preset unshielded-ratio upper limit, no event is identified, and control returns to the camera unit to wait for the next shielding operation.
Each shielding event and each unshielding event has an event direction, which reflects the shielding gesture the user performs from one side of the camera toward another. The event direction includes a linear movement direction parallel to the shooting area with an entry angle β, where 0° ≤ β ≤ 360°, a diverging direction perpendicular to the shooting area, or a shrinking direction perpendicular to the shooting area. As shown in Fig. 3 to Fig. 6: Fig. 3 shows the shielding process of a shielding event in a linear movement direction parallel to the shooting area (entering at angle a); Fig. 4 shows the unshielding process of an unshielding event in a linear movement direction parallel to the shooting area (exiting at angle b); Fig. 5 shows the shielding process of a shielding event in the diverging direction perpendicular to the shooting area, in which the shield moves perpendicular to the shooting area and toward it; Fig. 6 shows the unshielding process of an unshielding event in the shrinking direction perpendicular to the shooting area, in which the shield moves perpendicular to the shooting area and away from it.
Each shielding event and each unshielding event also has an event speed. The event speed is the set of stage-by-stage speeds during the event, or the average speed at which the event occurs, and it mainly describes the temporal characteristics of the event. In this embodiment a simple average speed is used to describe how fast a shielding event or an unshielding event occurs.
The simulated-touch-instruction generation unit is used for judging whether the identified operation event conforms to a preset event in the preset event sequence; if the event type, event direction and speed of the identified operation event conform to those of a preset event, the simulated touch instruction corresponding to that operation event is generated and passed to the simulated-touch-instruction execution unit; if they do not conform, no simulated touch instruction is generated, processing of this shielding operation is terminated, and control returns to the camera unit to wait for the next shielding operation. The simulated touch instructions include directional slide instructions and click instructions.
The preset events of this embodiment comprise:
a shielding event in the 0-degree direction (the shield covers the shooting area moving horizontally to the right);
an unshielding event in the 0-degree direction (the shield uncovers the shooting area moving horizontally to the right);
a shielding event in the 90-degree direction (the shield covers the shooting area moving straight up);
an unshielding event in the 90-degree direction (the shield uncovers the shooting area moving straight up);
a shielding event in the 180-degree direction (the shield covers the shooting area moving horizontally to the left);
an unshielding event in the 180-degree direction (the shield uncovers the shooting area moving horizontally to the left);
a shielding event in the 270-degree direction (the shield covers the shooting area moving straight down);
an unshielding event in the 270-degree direction (the shield uncovers the shooting area moving straight down);
a shielding event in the diverging direction (the shield approaches the camera);
an unshielding event in the shrinking direction (the shield moves away from the camera).
Mapping relations in the present embodiment between predeterminable event and simulated touch instruction:
A 0°-direction shielding event followed by a 0°-direction unshielding event, within the preset speed range: slide-right instruction;
A 90°-direction shielding event followed by a 90°-direction unshielding event, within the preset speed range: slide-up instruction;
A 180°-direction shielding event followed by a 180°-direction unshielding event, within the preset speed range: slide-left instruction;
A 270°-direction shielding event followed by a 270°-direction unshielding event, within the preset speed range: slide-down instruction;
A diverging-direction shielding event within the preset speed range: click instruction;
A contracting-direction unshielding event within the preset speed range: release-click instruction.
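The mapping above pairs a shielding event with the matching unshielding event in the same direction to form one slide gesture, while click and release-click come from single events. A sketch of that lookup, with assumed event tuples and command names (paraphrased, not quoted from the patent):

```python
# Direction of the paired shield/unshield gesture -> slide command.
# All names are illustrative assumptions.
SLIDE_BY_DIRECTION = {
    "0": "slide_right",
    "90": "slide_up",
    "180": "slide_left",
    "270": "slide_down",
}

def map_gesture(events):
    """Map a short sequence of (kind, direction) tuples to a command,
    or return None when the sequence matches no preset mapping."""
    if len(events) == 2:
        (k1, d1), (k2, d2) = events
        # a slide is a shielding event then an unshielding event,
        # both in the same direction
        if k1 == "shield" and k2 == "unshield" and d1 == d2:
            return SLIDE_BY_DIRECTION.get(d1)
    if events == [("shield", "diverge")]:
        return "click"
    if events == [("unshield", "contract")]:
        return "release_click"
    return None
```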
The preset speed is a value range with upper and lower bounds. A shielding or unshielding event "within the preset speed range" is one whose speed is higher than the lower bound and lower than the upper bound of that range.
The simulated touch instruction execution unit performs the corresponding simulated touch operation according to the generated simulated touch instruction.
For example, in a camera system with a lighting function, a generated slide-up instruction can increase the lamp brightness and a generated slide-down instruction can decrease it. As another example, in a camera system with a music playback function, a slide-up instruction can raise the volume and a slide-down instruction lower it, a slide-right instruction can switch to the previous song and a slide-left instruction to the next song, and the click and release-click instructions can toggle between the playing and paused states.
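The music-playback example can be made concrete with a small dispatch table. The `Player` class and its fields are hypothetical, introduced only to show how generated commands might drive a device:

```python
# Hypothetical music-player bindings for the generated commands.
class Player:
    def __init__(self):
        self.volume = 5
        self.track = 0
        self.playing = False

    def handle(self, command):
        if command == "slide_up":
            self.volume += 1           # raise volume
        elif command == "slide_down":
            self.volume -= 1           # lower volume
        elif command == "slide_right":
            self.track -= 1            # previous song
        elif command == "slide_left":
            self.track += 1            # next song
        elif command == "click":
            # in this sketch the click toggles play/pause;
            # release_click is accepted but carries no extra action
            self.playing = not self.playing
```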
In summary, the present invention maps different shielding operations over the camera shooting area to different simulated touch instructions, and uses the generated instructions to control electronic equipment. Compared with existing hand-gesture touch recognition, it greatly reduces the demands on system computing power; compared with existing touch input, it saves buttons or touch input elements.
Although specific embodiments of the present invention are described above, those skilled in the art should understand that the described embodiments are illustrative rather than limiting of the scope of the present invention; equivalent modifications and variations made by those of ordinary skill in the art in accordance with the spirit of the present invention shall fall within the scope protected by the claims of the present invention.
Claims (10)
1. A method for simulating touch instructions by shielding a camera shooting area, characterized by comprising the following steps:
Step 10: acquiring the image sequence of one shielding operation captured by the camera;
Step 20: processing the acquired image sequence by separating, in each image, the region covered by the shield from the region not covered, and converting each separated image into a binary image;
Step 30: analyzing the change of the shielded region across the images of the sequence and recognizing the operation event that the sequence represents;
Step 40: judging whether the recognized operation event matches a preset event in the preset event sequence; if so, generating the corresponding simulated touch instruction and proceeding to step 50; if not, generating no simulated touch instruction and returning to step 10 to wait for the next shielding operation;
Step 50: performing the corresponding simulated touch operation according to the generated simulated touch instruction.
2. The method for simulating touch instructions by shielding a camera shooting area according to claim 1, characterized in that step 30 specifically comprises: if the shielded region in the image sequence grows from small to large and the ratio of the final shielded region to the whole shooting area reaches a preset shielding-ratio upper limit, recognizing a shielding event and proceeding to step 40; if the unshielded region grows from small to large and the ratio of the final unshielded region to the whole shooting area reaches a preset unshielding-ratio upper limit, recognizing an unshielding event and proceeding to step 40; if neither the final shielded region reaches the preset shielding-ratio upper limit nor the final unshielded region reaches the preset unshielding-ratio upper limit, recognizing no event and returning to step 10 to wait for the next shielding operation.
3. The method for simulating touch instructions by shielding a camera shooting area according to claim 2, characterized in that each shielding event and each unshielding event has an event direction and an event speed; the event direction is either a linear movement direction parallel to the shooting area at an angle β, where 0° ≤ β ≤ 360°, a diverging direction perpendicular to the shooting area, or a contracting direction perpendicular to the shooting area; the event speed is the set of stage-by-stage speeds during the event or the average speed of the event.
4. The method for simulating touch instructions by shielding a camera shooting area according to claim 3, characterized in that each preset event comprises an event type, an event direction and a preset speed, and each preset event is mapped to one simulated touch instruction; the preset speed is a value range with upper and lower bounds; the simulated touch instructions comprise directional slide instructions and click instructions.
5. The method for simulating touch instructions by shielding a camera shooting area according to claim 4, characterized in that step 40 specifically comprises: judging whether the recognized operation event matches a preset event in the preset event sequence; if the event type and event direction of the recognized operation event match those of a preset event and the event speed falls within the preset speed range, generating the simulated touch instruction corresponding to the operation event and proceeding to step 50; if they do not match, generating no simulated touch instruction, aborting processing of the current shielding operation, and returning to step 10 to wait for the next shielding operation.
6. A system for simulating touch instructions by shielding a camera shooting area, characterized by comprising an imaging unit, an image pre-processing unit, a shielding recognition unit, a simulated touch instruction generation unit and a simulated touch instruction execution unit, wherein:
the imaging unit is configured to acquire the image sequence of one shielding operation captured by the camera;
the image pre-processing unit is configured to process the acquired image sequence by separating, in each image, the region covered by the shield from the region not covered, and converting each separated image into a binary image;
the shielding recognition unit is configured to analyze the change of the shielded region across the images of the sequence and recognize the operation event that the sequence represents;
the simulated touch instruction generation unit is configured to judge whether the recognized operation event matches a preset event in the preset event sequence; if so, to generate the corresponding simulated touch instruction and pass it to the simulated touch instruction execution unit; if not, to generate no simulated touch instruction and return to the imaging unit to wait for the next shielding operation;
the simulated touch instruction execution unit is configured to perform the corresponding simulated touch operation according to the generated simulated touch instruction.
7. The system for simulating touch instructions by shielding a camera shooting area according to claim 6, characterized in that the shielding recognition unit operates as follows: if the shielded region in the image sequence grows from small to large and the ratio of the final shielded region to the whole shooting area reaches a preset shielding-ratio upper limit, a shielding event is recognized and control passes to the simulated touch instruction generation unit; if the unshielded region grows from small to large and the ratio of the final unshielded region to the whole shooting area reaches a preset unshielding-ratio upper limit, an unshielding event is recognized and control passes to the simulated touch instruction generation unit; if neither ratio reaches its preset upper limit, no event is recognized and control returns to the imaging unit to wait for the next shielding operation.
8. The system for simulating touch instructions by shielding a camera shooting area according to claim 7, characterized in that each shielding event and each unshielding event has an event direction and an event speed; the event direction is either a linear movement direction parallel to the shooting area at an angle β, where 0° ≤ β ≤ 360°, a diverging direction perpendicular to the shooting area, or a contracting direction perpendicular to the shooting area; the event speed is the set of stage-by-stage speeds during the event or the average speed of the event.
9. The system for simulating touch instructions by shielding a camera shooting area according to claim 8, characterized in that each preset event comprises an event type, an event direction and a preset speed, and each preset event is mapped to one simulated touch instruction; the preset speed is a value range with upper and lower bounds; the simulated touch instructions comprise directional slide instructions and click instructions.
10. The system for simulating touch instructions by shielding a camera shooting area according to claim 9, characterized in that the simulated touch instruction generation unit operates as follows: it judges whether the recognized operation event matches a preset event in the preset event sequence; if the event type and event direction of the recognized operation event match those of a preset event and the event speed falls within the preset speed range, it generates the simulated touch instruction corresponding to the operation event and passes it to the simulated touch instruction execution unit; if they do not match, it generates no simulated touch instruction, aborts processing of the current shielding operation, and returns to the imaging unit to wait for the next shielding operation.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410845233.1A CN104571513A (en) | 2014-12-31 | 2014-12-31 | Method and system for simulating touch instructions by shielding camera shooting area |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104571513A true CN104571513A (en) | 2015-04-29 |
Family
ID=53087791
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070201863A1 (en) * | 2006-02-28 | 2007-08-30 | Microsoft Corporation | Compact interactive tabletop with projection-vision |
CN102566827A (en) * | 2010-12-30 | 2012-07-11 | 株式会社理光 | Method and system for detecting object in virtual touch screen system |
CN102799344A (en) * | 2011-05-27 | 2012-11-28 | 株式会社理光 | Virtual touch screen system and method |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109791738A (en) * | 2016-10-07 | 2019-05-21 | 爱信艾达株式会社 | Driving assist system and computer program |
CN109791738B (en) * | 2016-10-07 | 2021-12-21 | 爱信艾达株式会社 | Travel assist device and computer program |
CN109542315A (en) * | 2018-11-22 | 2019-03-29 | 维沃移动通信有限公司 | The control method and system of mobile terminal |
CN109542315B (en) * | 2018-11-22 | 2021-05-25 | 维沃移动通信有限公司 | Control method and system of mobile terminal |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20150429 |