CN103824282A - Touch and motion detection using surface map, object shadow and a single camera - Google Patents

Touch and motion detection using surface map, object shadow and a single camera

Info

Publication number
CN103824282A
Authority
CN
China
Prior art keywords
camera
reference field
projector
light pattern
shade
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410009366.5A
Other languages
Chinese (zh)
Other versions
CN103824282B (en)
Inventor
张玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hong Kong Applied Science and Technology Research Institute ASTRI
Original Assignee
Hong Kong Applied Science and Technology Research Institute ASTRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 14/102,506 (US9429417B2)
Application filed by Hong Kong Applied Science and Technology Research Institute ASTRI filed Critical Hong Kong Applied Science and Technology Research Institute ASTRI
Publication of CN103824282A publication Critical patent/CN103824282A/en
Application granted granted Critical
Publication of CN103824282B publication Critical patent/CN103824282B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Length Measuring Devices By Optical Means (AREA)

Abstract

The present invention provides an optical method and a system for obtaining positional and/or motional information of an object with respect to a reference surface, including detecting whether the object touches the reference surface, by using a projector and a single camera. A surface map is used for mapping between a location on the reference surface and a corresponding location in a camera-captured image having a view of the reference surface. In particular, a camera-observed shadow length, i.e. the length of the part of the object's shadow observable by the camera, estimated by using the surface map, is used to compute the object's height above the reference surface (a Z coordinate). Whether or not the object touches the reference surface is also obtainable. After an XY coordinate is estimated, a 3D coordinate of the object is obtained. By computing a time sequence of 3D coordinates, motional information, such as velocity and acceleration, is obtainable.

Description

Touch and motion detection using a surface map, an object shadow and a single camera
[Cross-Reference to Related Application]
This application is a continuation-in-part of U.S. patent application Ser. No. 13/474,567, filed May 17, 2012, the disclosure of which is incorporated herein by reference in its entirety.
[Technical Field]
The present invention relates to optically determining positional or motional information of an object with respect to a reference surface, including whether the object touches the reference surface. More particularly, the present invention relates to a method and system that uses a surface map for mapping a physical location on the reference surface to a corresponding location in a camera-captured image, combined with a shadow length of the object measured from a camera-captured image, to optically determine the positional or motional information of the object.
[Background Art]
Automatically detecting, by computer, whether an object touches a reference surface and/or determining the object's positional information (e.g., spatial coordinates) or motional information (e.g., velocity, acceleration) has considerable applications in human-computer interaction, entertainment and consumer electronics. In one such application, an interactive projection system provides a display on a screen for user interaction; the system needs to determine whether a user's fingertip touches a pre-determined area of the screen so that the system can receive the user's input. Another such application relates to computer entertainment, in which the speed of a game user's fingertip striking a screen indicates whether the user provides an input to the game forcefully or hesitantly.
In order to determine the positional or motional information of an object, including whether the object touches a reference surface, the height of the object above the reference surface needs to be obtained by an optical technique. Chinese patent application publication no. 1,477,478 discloses a device that detects the height of a finger above a touch surface and determines whether the finger touches the surface. The disclosed device uses two image sensors (e.g., two cameras) for the detection. Using two cameras rather than one has practical disadvantages in product manufacturing, such as a higher cost, and the product also needs more space to accommodate the two cameras.
Using only one camera is desirable. In this case, the height of an object above the reference surface may be computed from the size of the object's shadow in a camera-captured image. However, FIG. 1 provides two examples illustrating that, in some configurations, objects at different heights can produce shadows of substantially the same size on the reference surface. Example 1 shows the projector-camera arrangement of the system of U.S. Patent Application Publication No. 2007/201,863 for determining whether a finger touches a surface. In Example 1, the projector 110 projects light vertically onto the reference surface 117 while the camera 115, offset by a distance from the vertical direction, captures the shadow 130 for computing the object's height above the reference surface 117. The shadow 130 may be produced by a first object 120a or by a second object 120b, the two objects being at different heights above the reference surface 117. Example 2 shows the projector-camera arrangement of U.S. Patent No. 6,177,682 for determining the height of an object above a reference surface. In Example 2, the camera 165 on top captures an image of the shadow 180, which is produced by the projector 160 obliquely projecting light onto the reference surface 167. Again, the shadow 180 may be produced by any of the objects 170a, 170b, 170c located at different heights above the reference surface 167; in particular, object 170c touches the reference surface 167 while the other two objects 170a, 170b do not. Hence, no unique solution for the object's height above the reference surface can be obtained.
Therefore, there is a need for a method that uses only one camera to uniquely determine or estimate the height of an object above a reference surface, and that can further obtain positional or motional information of the object.
[Summary of the Invention]
The present invention provides an optical method for obtaining positional or motional information of an object with respect to a reference surface, including detecting whether the object touches the reference surface. The reference surface may be flat or non-flat. The object has a pre-determined reference edge point. One projector and one camera are used in the method. The projector and the camera are arranged such that when an object not touching the reference surface is illuminated by the projector, a part of the object's shadow on the reference surface is observable by the camera along a topographical surface line, and the length of this part of the shadow is usable for uniquely determining the object's height above the reference surface. The topographical surface line is formed by mapping the line-of-sight path joining the camera and the reference edge point onto the reference surface. In particular, a surface map is used for mapping a location on the reference surface to a corresponding location in a camera-captured image having a view of the reference surface. The method comprises: during initialization, obtaining the surface map and a surface profile of the reference surface, the surface profile giving the height distribution of the reference surface; then detecting whether an object is present, until an object is identified to be present; and at a time instant after the object is identified to be present, performing a position-information acquisition process, which generates one or more items of positional information including whether the object touches the reference surface, the object's height above the reference surface, and a three-dimensional (3D) coordinate of the object. Usually the process is repeated at plural time instants, thereby producing a time sequence of the object's 3D coordinates as one item of motional information. From this time sequence of 3D coordinates, other items of motional information are obtainable, including the object's velocity, acceleration and direction of travel, as well as the time histories of velocity, acceleration and direction of travel.
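The step from a time sequence of 3D coordinates to velocity and acceleration histories can be sketched with first-order finite differences, assuming uniformly spaced time instants. This is our own minimal illustration, not code from the patent; the function names are hypothetical:

```python
def finite_difference(samples, dt):
    """First-order finite difference of a sequence of 3D points sampled every dt seconds."""
    return [
        tuple((b[i] - a[i]) / dt for i in range(3))
        for a, b in zip(samples, samples[1:])
    ]

def motion_history(coords, dt):
    """Return (velocity history, acceleration history) from a 3D coordinate time series."""
    velocities = finite_difference(coords, dt)       # one sample shorter than coords
    accelerations = finite_difference(velocities, dt)
    return velocities, accelerations
```

In practice the sampling interval dt would be tied to the frame rates of the projector and the camera, as noted in the detailed description.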
In determining the surface map and the surface profile, the projector projects a light pattern onto the reference surface. The camera then captures an image that includes a view of the light pattern on the reference surface. The surface map is computed from the captured image, and maps an arbitrary point in a camera-captured image to a corresponding physical location on the light-pattern-illuminated reference surface. The surface profile is then determined from the light pattern and the captured image.
In detecting whether an object is present, a test-for-presence image captured by the camera is used. Capturing of the test-for-presence image is repeated until an object is identified to be present.
The position-information acquisition process is as follows. First, a region of interest (ROI) on the reference surface is determined such that the ROI is a region surrounding and including the reference edge point. After the ROI is determined, a spot light is projected onto a region at least covering the ROI, so that the part of the object around the reference edge point is illuminated and casts a shadow on the reference surface. The spot light may be generated by the projector or by a separate light source. The camera then captures an ROI-highlighted image. From the ROI-highlighted image, a camera-observed shadow length and a shadow-projector distance are estimated by using the surface map. If the camera-observed shadow length is found to be substantially close to zero, it is determined that the object touches the reference surface, thereby providing a first item of positional information.
The object's height above the reference surface, as a second item of positional information, is estimated from a set of data comprising the surface profile of the reference surface, the camera-observed shadow length, the shadow-projector distance, the distance between the projector and the camera measured along a reference horizontal direction, the distance from the projector to the reference surface measured along a reference vertical direction, and the distance from the camera to the reference surface measured along the reference vertical direction. If the reference surface is flat, the object's height above the reference surface may be computed by equation (4) in the detailed description.
The object's height above the reference surface is the Z coordinate of the object's 3D coordinate. The Y coordinate is obtainable from the distance, measured along the reference horizontal direction, between the camera and the reference edge point; this distance may be computed by equation (3) or equation (5) in the detailed description. The X coordinate is directly obtainable from a camera-captured image and the surface map, the camera-captured image preferably being the ROI-highlighted image. The resultant 3D coordinate provides a third item of positional information.
The projector may use infrared light for projection, or a separate infrared light source may be used, with the camera configured to be sensitive at least to infrared light in capturing images.
Optionally, the projector and the camera are arranged such that when an object is present, the projector is located between the camera and the object along the reference horizontal direction, and the camera is located between the projector and the reference surface along the reference vertical direction.
As another option, a mirror may be used to reflect any image projected by the projector onto the reference surface, and to reflect any view appearing on the reference surface for capturing by the camera.
Other aspects of the present invention are disclosed in the embodiments below.
[Brief Description of the Drawings]
FIG. 1 provides two examples illustrating that objects at different heights above a reference surface can produce shadows of the same size, so that with shadow-size information alone, no unique solution exists for an object's height above the reference surface.
FIG. 2a depicts a model of an object casting a shadow on a reference surface, the model being used in developing the present invention, and shows that a unique solution exists if the camera-observed shadow length is used to compute the object's height.
FIG. 2b depicts a similar model, but with a block on the reference surface and under the reference edge point to emulate the effect of elevating the reference surface at the object's location, illustrating that a unique solution for the object's height is still obtainable even if the reference surface is non-flat.
FIG. 2c depicts a model similar to that of FIG. 2a, except that a mirror is used to reflect the image projected by the projector onto the reference surface, and to reflect the view appearing on the reference surface for capturing by the camera.
FIG. 3 is a flowchart of the steps of determining positional and motional information according to an exemplary embodiment of the present invention.
FIG. 4 is a flowchart of the steps of determining the surface map and the surface profile according to one embodiment of the present invention.
FIG. 5 is a flowchart of the steps of detecting the presence of an object according to one embodiment of the present invention.
FIG. 6 is a flowchart of the steps of the position-information acquisition process according to one embodiment of the present invention.
FIG. 7 shows an example of a light pattern.
FIG. 8 shows an example of an ROI-highlighted image, in which the camera-observed shadow is produced by an object (a finger); the camera-observed shadow length obtained therefrom is used for estimating the object's height above the reference surface.
[Detailed Description]
As used herein, a "reference vertical direction" and a "reference horizontal direction" are defined as two mutually orthogonal directions, neither of which is defined with respect to the direction of gravity. If the reference surface is flat, the reference vertical direction is defined as the direction perpendicular to the reference surface, and the reference horizontal direction is defined with respect to the reference vertical direction. For example, the reference surface may be a floor or a wall. If the reference surface is not flat, an imaginary flat surface representative of the reference surface is used instead of the reference surface in defining the reference vertical direction; that is, the reference vertical direction is then defined as the direction perpendicular to this imaginary flat surface.
"The object's height above the reference surface," as used herein and in the appended claims, is defined as the distance, measured along the reference vertical direction, from a pre-determined reference edge point of the object to the reference surface. One example of a reference edge point is a fingertip, where the object is a finger. Another example of a reference edge point is the tip of a pen, where the pen is the object.
In addition, as used herein and in the appended claims, "presence of an object" means that an object appears in the field of view of the camera. Similarly, "absence of an object" means that no object appears in this field of view.
a. Mathematical development
FIG. 2a depicts a model of an object casting a shadow on a reference surface. This model is used in developing the present invention. A reference vertical direction 202 and a reference horizontal direction 204 are defined with respect to the reference surface 230. The projector 210 illuminates an object 220 having a pre-determined reference edge point 225. Light from the projector 210 is blocked by the object 220, producing a shadow 241a on the reference surface 230. In particular, light from the projector 210 travels along a line-of-sight path 250 and grazes the reference edge point 225, thereby producing the starting point 242a of the shadow 241a. The camera 215 captures the object 220 in its field of view together with the part of the shadow 241a observable by the camera 215. This camera-observable part of the shadow 241a is formed along a topographical surface line 235, which is formed by mapping the line-of-sight path 255 joining the camera 215 and the reference edge point 225 onto the reference surface 230. The part of the shadow 241a not observable by the camera is blocked by the object 220 and is not captured by the camera 215.
The part of the shadow 241a observable by the camera 215 has a camera-observed shadow length 240a, denoted S. Let H_f denote the height of the object 220 above the reference surface 230. Let L_f denote the distance, along the reference horizontal direction 204, between the camera 215 and the reference edge point 225. Let D denote the shadow-projector distance, i.e. the distance, along the reference horizontal direction 204, between the projector 210 and the starting point 242a of the shadow 241a. Let L_p denote the distance, along the reference horizontal direction 204, between the projector 210 and the camera 215. Let H_p denote the distance, along the reference vertical direction 202, from the projector 210 to the reference surface 230. Let H_c denote the distance, along the reference vertical direction 202, from the camera 215 to the reference surface 230. Since two similar triangles are formed with sides coinciding with the line-of-sight path 250, equation (1) is obtained:
(D + L_p - L_f) / D = H_f / H_p    (1)
In addition, two similar triangles are also formed with sides coinciding with the line-of-sight path 255, giving equation (2):
(S + D + L_p - L_f) / (S + D + L_p) = H_f / H_c    (2)
From equation (1), expressing L_f in terms of H_f gives
L_f = D + L_p - D*H_f / H_p    (3)
Substituting equation (3) into equation (2) and rearranging algebraically gives
H_f = H_c*H_p*S / ((S + D + L_p)*H_p - H_c*D)    (4)
The H_f so obtained, which is the height of the object 220 above the reference surface 230, is uniquely determined by S (the camera-observed shadow length 240a) and D (the shadow-projector distance); both parameters are obtainable from a camera-captured image and a surface map, as explained below. The remaining parameters in equation (4), namely L_p, H_p and H_c, are known once the camera 215 and the projector 210 are set up.
From equation (4), it is evident that H_f = 0 if S = 0. Hence, if the camera-observed shadow length 240a is substantially close to zero, or if the camera-observable part of the shadow 241a is simply absent, it is determined that the object touches the reference surface 230.
As a further result, with the value of H_f computed by equation (4), L_f is obtainable from equation (3), or directly by equation (5):
L_f = D + L_p - H_c*D*S / ((S + D + L_p)*H_p - H_c*D)    (5)
The Y coordinate of the object is obtainable from L_f. The X coordinate of the object is obtainable from a camera-captured image and the surface map, as detailed below. With H_f computed by equation (4), the 3D coordinate of the object is then obtained.
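Under the flat-surface assumption, equations (4) and (5) are straightforward to evaluate from the two measured quantities S and D and the three fixed setup parameters L_p, H_p and H_c. The following sketch is our own illustration (not code from the patent), with hypothetical function names:

```python
def height_above_surface(S, D, L_p, H_p, H_c):
    """Equation (4): object height H_f from the camera-observed shadow length S
    and the shadow-projector distance D, assuming a flat reference surface."""
    return H_c * H_p * S / ((S + D + L_p) * H_p - H_c * D)

def camera_to_edge_distance(S, D, L_p, H_p, H_c):
    """Equation (5): distance L_f, along the reference horizontal direction,
    from the camera to the reference edge point."""
    return D + L_p - H_c * D * S / ((S + D + L_p) * H_p - H_c * D)
```

Note that S = 0 immediately yields H_f = 0, which is the basis of the touch test described above.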
FIG. 2a shows the camera 215 positioned lower than the projector 210 along the reference vertical direction 202, and farther from the object 220 than the projector 210 along the reference horizontal direction 204. However, the present invention is not limited to this positional configuration of the camera 215 and the projector 210. Along the reference vertical direction 202, the projector 210 may be lower than the camera 215, as long as the projector 210 does not block the line-of-sight path 255. Similarly, along the reference horizontal direction 204, the camera 215 may be closer to the object 220 than the projector 210, as long as the camera 215 does not block the line-of-sight path 250.
The model shown in FIG. 2b is similar to that of FIG. 2a, except that a rectangular block 260 of height H_o is placed under the reference edge point 225 of the object 220. Introducing the block 260 is equivalent to elevating, by H_o, the part of the reference surface 230 under the reference edge point 225. This gives rise to a distorted shadow 241b having a shifted starting point 242b, resulting in a lengthened camera-observed shadow length S' and a shortened shadow-projector distance D'. They are related by:
D' / D = (H_p - H_o) / H_p    (6)
and
S = S' + D' - D    (7)
Hence, H_f remains uniquely determinable. This result demonstrates that even if the reference surface 230 is not flat, the height distribution of the reference surface 230 (which may be regarded as the surface profile of the reference surface 230) still allows the object's height above the reference surface 230 to be uniquely determined. Those of ordinary skill in the art can readily make appropriate modifications to equation (4) to determine an object's height above a non-flat reference surface having a known surface profile.
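Equations (6) and (7) can be inverted to recover the undistorted values of S and D from the observed S' and D' when the local elevation H_o under the edge point is known from the surface profile. A minimal sketch, again our own illustration with a hypothetical function name:

```python
def correct_for_raised_surface(S_prime, D_prime, H_p, H_o):
    """Undo the distortion caused by an elevation H_o under the reference edge point:
    equation (6) recovers D from D', then equation (7) recovers S.
    Returns (S, D) for use in equations (4) and (5)."""
    D = D_prime * H_p / (H_p - H_o)   # from D'/D = (H_p - H_o)/H_p
    S = S_prime + D_prime - D         # equation (7)
    return S, D
```

With H_o = 0 the correction degenerates to S = S' and D = D', i.e. the flat-surface case.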
FIG. 2c shows a model functionally equivalent to that of FIG. 2a. A mirror 280 reflects the image projected by the projector 270 onto the reference surface 230, and reflects the view appearing on the reference surface 230 to the camera 275. With the mirror 280 in place, the projector 270 and the camera 275 have a virtual projector 271 and a virtual camera 276, respectively. The virtual projector 271 and the virtual camera 276 are functionally equivalent to the projector 210 and the camera 215, respectively, in the model of FIG. 2a.
b. The present invention
The present invention provides an optical system comprising one projector and one camera for obtaining positional or motional information of an object with respect to a reference surface. A particular advantage of the present invention is that only one camera is used. The object has a pre-determined reference edge point. The positional information includes whether the object touches the reference surface, the object's height above the reference surface, and a 3D coordinate of the object. The motional information includes a time sequence of 3D coordinates, and further includes the object's velocity, acceleration, direction of travel and their time histories. The positional and motional information is with respect to the reference surface, in the sense that the 3D coordinate of the object uses the reference surface as the XY plane of the coordinate system if the reference surface is flat. If the reference surface is non-flat, those of ordinary skill in the art can adapt the coordinate system according to the surface profile of the reference surface.
FIG. 3 is a flowchart of the main steps of the method according to an exemplary embodiment of the present invention. In the first step 310, the surface profile and a surface map of the reference surface are obtained. As mentioned above, the surface profile is characterized by the height distribution of the reference surface. The surface map maps any point (or any pixel) in a camera-captured image to a corresponding physical location on the reference surface. By using the surface map, a point or pixel of interest in a captured image can be mapped to its corresponding location on the reference surface. Step 310 is usually performed during system initialization. In step 320, the system detects whether an object is present, until an object is identified to be present. The presence of an object triggers the next step 330, in which the position-information acquisition process is performed immediately after the object is identified to be present, generating one or more items of positional information. From the one or more items of positional information, motional information can be computed in step 340. The position-information acquisition process of step 330 and the motional-information computation of step 340 are often repeated, i.e. steps 330 and 340 are performed at plural time instants. In general, the choice of time instants is based on hardware constraints, such as the need to match the frame rates of the projector and the camera.
A further feature of the method is that the positions of the projector and the camera are arranged such that when an object not touching the reference surface is illuminated by the projector, a part of the object's shadow on the reference surface is observable by the camera. In particular, the shadow is formed along a topographical surface line, which is formed by mapping the straight line-of-sight path joining the camera and the reference edge point onto the reference surface. The length of this part of the shadow, referred to as the camera-observed shadow length, is used in the position-information acquisition process for uniquely determining the object's height above the reference surface.
The surface map and the surface profile may be determined by the techniques disclosed in U.S. patent application Ser. No. 13/474,567. FIG. 4 shows one embodiment of the invention that determines the surface map and the surface profile of the reference surface according to those techniques. First, the projector projects a light pattern onto the reference surface (step 410). In practice, the light pattern is usually designed as a regular pattern, such as a structured grid, a regular grid or a rectangular grid. FIG. 7 shows a rectangular grid; this rectangular grid 710 is used here as an example of the light pattern. The rectangular grid 710 has plural crossing points 721-728. These crossing points 721-728 can be easily identified and located in a camera-captured image, as long as the captured image includes the reference surface onto which the rectangular grid 710 is projected. There is a one-to-one correspondence between each crossing point 721-728 identified in the captured image and the respective point in the light pattern. Given the projector's height above the reference surface and the projector's projection angle, the physical location (e.g., an XY coordinate) of each crossing point 721-728 of the light pattern projected onto the reference surface can be obtained. The surface map can therefore be computed and constructed. For a non-flat reference surface, the edges of the rectangular grid 710 may be distorted and/or broken when the rectangular grid 710 is projected onto the reference surface. By analyzing the deviations of these edges, as seen by the camera, from the corresponding edges on an imaginary flat surface, the surface profile can be estimated. Determining the surface map and the surface profile may be summarized as follows: the camera captures the light pattern projected onto the reference surface (step 420); the surface map is then computed from the correspondence between a set of points in the light pattern and the set of respective points in the captured image (step 430); and after the surface map is obtained, the surface profile is determined from the light pattern and the captured image (step 440).
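The core of the surface-map idea is that the grid crossing points give image-to-surface correspondences, between which any other image point can be interpolated. The sketch below, our own simplified illustration rather than the patent's method, maps image points within a single axis-aligned grid cell by bilinear interpolation; a real system would cover the whole grid and account for lens distortion:

```python
def bilinear_surface_map(image_cell, surface_cell):
    """image_cell, surface_cell: corner points (p00, p10, p01, p11) of one grid
    cell, in image pixels and in physical surface coordinates respectively.
    The image cell is assumed axis-aligned. Returns a function mapping an
    image point (px, py) to a surface location (x, y)."""
    (ix0, iy0), (ix1, iy1) = image_cell[0], image_cell[3]  # opposite corners
    def to_surface(px, py):
        u = (px - ix0) / (ix1 - ix0)   # normalized position within the cell
        v = (py - iy0) / (iy1 - iy0)
        weights = ((1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v)
        x = sum(w * p[0] for w, p in zip(weights, surface_cell))
        y = sum(w * p[1] for w, p in zip(weights, surface_cell))
        return x, y
    return to_surface
```

Because the surface corners need not form a rectangle, the same interpolation accommodates the distorted grid cells seen on a non-flat reference surface.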
In one embodiment, the presence of an object is determined from a test-for-presence image captured by the camera while the projector projects a second light pattern onto the reference surface. FIG. 5 is a flowchart of detecting the presence of an object according to this embodiment. The projector projects the second light pattern onto the reference surface (step 510). The second light pattern may be the light pattern used for determining the surface map and the surface profile, or another light pattern that is adapted or optimized for the reference surface after the surface profile is obtained; the latter is more suitable for a non-flat reference surface. The camera then captures a test-for-presence image (step 520). From the test-for-presence image, the presence of an object can be detected by either of two methods. The first object-detection method compares the test-for-presence image with a comparison image of the second light pattern (step 530). The comparison image is obtained by the camera capturing the second light pattern on the reference surface in the absence of any object; alternatively, the comparison image may be computed from the second light pattern according to the surface map. One way of comparing the test-for-presence image and the comparison image is to compute the difference image between them. In the absence of an object, the test-for-presence image and the comparison image are substantially the same, so most pixel values of the difference image are close to zero, or below a certain threshold. If an object is present, the difference image contains a contiguous region whose pixel values exceed the threshold, whereby the object is identified to be present. Detecting the presence of an object through the difference image has an advantage: owing to computational parallelism, the difference image can be computed very quickly, so the detection rate can be made very high. The second object-detection method compares the second light pattern with a reconstructed light pattern obtained from the test-for-presence image (step 535). The reconstructed light pattern is reconstructed from the test-for-presence image according to the surface map; hence, in the absence of an object, the reconstructed light pattern should be substantially the same as the second light pattern. Similar to the first object-detection method, a difference image between the second light pattern and the reconstructed light pattern can be computed for detecting the presence of an object. Steps 520 and 530 (or 535) are repeated until an object is identified to be present (decision step 540).
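The first object-detection method can be sketched as follows. This is our own simplified illustration: it counts differing pixels rather than checking for a contiguous region as the embodiment describes, and the images and thresholds are illustrative (flat lists of grayscale values):

```python
def object_present(test_image, comparison_image, pixel_threshold=30, count_threshold=50):
    """Difference-image test for the presence of an object: an object is declared
    present when enough pixels differ by more than pixel_threshold.
    A fuller implementation would also require the differing pixels to form a
    contiguous region, per the embodiment."""
    differing = sum(
        1 for a, b in zip(test_image, comparison_image)
        if abs(a - b) > pixel_threshold
    )
    return differing >= count_threshold
```

Each pixel comparison is independent, which is the computational parallelism the text credits for the high achievable detection rate.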
FIG. 6 is a flowchart of the position-information acquisition process according to one embodiment of the present invention.
In the first step 610 of the position-information acquisition process, a region of interest (ROI) on the reference surface is determined. The ROI is a region surrounding and including the reference edge point. After the presence of an object is identified in step 320, the ROI determination in step 610 is preferably based on the last test-for-presence image obtained in step 320: the ROI is determined by identifying, in that image, a pattern that matches the features of the part of the object around the reference edge point. If the object is being continuously tracked, determining the ROI for the current time instant can be simplified based on the previously determined ROI, because checking whether the reference edge point is still within the previously determined ROI, or predicting the current ROI location from the previous movement trajectory of the reference edge point, is easier than performing pattern identification over a large area.
After the ROI is determined, a spot light is projected onto a region at least covering the ROI, so that the part of the object around the reference edge point is illuminated and casts a shadow on the reference surface, unless the object is very close to the reference surface (step 620). The spot light is preferably generated by the projector, but may alternatively be generated by a separate light source. As mentioned above, the part of the shadow observable by the camera is formed along a topographical surface line, which is formed by mapping the line-of-sight path joining the camera and the reference edge point onto the reference surface.
The camera then captures an ROI-highlighted image (step 625). An example of an ROI-highlighted image is shown in Fig. 8, where the object is a finger 830 and the reference edge point is the fingertip 835 of the finger 830. A spot light 820 is projected onto the reference surface 810, forming a shadow 840.
After the ROI-highlighted image is obtained, the camera-observed shadow length is computed (step 630). As mentioned above, the camera-observed shadow length is the length of the part of the shadow observable by the camera. It is computed from the ROI-highlighted image with the aid of the surface map.
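A rough sketch of step 630: detected shadow pixels are mapped through the surface map to physical locations on the reference surface, and the camera-observed shadow length is taken as the largest separation between mapped points. The helper names and the toy flat-surface map (a fixed millimetres-per-pixel scale) are assumptions for illustration only.

```python
import math

def shadow_length_on_surface(shadow_pixels, surface_map):
    """Estimate the camera-observed shadow length.

    `shadow_pixels` are image coordinates of the detected shadow segment;
    `surface_map` maps an image point to a physical (x, y) location on the
    reference surface. The length is taken between the two mapped points
    that are farthest apart.
    """
    pts = [surface_map(p) for p in shadow_pixels]
    best = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            best = max(best, math.dist(pts[i], pts[j]))
    return best

def smap(p):
    # toy surface map for a flat surface: 0.5 mm per pixel, no distortion
    return (p[0] * 0.5, p[1] * 0.5)

print(shadow_length_on_surface([(10, 10), (14, 10), (18, 10)], smap))  # → 4.0
```

A real surface map would be the point-to-point correspondence calibrated from the projected light pattern, not a uniform scale, but the measurement step is the same.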
If the camera-observed shadow length is substantially close to zero, it can be determined that the object touches the reference surface (step 640), thereby providing a first piece of positional information. In practice, the camera-observed shadow length is regarded as substantially close to zero if it is below a certain threshold, or if the part of the object's shadow observable by the camera is undetectable. In some practical applications this alone is sufficient to confirm that the object touches the reference surface. In one embodiment, once the object is confirmed to touch the reference surface, the position-information acquisition procedure is terminated and the next step begins.
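The touch decision of step 640 thus reduces to a threshold test. A minimal sketch, with a purely illustrative threshold value:

```python
def touches_surface(shadow_len_mm, threshold_mm=1.0):
    """The object is deemed to touch the reference surface when the
    camera-observed shadow length is substantially close to zero, i.e.
    below a small threshold. An undetectable shadow maps to length 0,
    so it also registers as a touch. The 1 mm default is illustrative."""
    return shadow_len_mm < threshold_mm

print(touches_surface(0.2))  # → True  (touching)
print(touches_surface(6.3))  # → False (hovering)
```

In a deployed system the threshold would be tuned to the camera resolution and the accuracy of the surface map.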
If the position-information acquisition procedure continues after the first positional information is obtained in step 640, a shadow-projector distance can be computed from the ROI-highlighted image by using the surface map (step 650). As mentioned above, the shadow-projector distance is the distance, measured along the reference-horizontal direction, between the projector and the shadow's starting point, where the starting point is the specific point on the shadow onto which the reference edge point is projected.
The second piece of positional information provided by the method is the height of the object above the reference surface. After the camera-observed shadow length and the shadow-projector distance are obtained, the height of the object above the reference surface (H_f) is computed in step 660 from a group of data comprising the surface profile, the camera-observed shadow length (S), the shadow-projector distance (D), the distance between the projector and the camera measured along the reference-horizontal direction (L_p), the distance from the projector to the reference surface measured along the reference-vertical direction (H_p), and the distance from the camera to the reference surface measured along the reference-vertical direction (H_c). If the reference surface is flat, the height of the object above the reference surface can be computed according to equation (4). Note that if the camera-observed shadow length is found to be substantially close to zero, the height of the object above the reference surface can be directly set to zero. Note also that this height is the Z coordinate of the object's three-dimensional coordinate.
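For a flat reference surface, equation (4) can be evaluated directly from the group of data. The sketch below assumes the flat-surface form of equation (4), H_f = H_c·H_p·S / ((S + D + L_p)·H_p − H_c·D), with all lengths in consistent units, and applies the shortcut noted above of setting the height to zero for a vanishing shadow length.

```python
def object_height(S, D, Lp, Hp, Hc):
    """Height H_f of the object above a flat reference surface.

    S:  camera-observed shadow length
    D:  shadow-projector distance (reference-horizontal direction)
    Lp: projector-to-camera distance (reference-horizontal direction)
    Hp: projector height above the surface (reference-vertical direction)
    Hc: camera height above the surface (reference-vertical direction)
    """
    if S == 0:
        return 0.0  # touching: height set directly to zero
    return Hc * Hp * S / ((S + D + Lp) * Hp - Hc * D)

# Illustrative numbers (cm): 2 cm of visible shadow
print(object_height(0, 10, 20, 50, 40))            # → 0.0
print(round(object_height(2, 10, 20, 50, 40), 3))  # → 3.333
```

Note that the height grows with the camera-observed shadow length S, which is what makes S usable for uniquely determining H_f.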
To complete the three-dimensional coordinate, the XY coordinate of the object can be obtained as follows. The Y coordinate can be taken as the distance, measured along the reference-horizontal direction, between the camera and the reference edge point (i.e. L_f). The distance L_f is computed by equation (3) or equation (5) (step 670). The X coordinate of the object can be obtained directly from a camera-captured image and the surface map, by locating the object in the camera-captured image and then mapping this location through the surface map to the object's physical location (step 675). The camera-captured image is preferably the ROI-highlighted image obtained in step 625. The three-dimensional coordinate of the object is then obtained from the XY coordinate and the object's height above the reference surface (the Z coordinate), thereby providing a third piece of positional information (step 680).
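Steps 670–680 simply assemble the three coordinates from their separate sources. A minimal sketch, in which the helper name and the toy surface map are assumptions, and L_f and H_f are taken as already computed by the earlier steps:

```python
def object_3d_coordinate(image_xy, surface_map, L_f, H_f):
    """Assemble the object's 3-D coordinate: X from mapping the object's
    image position through the surface map, Y from the camera-to-edge-point
    horizontal distance L_f (equation (3) or (5)), and Z from the height
    H_f above the reference surface (equation (4))."""
    x, _ = surface_map(image_xy)   # only the X part of the mapped point is used
    return (x, L_f, H_f)

def smap(p):
    # toy flat-surface map: 0.5 mm per pixel
    return (p[0] * 0.5, p[1] * 0.5)

# Fingertip seen at pixel (200, 120), 350 mm from the camera, touching
print(object_3d_coordinate((200, 120), smap, 350.0, 0.0))  # → (100.0, 350.0, 0.0)
```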
As mentioned above, the position-information acquisition procedure is repeated a number of times to obtain a time series of the object's three-dimensional coordinates. This time series itself provides a first piece of motional information. From it, one or more further pieces of motional information of the object can be obtained; examples include the object's velocity, acceleration and travel direction, as well as the time histories of the velocity, the acceleration and the travel direction.
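Given the time series of three-dimensional coordinates, velocity and acceleration follow from finite differences. A sketch assuming a fixed sampling interval dt between repetitions of the procedure:

```python
def motion_from_trajectory(coords, dt):
    """Derive motional information from a time series of 3-D coordinates:
    per-step velocity vectors, and acceleration vectors from successive
    velocities, by first-order finite differences."""
    vel = [tuple((b - a) / dt for a, b in zip(p, q))
           for p, q in zip(coords, coords[1:])]
    acc = [tuple((b - a) / dt for a, b in zip(u, v))
           for u, v in zip(vel, vel[1:])]
    return vel, acc

# A fingertip descending onto the surface, sampled every 0.5 s (mm)
track = [(0.0, 0.0, 4.0), (1.0, 0.0, 2.0), (3.0, 0.0, 0.0)]
vel, acc = motion_from_trajectory(track, 0.5)
print(vel)  # → [(2.0, 0.0, -4.0), (4.0, 0.0, -4.0)]
print(acc)  # → [(4.0, 0.0, 0.0)]
```

The travel direction and the various time histories mentioned above follow directly from these per-step vectors.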
In a system implementing the method, the projector may use visible or invisible light for projection; the choice depends on the application. For example, an interactive projection system that requires the user to press his or her finger on a touch screen for input preferably uses invisible light, preferably infrared light, for the projector. Likewise, for producing the spot light, infrared light may also be generated by an independent light source. When infrared light is produced by the projector or by an independent light source, the camera is configured to sense infrared light when capturing images.
In an implementation of the method, the projector and the camera are arranged such that (i) the object casts a shadow on the reference surface, and (ii) the field of view of the camera preferably covers the entire light pattern projected onto the reference surface. In one option, the projector and the camera are arranged such that (i) when the object appears, the projector is between the camera and the object along the reference-horizontal direction, and (ii) the camera is between the projector and the reference surface along the reference-vertical direction. In another option, a mirror is used to reflect any image projected by the projector onto the reference surface, and to reflect any scene appearing on the reference surface for capture by the camera.
In some applications using the method, the light pattern and the second light pattern are identical. If the spot light is produced by an independent light source, the projector only needs to project the light pattern. In this case, the projector can be implemented at low cost as a light source carrying a fixed projected light pattern.
A system for obtaining positional or motional information of an object with respect to a reference surface comprises a projector and a camera, and is configured to determine the positional or motional information according to the method embodiments described above. Optionally, the system may integrate the projector and the camera into a single stand-alone unit. Typically, one or more processors are embedded in the system for performing the computation and estimation steps of the method.
The method and system disclosed herein can be used for, or as, an interactive projection system. In an interactive projection system, the object is a user's finger and the reference edge point is the fingertip of the finger. Detecting the appearance of the finger and its touching of the reference surface thereby enables the interactive projection system to provide user-input information.
The present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive. The scope of the invention is indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (21)

1. An optical method for a system including a projector and a camera, for obtaining positional information or motional information of an object with respect to a reference surface, the object having a pre-determined reference edge point, the method comprising:
obtaining a surface profile of the reference surface, and a surface map for mapping any point on a camera-captured image to a corresponding physical location on the reference surface;
initiating a position-information acquisition process at a time instant after the object is determined to have appeared; and
arranging the projector and the camera in a position configuration such that, when the object, while not touching the reference surface, is illuminated by the projector, a part of the object's shadow formed on the reference surface along a topographical surface line is observable by the camera, whereby the length of this part of the shadow, referred to as the camera-observed shadow length, is usable in the position-information acquisition process for uniquely determining the height of the object above the reference surface.
2. the method for claim 1, wherein said topographical surface line is that the light path that connects described camera and described reference edge point is mapped on described reference field and forms.
3. the method for claim 1, wherein said position-information acquisition process comprises:
A ROI-process decision chart picture of taking from described camera, determine a ROI(region-of-interest), make described ROI comprise at least one around with the region that comprises described reference edge point;
The focus irradiation to one of described projector or an arbitrary source generation is at least covered on the region of ROI, make described reference edge point object around be irradiated to and on described reference field, produce a shade, unless described object is very near described reference field;
By using described surface mapping figure, from a ROI-highlight image, estimate described camera calibration to shade length, wherein said ROI-highlights image and is taken by described camera producing after described optically focused;
If described camera calibration to shade length approach zero, so determine described object touch described reference field, primary importance information is provided thus.
4. The method of claim 3, wherein the position-information acquisition process further comprises:
estimating a shadow-projector distance from the ROI-highlighted image by using the surface map; and
estimating the height of the object above the reference surface from a group of data comprising the surface profile, the camera-observed shadow length, the shadow-projector distance, the distance between the projector and the camera measured along a reference-horizontal direction, the distance from the projector to the reference surface measured along a reference-vertical direction, and the distance from the camera to the reference surface measured along the reference-vertical direction, thereby providing second positional information.
5. The method of claim 4, wherein the estimating of the height of the object above the reference surface comprises computing, if the reference surface is flat:

H_f = H_c H_p S / [ (S + D + L_p) H_p - H_c D ]

wherein H_f is the height of the object above the reference surface, S is the camera-observed shadow length, D is the shadow-projector distance, L_p is the distance between the projector and the camera measured along the reference-horizontal direction, H_p is the distance from the projector to the reference surface measured along the reference-vertical direction, and H_c is the distance from the camera to the reference surface measured along the reference-vertical direction.
6. The method of claim 4, wherein the position-information acquisition process further comprises:
estimating the distance, measured along the reference-horizontal direction, between the projector and the reference edge point according to one of the following:
(a) the group of data; or
(b) the height of the object above the reference surface, the shadow-projector distance, the distance between the projector and the camera measured along the reference-horizontal direction, and the distance from the projector to the reference surface measured along the reference-vertical direction;
obtaining an X coordinate of the object from a camera-captured image and the surface map; and
obtaining a three-dimensional coordinate of the object according to the X coordinate of the object, the distance between the projector and the reference edge point measured along the reference-horizontal direction, and the height of the object above the reference surface, thereby providing third positional information.
7. The method of claim 6, further comprising: repeating the position-information acquisition process at a plurality of time instants to obtain a time series of three-dimensional coordinates of the object, thereby providing a piece of motional information.
8. The method of claim 7, further comprising: computing, from the time series of three-dimensional coordinates, one or more additional pieces of motional information of the object, including velocity, acceleration, travel direction, a time history of the velocity, a time history of the acceleration, and a time history of the travel direction.
9. the method for claim 1, wherein said acquisition surface profile and surface mapping figure comprise:
By light pattern of described projector projects to described reference field;
Described in taking in the time not having described object to occur by described camera, light pattern projects described benchmark
Image on face does not therefore have described object in described image shot by camera;
Determine described surface profile from described light pattern and described image shot by camera;
According to the coupling between one group of discernible respective point on one group of point and described image shot by camera on described light pattern, calculate described surface mapping figure.
10. The method of claim 9, wherein the light pattern is a structured grid, a regular grid or a rectangular grid.
11. The method of claim 9, further comprising:
determining whether the object appears, until the appearance of the object is identified, thereby triggering the start of the position-information acquisition process at the aforesaid time instant;
wherein:
the appearance of the object is determined from an appearance-test image captured by the camera while the projector projects a second light pattern onto the reference surface; and
the second light pattern is either the light pattern, or another light pattern suitable for, or optimized for, the reference surface.
12. The method of claim 11, wherein the appearance of the object is further determined by:
computing a difference image between the appearance-test image and a comparison image of the second light pattern, wherein the comparison image is computed from the second light pattern according to the surface map; or
computing a difference image between the second light pattern and a reconstructed light pattern, wherein the reconstructed light pattern is reconstructed from the appearance-test image according to the surface map, such that if the object is absent, the reconstructed light pattern is substantially similar to the second light pattern.
13. the method for claim 1, wherein said position configuration is:
In the time that described object occurs, in a datum-plane direction, described projector is between described camera and described object;
In a benchmark vertical direction, described camera is between described projector and described reference field.
14. the method for claim 1, also comprise: reflect by described projector projects to all images on described reference field with a catoptron, and reflect all pictures on present described reference field, take for described camera.
15. The method of claim 3, wherein the determining of the ROI comprises: determining the ROI by identifying, on the ROI-decision image, a pattern that matches features of the object around the reference edge point.
16. The method of claim 3, wherein the projector or said any light source uses infrared light, and wherein the camera is configured to at least sense infrared light when capturing images.
17. The method of claim 11, wherein the projector uses infrared light for image projection, or said any light source is an infrared light source, and wherein the camera is configured to at least sense infrared light when capturing images.
18. A system for obtaining positional information or motional information of an object with respect to a reference surface, the object having a pre-determined reference edge point, wherein the system comprises a projector and a camera, and is configured to obtain the positional information or the motional information by the method of claim 1.
19. The system of claim 18, wherein the object is a finger and the reference edge point is a fingertip, and a touching of the reference surface by the finger provides user-input information to the system.
20. A system for obtaining positional information or motional information of an object with respect to a reference surface, the object having a pre-determined reference edge point, wherein the system comprises a projector and a camera, and is configured to obtain the positional information or the motional information by the method of claim 3.
21. A system for obtaining positional information or motional information of an object with respect to a reference surface, the object having a pre-determined reference edge point, wherein the system comprises a projector and a camera, and is configured to obtain the positional information or the motional information by the method of claim 8.
CN201410009366.5A 2013-12-11 2014-01-09 Touch and motion detection using surface map, object shadow and a single camera Active CN103824282B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US14/102,506 2013-12-11
US14/102,506 US9429417B2 (en) 2012-05-17 2013-12-11 Touch and motion detection using surface map, object shadow and a single camera

Publications (2)

Publication Number Publication Date
CN103824282A true CN103824282A (en) 2014-05-28
CN103824282B CN103824282B (en) 2017-08-08

Family

ID=50759324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410009366.5A Active CN103824282B (en) 2013-12-11 2014-01-09 Touch and motion detection using surface map, object shadow and a single camera

Country Status (1)

Country Link
CN (1) CN103824282B (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6177682B1 (en) * 1998-10-21 2001-01-23 Novacam Tyechnologies Inc. Inspection of ball grid arrays (BGA) by using shadow images of the solder balls
US20070201863A1 (en) * 2006-02-28 2007-08-30 Microsoft Corporation Compact interactive tabletop with projection-vision
CN101068606A (en) * 2004-12-03 2007-11-07 世嘉股份有限公司 Gaming machine
CN101571776A (en) * 2008-04-21 2009-11-04 株式会社理光 Electronics device having projector module
CN102779001A (en) * 2012-05-17 2012-11-14 香港应用科技研究院有限公司 Light pattern used for touch detection or gesture detection
CN103383731A (en) * 2013-07-08 2013-11-06 深圳先进技术研究院 Projection interactive method and system based on fingertip positioning and computing device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
PENG SONG ET AL.: "Vision-based Projected Tabletop Interface for Finger Interactions", 《IEEE INTERNATIONAL CONFERENCE ON HUMAN-COMPUTER INTERACTION (HCI 2007)》 *

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106415460B (en) * 2016-07-12 2019-04-09 香港应用科技研究院有限公司 Wearable device with intelligent subscriber input interface
CN106415460A (en) * 2016-07-12 2017-02-15 香港应用科技研究院有限公司 Wearable device with intelligent user input interface
CN106774846A (en) * 2016-11-24 2017-05-31 中国科学院深圳先进技术研究院 Alternative projection method and device
CN114838675A (en) * 2017-10-06 2022-08-02 先进扫描仪公司 Generating one or more luminance edges to form a three-dimensional model of an object
CN107943351A (en) * 2017-11-22 2018-04-20 苏州佳世达光电有限公司 Touch identifying system and method in perspective plane
CN111727435A (en) * 2017-12-06 2020-09-29 伊利诺斯工具制品有限公司 Method for enlarging detection area of shadow-based video intrusion detection system
CN111727435B (en) * 2017-12-06 2024-04-26 伊利诺斯工具制品有限公司 Method for increasing detection area of shadow-based video intrusion detection system
CN108108475A (en) * 2018-01-03 2018-06-01 华南理工大学 A kind of Time Series Forecasting Methods that Boltzmann machine is limited based on depth
CN108108475B (en) * 2018-01-03 2020-10-27 华南理工大学 Time sequence prediction method based on depth-limited Boltzmann machine
CN108279809A (en) * 2018-01-15 2018-07-13 歌尔科技有限公司 A kind of calibration method and device
CN108279809B (en) * 2018-01-15 2021-11-19 歌尔科技有限公司 Calibration method and device
CN108483035A (en) * 2018-03-23 2018-09-04 杭州景业智能科技有限公司 Divide brush all-in-one machine handgrip
CN110858404A (en) * 2018-08-22 2020-03-03 福州瑞芯微电子股份有限公司 Identification method based on regional offset and terminal
CN109375833A (en) * 2018-09-03 2019-02-22 深圳先进技术研究院 A kind of generation method and equipment of touch command
CN110941367A (en) * 2018-09-25 2020-03-31 福州瑞芯微电子股份有限公司 Identification method based on double photographing and terminal
CN110455201B (en) * 2019-08-13 2020-11-03 东南大学 Stalk crop height measuring method based on machine vision
CN110455201A (en) * 2019-08-13 2019-11-15 东南大学 Stalk plant height measurement method based on machine vision
CN111208479B (en) * 2020-01-15 2022-08-02 电子科技大学 Method for reducing false alarm probability in deep network detection
CN111208479A (en) * 2020-01-15 2020-05-29 电子科技大学 Method for reducing false alarm probability in deep network detection
CN112560891A (en) * 2020-11-09 2021-03-26 联想(北京)有限公司 Feature detection method and device

Also Published As

Publication number Publication date
CN103824282B (en) 2017-08-08

Similar Documents

Publication Publication Date Title
CN103824282A (en) Touch and motion detection using surface map, object shadow and a single camera
US11652965B2 (en) Method of and system for projecting digital information on a real object in a real environment
CN102508578B (en) Projection positioning device and method as well as interaction system and method
US9429417B2 (en) Touch and motion detection using surface map, object shadow and a single camera
US10607413B1 (en) Systems and methods of rerendering image hands to create a realistic grab experience in virtual reality/augmented reality environments
CN102799318B (en) A kind of man-machine interaction method based on binocular stereo vision and system
US9805509B2 (en) Method and system for constructing a virtual image anchored onto a real-world object
KR101002785B1 (en) Method and System for Spatial Interaction in Augmented Reality System
CN103477311A (en) Camera-based multi-touch interaction apparatus, system and method
US20140292648A1 (en) Information operation display system, display program, and display method
CN102622131B (en) Electronic equipment and positioning method
CN103677240A (en) Virtual touch interaction method and equipment
TWI484386B (en) Display with an optical sensor
KR101330531B1 (en) Method of virtual touch using 3D camera and apparatus thereof
CN109073363A (en) Pattern recognition device, image-recognizing method and image identification unit
US9857919B2 (en) Wearable device with intelligent user-input interface
JP2004272515A (en) Interface method, device, and program
KR20150062952A (en) Laser projector with position detecting capability and position detection method using the same
CN202443449U (en) Photographic multi-point touch system
Cheng et al. Fingertip-based interactive projector–camera system
US9551922B1 (en) Foreground analysis on parametric background surfaces
Bacim et al. Understanding touch selection accuracy on flat and hemispherical deformable surfaces
Zabulis et al. Augmented multitouch interaction upon a 2-DOF rotating disk
CN104238734A (en) three-dimensional interaction system and interaction sensing method thereof
Prima et al. A Pointing Device for 3D Interactive Spherical Displays

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant