EP3097511A1 - Method for detecting a movement path of at least one moving object within a detection region, method for detecting gestures while using such a detection method, and device for carrying out such a detection method - Google Patents
Method for detecting a movement path of at least one moving object within a detection region, method for detecting gestures while using such a detection method, and device for carrying out such a detection method
- Publication number
- EP3097511A1 (application EP15700309.6A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- detection
- movement
- detection area
- image
- pixel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Definitions
- the content of German Patent Application 10 2014 201 313.5 is incorporated herein by reference.
- the invention relates to a method for detecting a movement path of at least one moving object within a detection area. Furthermore, the invention relates to a method for gesture recognition using such a recognition method, and to an apparatus for carrying out such a recognition method or gesture recognition method.
- the described invention measures a distribution density of motion correspondences between parts of successive images.
- the essential information that is processed is a movement pattern, whereby the moving structure is only deduced in a next step.
- pattern recognition, for example face recognition, performed on one and the same image is not required.
- the essential motion information is obtained by comparing successive images. For this purpose, motion correspondences are determined between image sections of two successive images. A correspondence exists when two image parts are similar. In this preprocessing, correspondences are also permitted which do not correspond to the optical flow.
- distributions of correspondence vectors of different direction and length are produced for small image regions in each case, thus correspondence distribution profiles over the entire image. These correspondence distribution profiles are converted into a correspondence distribution density. The image flow then corresponds to the largest values of an ideal correspondence distribution density.
- the ideal correspondence distribution density corresponds to an optical flow, i.e. a "clean" optical flow.
- the preprocessing can therefore be characterized as a flow-oriented examination (FLOX), with which correspondence distribution densities are determined. A subset of such correspondence distribution densities is the optical flow.
- in addition to the optical flow, the distribution density will contain a variety of other correspondences.
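To make the notion of a correspondence distribution density concrete, here is a minimal sketch, not taken from the patent: it counts how often each discrete 2D correspondence vector occurs for one small image region. The `numpy` dependency, all names and the displacement limit are assumptions; the dominant bin of a "clean" density would correspond to the optical flow.

```python
import numpy as np

def correspondence_density(vectors, max_disp=15):
    """Count how often each discrete (dx, dy) displacement occurs.

    vectors: (N, 2) int array of correspondence vectors for one region;
    the histogram axes span [-max_disp, +max_disp] in both dimensions.
    """
    size = 2 * max_disp + 1
    density = np.zeros((size, size), dtype=int)
    for dx, dy in vectors:
        if abs(dx) <= max_disp and abs(dy) <= max_disp:
            density[dy + max_disp, dx + max_disp] += 1
    return density

# A clean flow dominates the density; the outlier (-7, 4) is one of the
# "other correspondences" that the preprocessing tolerates.
vecs = np.array([[3, 1], [3, 1], [3, 2], [-7, 4], [3, 1]])
d = correspondence_density(vecs)
peak = np.unravel_index(np.argmax(d), d.shape)
print("dominant displacement:", (peak[1] - 15, peak[0] - 15))  # -> (3, 1)
```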
- the distribution densities are checked for potential movements of compact regions
- Correspondences between similar pixels or image parts that are not images of the same object (e.g. correspondences between two adjacent file folders) lead to a pseudo-movement that usually does not propagate but remains local, comparable to the speedometer reading of spinning tires. By comparing more than two acquisition images taken in succession, such apparent movements can be excluded.
- the concatenation of plausibilized motion increments then leads to a movement, which in turn is checked for a gesture.
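A hedged sketch of this chaining step, under an assumed data layout (one tracked object, one `(position, increment)` pair per image pair): an increment is kept only if a compatible increment continues at the predicted location in the next image pair, so locally confined pseudo-movements drop out.

```python
import numpy as np

def chain_increments(increments, tol=2.0):
    """increments: list of (position, vector) pairs, one per image pair,
    each entry a 2-element numpy array; tol is an assumed tolerance."""
    chained = []
    for (p0, v0), (p1, v1) in zip(increments, increments[1:]):
        predicted = p0 + v0                          # where the motion should continue
        if (np.linalg.norm(p1 - predicted) < tol     # it continues at the expected spot
                and np.linalg.norm(v1 - v0) < tol):  # with a comparable increment
            chained.append((p1, v1))
    return chained                                   # plausibilized movement path
```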
- a suitable average of the coordinates of the common movement is used to represent the actual object.
- it is not the position of the object that determines the motion gesture, but the shape of the path, which in this case is identical for all common movements.
- the topmost of all pixels traversing a common path can also be selected and assigned. This is, for example, the fingertip of an upward-pointing finger in the image.
- the accuracy of the path must be good enough that the path shapes assigned to the different gestures can be distinguished.
- camera images can be cyclically loaded into an evaluation computer.
- the temporal spacing of the images may vary, but must be known. From two successive images, a correspondence distribution density is determined, from which movement increments are calculated per image pair. From the sequence of motion increments, motion sequences are filtered which can correspond to selected gesture movements. The number of incorrect correspondence distribution densities can be reduced by coarse distance knowledge, by suitable depth sensors, or by sharpness adjustments of the camera or flash lighting, in order to increase recognition reliability.
- no object shape detection takes place when pixels are assigned. It is checked where, in corresponding pixel groups or image areas, movements (in particular fast movements) with high density are detected, i.e. movements of pixel groups with comparable movement increments. From a detected pixel group, a representative pixel is selected and assigned on the basis of previously defined criteria for the determined distribution density and the associated movement increments. For example, a minimum density of moving pixels can be specified, and among the preselected pixels lying within pixel groups that reach this minimum density, a selection can be made according to the largest movement increment. Alternatively, it is possible to preselect according to certain movement increments and, within a pixel group having such a movement increment, to select a pixel that is distinguished by its position within the group.
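The selection just described could be prototyped as follows; the group data structure and the minimum density value are assumptions made for illustration:

```python
import numpy as np

def representative_pixel(groups, min_density=0.5):
    """groups: list of dicts with 'density' (fraction of moving pixels in the
    group) and 'pixels' (list of (position, increment_vector) pairs)."""
    candidates = [px for g in groups
                  if g["density"] >= min_density     # groups reaching the minimum density
                  for px in g["pixels"]]
    # among the preselected pixels, choose the largest movement increment
    return max(candidates, key=lambda px: np.linalg.norm(px[1]))
```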
- a prediction algorithm can simplify the assignment of a specific pixel. For this purpose, it is checked on the basis of, for example, three successive acquisition images whether a candidate pixel in the last captured acquisition image lies in an image area where it can actually be expected according to its movement in the first two consecutively acquired images. Only pixels that reach the predicted image area correspond to the prediction and thus fulfill this selection criterion. If several pixels remain after these selection criteria have been applied, a simple geometric selection can be made; for example, the uppermost of the candidate pixels in the detection area can be selected and assigned.
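A minimal sketch of this prediction criterion and the geometric tie-break, assuming three known candidate positions and a hypothetical tolerance radius:

```python
import numpy as np

def passes_prediction(p1, p2, p3, radius=4.0):
    """p1, p2: candidate positions in the first two acquisition images;
    p3: position in the last captured image (all 2-element arrays)."""
    predicted = p2 + (p2 - p1)            # linear prediction from the first two images
    return np.linalg.norm(p3 - predicted) <= radius

def topmost(candidates):
    """Geometric tie-break: the uppermost pixel (smallest image row)."""
    return min(candidates, key=lambda p: p[1])
```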
- the image areas may be individual pixels or pixel groups.
- the method steps are executed automatically and computer-aided.
- the procedure can be performed without operator intervention.
- the trajectory detection method can be run on a standard computer in real time.
- the trajectory recognition method also extracts movement increments from "dirty" flow distributions, in particular via a 2D frequency matrix, which will be described below.
- a depth-range definition according to claim 2 can be carried out with the aid of a depth sensor.
- the depth of field of a front optics of the camera sensor can be used.
- autofocus techniques can also be used for this purpose, in particular for contrast enhancement and thus for improving the result of a comparison of the acquisition images.
- the object speed can also be measured and specified for the object movement.
- the correspondence distribution density can be determined not only from objects at the expected object distance but also from objects closer to or farther from the sensor.
- with coarse-resolution depth sensors based on structured light, time-of-flight or stereoscopy, image parts can be identified that are not in the relevant distance range and whose distribution densities are then ignored.
- a depth sensor based on structured light is known, for example, from US Pat. No. 4,954,962.
- a depth sensor based on time-of-flight is known from EP 2 378 310 A1.
- ultrasonic sensors, for example, offer coarser resolution. By combining several ultrasonic sensors, the directions of objects that are within the expected distance can be determined, and other image areas can be discriminated.
- a depth-range definition according to claim 3 is possible with high precision, provided an appropriately controllable light source is present.
- a temporal variation of an illumination period relative to the exposure time during the imaging acquisition can also take place.
- if an IR filter is placed in front of the camera and the surroundings are irradiated with limited IR light power, the range is limited and correspondences of objects lying beyond it are no longer detected. If objects are very close, they are so strongly illuminated by the IR radiation that no contrasts are recognizable on them. This creates a depth range for measurable correspondences. If the IR radiation power and the exposure time are varied in a short time sequence, the measurable depth ranges can be offset in such a way that only chains of movement increments of objects that have remained throughout the measurable ranges can be made plausible.
- Another distance-dependent effect is the depth of field.
- with long focal lengths, the depth of field is less than with short-focal-length lenses. Only in this range can correspondences be measured.
- by varying the focus setting in a short time sequence, the measurable depth range can be shifted so that only chains of movement increments of objects that have remained throughout the measurable range can be made plausible.
- Gestures are created by the movement of body parts. Direct measurement of motion requires no modeling, such as images of hands or joint models. If the movement of compact, for example fist-sized, objects is measured directly, modeling of, for example, a hand pose or joint models can be dispensed with. With a monocular camera system, the fist-sized object should preferably be moved transversely to the viewing direction of the sensor. Together with a suitable depth sensor, speeds toward or away from the sensor can also be measured directly. In both cases, neither hand poses have to be trained nor joint models with an essentially undisturbed environment taken into account.
- the derived gestures can be further plausibilized via the use of known methods such as inverse kinematics or template matching.
- the movement must have been triggered by a specific object-like grayscale distribution.
- fingers as well as artificial objects (gloves, markers) can serve as a basis.
- via "inverse kinematics", movement predictions can be made and the correspondence density distribution can thus be evaluated in a more targeted manner.
- the correspondence density distribution can also be better evaluated through simplified, for example planar, motion models such as the model of constant speed.
- An inverse kinematics method is known from CA 2 211 858 C.
- a template matching method is known from EP 1 203 344 B1. As an activation gesture for a corresponding image acquisition, a circle symbol can be selected, which is generated by an open or closed hand of the user within the detection area by a corresponding circular motion. Via the imaging detection of such a circle symbol, a circle center and a circle radius of this circle symbol can be detected and stored, for example in a memory of a control module. Subsequent symbols can then be detected as relevant for the control insofar as they occur within the circle area thus defined within the detection area, plus, if necessary, an additional surrounding area that can be preset via an enlarged tolerance radius around the circle center.
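The description does not prescribe how center and radius are computed; one possibility is an algebraic least-squares circle fit, sketched here with illustrative names:

```python
import numpy as np

def fit_circle(points):
    """points: (N, 2) array of trajectory coordinates.
    Solves a*x + b*y + c = x^2 + y^2 in the least-squares sense; the circle
    center is (a/2, b/2) and the radius is sqrt(c + |center|^2)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    center = np.array([a / 2, b / 2])
    radius = np.sqrt(c + center @ center)
    return center, radius
```

The stored center and radius, enlarged by the tolerance radius, can then bound the area within which subsequent symbols are treated as relevant for the control.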
- within the circular area, various sub-areas, such as circular sectors, can then be defined via the control; these can be actuated by the user analogously to keys of a keypad and can trigger various signals.
- dwelling in such a sub-area, or a defined change between predetermined sub-area sequences, can then be recognized as a signal for triggering a specific control sequence.
- other gestures that can be recognized after the activation gesture "circle symbol" include, for example, clockwise and counterclockwise rotation gestures, which can be processed, for example, to amplify or reduce a signal intensity, comparable to a volume control.
- the gesture recognition method described here can also be used separately from the movement path recognition method explained above, by using a corresponding control module, and is an independent component of the application.
- a method known from the prior art that deals with an optical flow may alternatively be used, for example the so-called KLT tracker described in Bruce D. Lucas and Takeo Kanade, "An Iterative Image Registration Technique with an Application to Stereo Vision", IJCAI, pages 674-679, 1981.
- methods known in the context of codec implementations may be used.
- the gesture recognition method can be designed so that it runs on a standard computer in real time.
- Model pixel movements according to claim 5 result in a gesture set that can be used for a variety of control tasks.
- the specification of an input area with an area specification gesture according to claim 6 makes it possible to define a sub-area within the detection area that can be detected, for example, with high resolution and used for detailed input purposes.
- the area specification gesture may be a circular motion. Further entries can then be made in the defined input area.
- Face recognition can identify a person in the environment of the movement. It can then be ensured that only certain people have access.
- the device may include a light source which is in signal communication with the camera sensor and / or the evaluation computer so that the light source, for example an exposure intensity or an exposure period, can be preset by the camera sensor and / or the evaluation computer by appropriate control.
- an input field or a multiple input field can be used, for example, in a given input area.
- a number of input fields, for example in the form of a keyboard, can be generated by projection.
- the user can then trigger a defined control action or also make an input, for example a yes / no selection or a text input.
- FIG. 1 shows, very schematically, a device for carrying out a detection method.
- FIGS. 2 and 3 are snapshots of detection images reproducing the detection area at two consecutive detection times.
- FIG. 1 shows schematically a device 1 for carrying out a detection method.
- a movement path 2 of at least one moving object 3 within a detection area 4, which is shown in dashed lines in FIG. 1, can be detected.
- the path of a moving hand of the object 3 is shown in FIG. 1 using the example of a gesticulating user.
- the device 1 has a monocular camera sensor 5, which is a high-resolution CCD or CMOS camera with an optical attachment 6 capable of capturing a predetermined depth or depth range T of the detection area 4 with predetermined image sharpness.
- the camera sensor 5 is in signal connection with an evaluation computer 8.
- the latter is in signal connection, via a further signal line 9, with a device 10 to be controlled.
- the evaluation computer 8 and the device 10 to be controlled can be one and the same unit.
- the device 10 to be controlled may be a type of tablet PC equipped with components 5 and 8 for gesture recognition.
- the device 10 to be controlled may also be an external device with respect to the evaluation computer 8, for example a TV set or another consumer electronics device.
- a home automation device, such as a lighting system, a shutter control or a heating system, is an example of the device 10 to be controlled.
- the detection area 4 is imaged by the camera sensor 5. In this case, an acquisition image 11 reproducing the detection area 4 is generated in the camera sensor 5.
- the acquisition image 12 is generated by the camera sensor 5 a delay period later than the acquisition image 11.
- the two acquisition images 11 and 12 are digitized in real time or quasi real time and stored in the evaluation computer 8.
- in the evaluation computer 8, a determination and evaluation of correspondences of image areas of the acquisition images 11, 12 then takes place.
- the acquisition images 11 and 12 are compared with each other in the evaluation computer 8. A distribution density of image areas corresponding with respect to their change of position in the acquisition image is then determined.
- the delay period, i.e. the time interval between the detection times of the acquisition images 11 and 12, can be variable.
- the delay period can be in the range between 10 ms and 1 s.
- image areas are exemplified by small squares 13 to 22. These image areas may be individual pixels or groups of pixels.
- the procedure is as follows, in particular using the evaluation computer 8: first, the first captured acquisition image 11 is split into overlapping image parts.
- the acquisition image 11 is a digital image that is formed overall as an A x B pixel array.
- the integer values A and B which represent the numbers of pixels in the respective rows and columns of the array, are in the range between 500 and 10,000, for example.
- the overlapping image parts are then C x D pixel sub-arrays.
- the integer value C is significantly smaller than the value A, and the integer value D is significantly smaller than the value B.
- C and D may for example be in the range between 8 and 30.
- adjacent image parts, i.e. adjacent sub-arrays, have at least one pixel row or at least one pixel column in common.
- each of these image parts is assigned an image signature.
- this signature is a bit sequence which represents a brightness distribution and / or a color distribution within the image part.
- each image part is split into overlapping sub-image parts.
- the sub-image parts may be E x F pixel sub-arrays.
- the integer values E and F are smaller than the values C and D of the subpixel arrays.
- E and F may be in the range of 3 to 7.
- for each image part and each sub-image part, a mean gray value is determined by appropriate evaluation of the brightness and/or color values of the associated pixels with the aid of the evaluation computer 8.
- a tolerance deviation ε is specified.
- a difference is determined in each case between the determined mean sub-image gray value and the mean image-part gray value. If the resulting difference is smaller than -ε, the value 0 is assigned as the first sub-image signature value. If the difference lies between -ε and +ε, the value 1 is assigned as the second sub-image signature value. If the difference is greater than +ε, the value 2 is assigned as the third sub-image signature value.
- the image-part signature to be assigned to the respective image part is then the concatenation of the assigned sub-image signature values. With the allocation method explained above, the respective image-part signatures are determined for the two acquisition images 11 and 12. Subsequently, the image parts of the second acquisition image 12 are assigned to the image parts of the first acquisition image 11 having the same signature.
- this assignment yields 2D vectors, which can be understood as raw motion increments.
- these 2D vectors connect image parts, for example the image regions 13 to 22 of the two acquisition images 11, 12, with the same image signature. Image parts without associated 2D vectors are then discarded, so that the further evaluation is limited exclusively to the assigned image parts.
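A condensed sketch of the signature scheme and the resulting raw motion increments; the part sizes, strides and tolerance ε below are illustrative values, not taken from the claims:

```python
import numpy as np

EPS = 6.0  # tolerance deviation (epsilon) on gray values, assumed

def part_signature(part, sub=4):
    """Ternary signature of one image part: the mean gray value of each
    sub-part is compared against the mean of the whole part."""
    mean_part = part.mean()
    digits = []
    for i in range(0, part.shape[0] - sub + 1, sub):
        for j in range(0, part.shape[1] - sub + 1, sub):
            diff = part[i:i + sub, j:j + sub].mean() - mean_part
            digits.append(0 if diff < -EPS else (1 if diff <= EPS else 2))
    return tuple(digits)

def signatures(image, c=16, step=8):
    """Signatures of overlapping c x c image parts (adjacent parts share pixels)."""
    sig = {}
    for y in range(0, image.shape[0] - c + 1, step):
        for x in range(0, image.shape[1] - c + 1, step):
            sig.setdefault(part_signature(image[y:y + c, x:x + c]), []).append((x, y))
    return sig

def raw_increments(img1, img2):
    """2D vectors (raw motion increments) between same-signature parts."""
    s1, s2 = signatures(img1), signatures(img2)
    return [(np.array(q) - np.array(p), p)
            for key in s1.keys() & s2.keys()
            for p in s1[key] for q in s2[key]]
```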
- the 2D vectors in the environment of each remaining image part, in particular in a predefined pixel neighborhood, are compared, and the frequency of similar vectors in this neighborhood is determined. The result of this frequency determination is the distribution density of the image areas corresponding with respect to their positional change in the acquisition image.
- Motionless image parts have a vector length 0 in both dimensions and form a central element of the distribution density. Moving parts of the picture increase the frequency of discrete 2D vectors with a certain length and direction.
- the central element of the frequency distribution, including 2D vectors with a length below a given limit length, is subsequently rejected.
- if the camera is moving, it is alternatively possible to suppress 2D vectors which correspond to this camera movement within a predetermined tolerance range.
- the 2D vector swarm with maximum frequency is now selected, and its center point and extent in the second acquisition image 12 are calculated. This may be the hand 24.
- the selection can then be continued for the next most frequent 2D vector swarm, i.e. for at least one sub-swarm.
- one result of this sub-swarm selection can be, for example, the raindrop 23.
- a linear prediction of the respective swarm center in the next image can then take place for tracking this 2D vector swarm. This can improve the detection accuracy by suppressing interference from swarms overlapping each other in individual detection images.
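Building on the raw increments of the previous sketch, the swarm selection could be prototyped as follows (for brevity, "similar" vectors are binned by exact equality; all thresholds are assumptions): the central element is rejected, the maximum-frequency vector defines the swarm, mean and standard deviation of its positions give center and extent, and the center is predicted linearly into the next image.

```python
from collections import Counter
import numpy as np

def select_swarm(increments, min_len=2.0):
    """increments: list of (vector, position) pairs as in the sketch above."""
    moving = [((int(v[0]), int(v[1])), p) for v, p in increments
              if np.hypot(v[0], v[1]) >= min_len]   # reject the central element
    counts = Counter(v for v, _ in moving)
    best = counts.most_common(1)[0][0]              # maximum-frequency 2D vector
    positions = np.array([p for v, p in moving if v == best], dtype=float)
    center = positions.mean(axis=0)                 # mean position = swarm center
    extent = positions.std(axis=0)                  # position deviation = swarm size
    predicted_center = center + np.array(best)      # linear prediction for tracking
    return best, center, extent, predicted_center
```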
- FIG. 3 shows a typical (intermediate) result when evaluating the determined distribution density by a corresponding evaluation algorithm.
- in the correspondence determination for the image areas 19 to 22 assigned to the hand 24, both actually true correspondences (movement of the image areas 21 and 22) and actually false correspondences (movement of the image areas 19 and 20) have resulted.
- in FIGS. 2 and 3, together with other image areas assignable to the hand 24 which are not shown, there is an increased distribution density of image areas that correspond with the image areas 21 and 22 with respect to their positional change in the acquisition image 12.
- the result of the evaluation is an assignment of individual pixels from pixel groups evaluated with respect to their distribution density, with an associated motion increment between the acquisition images 11, 12, on the basis of the evaluated distribution density.
- the evaluation of the acquisition images 11 and 12 yields respectively assigned pixels for the objects "raindrop" and "hand", with the actual movement paths 2₂₃ for the raindrop 23 and 2₂₁ and 2₂₂ for the hand 24.
- the pixel movements of the assigned pixels 13, 21, 22 and the associated movement increments 2₂₃, 2₂₁ and 2₂₂ can then be evaluated.
- the determination of the distribution density takes place, as explained above, by detecting selected portions of the detection images 11, 12 that differ between the detection images 11, 12. In the region of the raindrop 23 and in the region of the hand 24, a higher-resolution determination and evaluation of correspondences of the image regions therefore takes place.
- methods of averaging and statistical methods are used.
- the determination and evaluation of correspondences can, of course, be carried out on the basis of a sequence of individual images of a larger number, for example using a sequence of three, four, five, six, eight, ten, twenty-five, fifty, one hundred or even more individual images.
- the recognition method makes it possible to detect the trajectories of several independent objects. These can also be more than two independent objects (for example, three, four, five, ten, or even more independent objects).
- a predefined depth range T, that is to say a range of predetermined distances within which objects, for example the user 3, are to be detected, can be defined.
- as a depth range, for example, a distance range from the camera sensor 5 between 0.5 m and 3 m, or between 1 m and 2.5 m, can be specified. A more tolerant or more specific specification of a depth range is also possible.
- the definition of the predetermined depth range can be done by means of a depth sensor. Techniques known under the keywords "Structured Light" and "TOF" (time-of-flight) can be used for this.
- a stereo camera can also be used.
- a light field can also be used, or ultrasonic or radar radiation.
- the depth of field of the optical attachment 6 can also be used to define the depth range T.
- autofocus techniques can also be used. As soon as the depth of the detected object 3, i.e. its distance from the camera sensor 5, is known with the aid of such a method, it is also possible, after detection of the movement path 2, to measure and indicate the speed of the moving object.
- the definition of the depth range can also be achieved by setting the illumination intensity of a light source 25 illuminating the detection area in relation to the exposure time during the imaging acquisition.
- the light source 25 is in signal connection, via a connection not shown, with the camera sensor 5 and/or the evaluation computer 8.
- a temporal variation of an illumination period during illumination with the light source 25 in relation to the exposure time of the camera sensor 5 during the imaging acquisition can also be used to define the depth range.
- the above-described trajectory recognition method can be used within a method of gesture recognition.
- model pixel movements or model object movements are provided as control symbols, and these model pixel movements are compared with the pixel movements which were evaluated by the movement path recognition method. Subsequently, the model pixel movement is identified as a selected control symbol, which has the greatest agreement with the evaluated pixel movement. Finally, a control action associated with the selected control icon is performed.
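Under the assumption that both the evaluated pixel movement and the model pixel movements are available as point sequences, the comparison could be sketched like this; the resampling, normalization and distance measure are illustrative choices, not the claimed method:

```python
import numpy as np

def normalize(traj, n=32):
    """Resample a trajectory to n points, center it and scale it to unit size."""
    traj = np.asarray(traj, dtype=float)
    t = np.linspace(0, 1, len(traj))
    ti = np.linspace(0, 1, n)
    res = np.column_stack([np.interp(ti, t, traj[:, 0]),
                           np.interp(ti, t, traj[:, 1])])
    res -= res.mean(axis=0)
    return res / (np.abs(res).max() or 1.0)

def best_gesture(observed, models):
    """models: dict mapping gesture name -> model pixel movement.
    Returns the model with the greatest agreement (smallest mean distance)."""
    obs = normalize(observed)
    return min(models, key=lambda name: np.mean(
        np.linalg.norm(obs - normalize(models[name]), axis=1)))
```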
- in the gesture recognition method, techniques known in the art as "template matching" and "inverse kinematics" may be used.
- the model pixel movements may include at least one of the following motion patterns:
- the control action may include predetermining an input area 26 within the entire detection area 4 by an area specification gesture.
- This range setting gesture may be performed, for example, by a circular motion of an open or closed hand.
- the person 3 can thereby define within the entire detection area 4 the input area 26, which is subsequently detected by the camera sensor 5 in high-resolution.
- the attachment optics 6 can be designed, for example, as a zoom lens.
- an input raster, for example a keyboard layout, can be projected into the input area 26.
- the user can then operate a keyboard projected into the detection area 4 with the projector device 27, which in turn is detected, recognized and evaluated by the camera sensor 5.
- the gesture recognition and subsequent gesture control can in particular work without distinguishing between different trajectory models for symbol gestures. This will be explained below with reference to another example:
- the associated circle-symbol gesture then represents a "point to unlock" gesture
- all 2D vectors in a neighborhood of the second-highest frequency of the vector distribution density describe a vector swarm, which can be characterized by the mean 2D vector lengths as well as by a mean value and a standard deviation of the positions of the respective swarm vectors in the subsequent image.
- the mean 2D vector lengths describe the movement increment
- the mean of the vector positions describes a center of the swarm
- the position standard deviations are a measure of the size of the swarm.
- the center of the detected circle trajectory is then used by the gesture control as the origin of a polar coordinate system in the acquisition image, having a center and a reference radius.
- this polar coordinate system is assigned eight sectors by the gesture control, which, as in cartography, can be assigned the cardinal directions N, NE, E, SE, S, SW, W and NW.
- an outer boundary ring with 1.5 times the reference radius is defined around the detected reference radius.
- if the swarm leaves this boundary ring, the gesture control interprets this as deactivation of the gesture.
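A compact sketch of this sector logic under the stated geometry; the angle convention (0° = "N", clockwise, image y-axis pointing down) and all names are assumptions:

```python
import numpy as np

SECTORS = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]

def classify(pos, center, ref_radius):
    """Map a swarm position to a compass sector, or None for deactivation."""
    d = np.asarray(pos, dtype=float) - np.asarray(center, dtype=float)
    if np.hypot(d[0], d[1]) > 1.5 * ref_radius:      # outside the boundary ring
        return None                                  # interpreted as deactivation
    angle = np.degrees(np.arctan2(d[0], -d[1])) % 360
    return SECTORS[int(((angle + 22.5) % 360) // 45)]
```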
- a rotation of the swarm clockwise can, for example, be interpreted as an increase of an intensity signal desired by the operator, and conversely a detected counterclockwise rotation of the swarm as a reduction of the desired intensity signal.
- a volume of a terminal to be operated via the gesture control can be controlled by corresponding rotational gestures.
- a specific signal can be triggered.
- a shift of the swarm into certain sectors can trigger associated signals. For example, by shifting the swarm into a particular sector and maintaining that position, a switching signal may be triggered. In this way, a control operation similar to that of a touchpad can be performed.
- the original, initializing circle-symbol gesture can therefore be used to define a type of keyboard in the room over which the user can trigger desired control signals.
- Each of the sectors discussed above may then represent a key of that keyboard.
- facial recognition may be performed prior to the comparison step as a prerequisite for performing the further steps of gesture recognition.
- a selection of the provided model pixel movements can take place.
- a profile of model pixel movements can be assigned to each user recognized via face recognition. User profiles can thus be specified.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
DE102014201313.5A DE102014201313A1 (en) | 2014-01-24 | 2014-01-24 | Method for detecting a movement path of at least one moving object within a detection area, method for gesture recognition using such a detection method, and device for carrying out such a detection method |
PCT/EP2015/050585 WO2015110331A1 (en) | 2014-01-24 | 2015-01-14 | Method for detecting a movement path of at least one moving object within a detection region, method for detecting gestures while using such a detection method, and device for carrying out such a detection method |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3097511A1 true EP3097511A1 (en) | 2016-11-30 |
Family
ID=52347334
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP15700309.6A Withdrawn EP3097511A1 (en) | 2014-01-24 | 2015-01-14 | Method for detecting a movement path of at least one moving object within a detection region, method for detecting gestures while using such a detection method, and device for carrying out such a detection method |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP3097511A1 (en) |
DE (1) | DE102014201313A1 (en) |
WO (1) | WO2015110331A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102016201704A1 (en) | 2016-02-04 | 2017-08-10 | Bayerische Motoren Werke Aktiengesellschaft | A gesture recognition apparatus and method for detecting a gesture of an occupant of a vehicle |
DE102017216065A1 (en) | 2017-09-12 | 2019-03-14 | Robert Bosch Gmbh | Method and device for evaluating pictures, operational assistance method and operating device |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4954962A (en) | 1988-09-06 | 1990-09-04 | Transitions Research Corporation | Visual navigation and obstacle avoidance structured light system |
US5889532A (en) | 1996-08-02 | 1999-03-30 | Avid Technology, Inc. | Control solutions for the resolution plane of inverse kinematic chains |
US6681034B1 (en) | 1999-07-15 | 2004-01-20 | Precise Biometrics | Method and system for fingerprint template matching |
US7120277B2 (en) * | 2001-05-17 | 2006-10-10 | Koninklijke Philips Electronics N.V. | Segmentation unit for and method of determining a second segment and image processing apparatus |
US9760214B2 (en) * | 2005-02-23 | 2017-09-12 | Zienon, Llc | Method and apparatus for data entry input |
JP5374220B2 (en) * | 2009-04-23 | 2013-12-25 | キヤノン株式会社 | Motion vector detection device, control method therefor, and imaging device |
EP2378310B1 (en) | 2010-04-15 | 2016-08-10 | Rockwell Automation Safety AG | Time of flight camera unit and optical surveillance system |
US20110299737A1 (en) * | 2010-06-04 | 2011-12-08 | Acer Incorporated | Vision-based hand movement recognition system and method thereof |
DE102011002577A1 (en) | 2011-01-12 | 2012-07-12 | 3Vi Gmbh | Remote control device for controlling a device based on a moving object and interface module for communication between modules of such a remote control device or between one of the modules and an external device |
JP2012253482A (en) * | 2011-06-01 | 2012-12-20 | Sony Corp | Image processing device, image processing method, recording medium, and program |
DE102011080702B3 (en) | 2011-08-09 | 2012-12-13 | 3Vi Gmbh | Object detection device for a vehicle, vehicle having such an object detection device |
US9625993B2 (en) * | 2012-01-11 | 2017-04-18 | Biosense Webster (Israel) Ltd. | Touch free operation of devices by use of depth sensors |
WO2013109609A2 (en) * | 2012-01-17 | 2013-07-25 | Leap Motion, Inc. | Enhanced contrast for object detection and characterization by optical imaging |
-
2014
- 2014-01-24 DE DE102014201313.5A patent/DE102014201313A1/en not_active Withdrawn
-
2015
- 2015-01-14 EP EP15700309.6A patent/EP3097511A1/en not_active Withdrawn
- 2015-01-14 WO PCT/EP2015/050585 patent/WO2015110331A1/en active Application Filing
Non-Patent Citations (2)
Title |
---|
None * |
See also references of WO2015110331A1 * |
Also Published As
Publication number | Publication date |
---|---|
DE102014201313A1 (en) | 2015-07-30 |
WO2015110331A1 (en) | 2015-07-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2344980B1 (en) | Device, method and computer program for detecting a gesture in an image, and said device, method and computer program for controlling a device | |
EP3642696B1 (en) | Method and device for detecting a user input on the basis of a gesture | |
EP2005361A1 (en) | Multi-sensorial hypothesis based object detector and object pursuer | |
DE102008016215A1 (en) | Information device operating unit | |
WO2009143542A2 (en) | Method for video analysis | |
DE102007013664A1 (en) | Tool e.g. blade, measuring and/or adjusting device, has rolling nut designed as roller ring transmission comprising set of roller-supported roller rings with variable upward gradient | |
DE102013217354A1 (en) | EDGE VIDEO TOOL AND INTERFACE WITH AUTOMATIC PARAMETERS ALTERNATIVES | |
WO2015110331A1 (en) | Method for detecting a movement path of at least one moving object within a detection region, method for detecting gestures while using such a detection method, and device for carrying out such a detection method | |
EP2887010A1 (en) | Method and device for three dimensional optical measurement of objects with a topometric measuring method and computer programme for same | |
DE102014106661B4 (en) | Switch operating device, mobile device and method for operating a switch by a non-tactile translation gesture | |
EP3642697B1 (en) | Method and device for detecting a user input on the basis of a gesture | |
DE102014224599A1 (en) | Method for operating an input device, input device | |
WO2020061605A2 (en) | Method for focusing a camera | |
DE10210926A1 (en) | Device for tracking at least one object in a scene | |
DE102013217347A1 (en) | USER INTERFACE FOR PARAMETER ADJUSTMENT FOR EDGE MEASUREMENT VIDEO TOOLS | |
EP3663800B1 (en) | Method for detecting objects using a 3d camera | |
DE102014224632A1 (en) | Method for operating an input device, input device | |
DE102008019795A1 (en) | Method for adapting an object model to a three-dimensional point cloud by correcting erroneous correspondences | |
WO2020043440A1 (en) | Directional estimation of an open space gesture | |
DE102019102423A1 (en) | Method for live annotation of sensor data | |
DE112019000857T5 (en) | Reference position setting method and apparatus for displaying a virtual image | |
DE102004050942B4 (en) | Bootstrap method for supervised teach-in of a pattern recognition system | |
EP3224955B1 (en) | Switch actuating device, mobile device, and method for actuating a switch by means of a non-tactile gesture | |
DE102022001208A1 (en) | Method for predicting trajectories of objects | |
DE102022207266A1 (en) | Apparatus and method for augmenting an image for self-supervised learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20160801 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20180503 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
|
18W | Application withdrawn |
Effective date: 20211209 |