DE102008060768A1 - Articulated object part e.g. pedestrian's hand-forearm region, classifying method for use in driver assistance process of vehicle, involves transforming image recording such that internal degree of freedom is considered in standard views


Info

Publication number
DE102008060768A1
Authority
DE
Germany
Prior art keywords
articulated
detected
freedom
image
normalized
Prior art date: 2008-12-05
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
DE200810060768
Other languages
German (de)
Inventor
Björn Dipl.-Ing.(FH) Barrois
Christian Dr.rer.nat. Wöhler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Daimler AG
Original Assignee
Daimler AG
Priority date: 2008-12-05 (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date: 2008-12-05
Publication date: 2009-09-10
Application filed by Daimler AG
Priority to DE200810060768
Publication of DE102008060768A1
Legal status: Withdrawn


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06K: RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00: Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/00335: Recognising movements or behaviour, e.g. recognition of gestures, dynamic facial expressions; lip-reading
    • G06K9/00355: Recognition of hand or arm movements, e.g. recognition of deaf sign language

Abstract

The invention relates to a method for classifying detected articulated objects and/or parts of an articulated object on the basis of at least one image recording, wherein a pose and internal degrees of freedom of the detected articulated object and/or part of the articulated object are determined. On the basis of pose information of the detected articulated object and/or part of the articulated object, image contents of the at least one image recording are transformed such that at least the internal degrees of freedom are taken into account in a normalized view of the detected articulated object and/or part of the articulated object.

Description

  • The invention relates to a method for classifying detected articulated objects and/or parts of an articulated object on the basis of at least one image recording, wherein a pose of the detected articulated object and/or part of the articulated object is determined.
  • DE 101 26 375 B4 discloses a method and a system for recognizing objects, in particular a biometric recognition method and system. A sequence of images of the object is recorded digitally and each image is converted into associated pixels, wherein at least two images of the object are taken from different perspectives. The pixels of the recorded images are transformed such that they can be represented in a common coordinate system. The transformed pixels are superimposed in the common coordinate system to form an unstructured total point set that corresponds to the object or to a normalized view of the object. Features of the object are extracted from the point set using a function that is sampled in accordance with the unstructured point set, and the extracted features are used to identify the object. For this purpose, the extracted features are classified into classes by comparing them with features determined in advance and stored in the classes.
  • In addition, DE 102 33 233 B4 discloses a method for detecting the semantic content of a moving body part, or for preparing a classification of a dynamic body-part gesture. A movement of the movable body part during a movement section between a first substantially stationary state and a second substantially stationary state is recorded by means of an image sensor, which emits an image-generator signal. In addition, a first number of points in time with substantially equal time intervals within the movement section is determined. The position, size and shape of the movable body part are captured at each of these points in time to obtain a first number of body images. Furthermore, a limited first raster serving as a detection field is adapted to the position, size and shape of the movable body part for a first point in time and a subsequent second point in time, such that the detection field substantially covers the entire movable body part at both points in time.
  • The invention is based on the object of specifying a simplified method and an improved apparatus for classifying detected articulated objects and/or parts of an articulated object.
  • According to the invention, this object is achieved, with regard to the method, by the features specified in claim 1 and, with regard to the device, by the features specified in claim 5. Advantageous further developments of the invention are the subject of the dependent claims.
  • The method according to the invention for classifying detected articulated objects and/or parts of an articulated object provides that pose information of the detected articulated object and/or part of the articulated object is determined by means of an image recording. According to the invention, on the basis of the determined pose information, image contents of at least one image recording are transformed such that at least the internal degrees of freedom are taken into account in a normalized view of the detected articulated object and/or part of the articulated object.
  • With regard to the method according to the invention, it should be noted that the normalized view is determined pose-independently, i.e. independently of pose parameters, in that determined internal degrees of freedom and different views of the articulated object and/or part of the articulated object are eliminated.
  • Articulated objects and/or parts of an object denote, in particular, objects and/or parts that are not rigid. These include, among others, humans, animals and/or robots.
  • The internal degrees of freedom as well as different views of the detected articulated object and/or part of the articulated object are transformed in such a way that they are eliminated when the normalized view is determined.
  • In particular, the pose information, e.g. a position of the detected articulated object and/or part of the articulated object, i.e. a three-dimensional position, is determined by means of methods known from the prior art. These methods are based in particular on geometric 3D models and use features of a scene, such as edges and/or 3D stereo points.
  • In particular, the classification method can be used profitably in an industrial environment to classify human body parts as parts of a detected articulated object, for example in order to distinguish them from machine parts of another articulated object, such as a robot.
  • The classification is performed particularly preferably in order to protect the health of a working person. For this purpose, for example, a hand-forearm region of a human is detected as part of the articulated object by means of an image recording unit, for example a camera.
  • The method according to the invention for classifying detected articulated objects and/or parts of an articulated object can also be applied in a driver assistance method of a vehicle, for example in order to detect pedestrians and to warn a driver of an impending collision.
  • Particularly advantageously, a classification approach depends only on a classification task or on the respective articulated object and/or part of the articulated object to be detected by a classifier.
  • The method according to the invention serves in particular to verify the articulated object and/or part of the articulated object detected by the image recording unit.
  • Embodiments of the invention are explained in more detail below with reference to a drawing.
  • The figure shows:
  • Fig. 1: schematically, a block diagram of an image recognition system for classifying detected articulated objects and/or parts of articulated objects.
  • Parts corresponding to one another are provided with the same reference numerals in the figure.
  • Fig. 1 shows an image recognition system 1. Image data of at least one image recording of a detected articulated object (not shown in detail) and/or of a moving part of the articulated object are input into the image recognition system 1 and fed to a digitization stage 2. In the digitization stage 2, the image recording is digitized and fed to a preprocessing stage 3.
  • In the preprocessing stage 3, a pose and/or internal degrees of freedom of the detected articulated object and/or part of the articulated object are determined from the image data by means of methods known from the prior art, which operate in particular on the basis of a geometric 3D model. Features of a scene of the image recording or of the image data, such as edges and/or 3D stereo points, are used to determine a position, i.e. a three-dimensional position, of the detected articulated object and/or part of the articulated object.
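  • As a minimal sketch of how such a coarse pose estimate might be obtained from 3D stereo points alone (an assumption; the patent names geometric 3D models and scene features such as edges and/or 3D stereo points, but specifies no algorithm), the dominant principal axis of the point cloud can approximate the longitudinal axis of an elongated part such as a forearm:

```python
import numpy as np

def estimate_part_pose(stereo_points_3d: np.ndarray):
    """Coarse pose of an elongated body part from an (N, 3) array of
    3D stereo points: centroid as position, dominant principal axis
    as orientation. Illustrative only; the function name and the
    PCA-based approach are not taken from the patent."""
    centroid = stereo_points_3d.mean(axis=0)
    centered = stereo_points_3d - centroid
    # Eigen-decomposition of the 3x3 covariance matrix; eigh returns
    # eigenvalues in ascending order, so the last eigenvector belongs
    # to the largest eigenvalue, i.e. the direction of greatest extent.
    _, eigvecs = np.linalg.eigh(np.cov(centered.T))
    axis = eigvecs[:, -1]
    return centroid, axis
```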
  • Internal degrees of freedom within the meaning of the invention are to be understood as meaning, in particular, the bending angles of the detected articulated object and/or part of the articulated object.
  • If, for example, a hand-forearm region of a human is detected in the image recording, the hand-forearm region has at least two internal degrees of freedom. In particular, the wrist has two internal degrees of freedom, since the hand can move at the wrist laterally to the left or right as well as up or down.
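  • As an illustration, the external pose and the two internal wrist degrees of freedom could be represented as follows (a hypothetical parameterization; the field names and the two-angle wrist model follow only the example in the text):

```python
from dataclasses import dataclass

@dataclass
class HandForearmPose:
    """Hypothetical pose record for a detected hand-forearm region."""
    position: tuple          # three-dimensional position (x, y, z)
    orientation_deg: float   # in-image orientation of the forearm axis
    wrist_yaw_deg: float     # internal DOF 1: lateral bend (left/right)
    wrist_pitch_deg: float   # internal DOF 2: vertical bend (up/down)

# In the normalized "rest position" described below, both internal
# bending angles are mapped to 0 degrees.
rest_pose = HandForearmPose((0.0, 0.0, 0.0), 0.0, 0.0, 0.0)
```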
  • The determined pose is subsequently used, for example, to cut out an image content of the image recording or of the image data corresponding to the 3D model projected into the image. The image section thus generated is then fed to a transformation stage 4.
  • In the transformation stage 4, the image content is advantageously transformed. According to the invention, the transformation is carried out such that the internal degrees of freedom of the detected articulated object and/or part of the articulated object are taken into account. In particular, a normalized view of the detected articulated object and/or part of the articulated object is determined on the basis of the transformation.
  • Different views of the detected articulated object and/or part of the articulated object are also advantageously taken into account in the transformation stage 4.
  • If the detected articulated object and/or part of the articulated object has a pose in the image recording in which at least one bending angle has a magnitude greater than 0°, this bending angle is in particular not shown in the normalized view. That is, the detected articulated object and/or part of the articulated object is shown in a so-called rest position, in which the bending angles, for example of the wrist of the detected hand-forearm region, are 0°.
  • If the detected articulated object and/or part of the articulated object is a human hand-forearm region, it is shown in the normalized view, i.e. after the transformation, such that, for example, the forearm appears on the left of the image and the hand on the right, preferably aligned horizontally in a line, even if the pose of the hand-forearm region detected by the image recording unit does not correspond to this.
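  • A simplified 2D sketch of such a normalizing transformation (an assumed implementation; the patent only states that image contents are transformed so that pose and internal degrees of freedom are eliminated) first removes the global in-image orientation and then the lateral wrist bend, using the HandForearmPose record sketched above:

```python
import cv2
import numpy as np

def normalize_view(patch: np.ndarray, pose: HandForearmPose,
                   wrist_xy: tuple) -> np.ndarray:
    """Warp an image section so that the forearm lies horizontally
    (hand to the right) and the wrist bending angle is 0 degrees."""
    h, w = patch.shape[:2]
    # 1) Rotate by the negative orientation angle about the patch
    #    center to bring the forearm axis to horizontal (the sign
    #    depends on the chosen angle convention).
    m_global = cv2.getRotationMatrix2D((w / 2.0, h / 2.0),
                                       -pose.orientation_deg, 1.0)
    upright = cv2.warpAffine(patch, m_global, (w, h))
    # 2) Undo the lateral wrist bend about the wrist position.
    #    Crude: the whole patch is rotated here; a real system would
    #    warp only the hand segment beyond the wrist.
    m_wrist = cv2.getRotationMatrix2D(wrist_xy, -pose.wrist_yaw_deg, 1.0)
    return cv2.warpAffine(upright, m_wrist, (w, h))
```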
  • In other words, the normalized view is particularly advantageously determined independently of the respective pose parameters of the detected articulated object and/or part of the articulated object.
  • After the normalized view of the detected articulated object and/or part of the articulated object has been determined, it is fed to a classifier 5. A classification is performed by means of the classifier 5 on the basis of the normalized view. For this purpose, the classifier 5 is trained in particular with normalized views in order to be able to distinguish normalized views of a large number of detected articulated objects and/or parts of articulated objects.
  • For example, the classification is to distinguish between a hand-forearm region of a human as a first part of a first articulated object and a moving part of a robot as a second part of a second articulated object. For this purpose, a normalized view of the hand-forearm region of the human as well as a normalized view of the moving part of the robot are preferably stored, on the basis of which the classification is performed, as sketched below.
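  • A sketch of such a classifier stage, under the assumption of a support vector machine trained on flattened normalized views (the patent does not name a classifier type, and the training data here is a random placeholder):

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder training set: flattened 64x64 normalized views,
# label 0 = human hand-forearm region, label 1 = moving robot part.
rng = np.random.default_rng(0)
X_train = rng.random((200, 64 * 64))
y_train = rng.integers(0, 2, 200)

clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

def classify(normalized_view: np.ndarray) -> str:
    """Assign a normalized view to one of the two stored classes."""
    label = clf.predict(normalized_view.reshape(1, -1))[0]
    return "human hand-forearm region" if label == 0 else "robot part"
```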
  • In a particularly advantageous manner, the method according to the invention profitably simplifies a classification task, since only a normalized view of the articulated object and/or part of the articulated object is ever provided to the classifier 5 for classification.
  • Particularly preferably, a result of an articulated object and/or part of an articulated object detected in an image recording is verified by means of the method according to the invention on the basis of the classification.
  • List of reference numerals:
    1 image recognition system
    2 digitization stage
    3 preprocessing stage
    4 transformation stage
    5 classifier
  • CITATIONS INCLUDED IN THE DESCRIPTION
  • This list of documents cited by the applicant was generated automatically and is included solely for the better information of the reader. The list is not part of the German patent or utility model application. The DPMA assumes no liability for any errors or omissions.
  • Cited patent literature
    • - DE 10126375 B4 [0002]
    • - DE 10233233 B4 [0003]

Claims (5)

  1. A method for classifying detected articulated objects and/or parts of the articulated object on the basis of at least one image recording, wherein a pose and internal degrees of freedom of the detected articulated object and/or part of the articulated object are determined, characterized in that, on the basis of pose information of the detected articulated object and/or part of the articulated object, image contents of the at least one image recording are transformed such that at least the internal degrees of freedom are taken into account in a normalized view of the detected articulated object and/or part of the articulated object.
  2. Method according to claim 1, characterized in that the pose as well as the internal degrees of freedom are determined on the basis of a 3D model.
  3. Method according to claim 1 or 2, characterized in that the normalized view of the detected articulated object and/or part of the articulated object is classified by means of a classifier (5).
  4. Method according to one of claims 1 to 3, characterized in that the classifier (5) is trained with normalized views of the articulated object and/or part of the articulated object.
  5. Device for classifying detected articulated objects and/or parts of the articulated object on the basis of at least one image recording generated by an image recording unit, by means of which a pose as well as internal degrees of freedom of the detected articulated object and/or part of the articulated object can be determined, characterized in that, on the basis of pose information of the detected articulated object and/or part of the articulated object, image contents of the at least one image recording can be transformed such that at least the internal degrees of freedom are taken into account in a normalized view of the detected articulated object and/or part of the articulated object.
DE200810060768 2008-12-05 2008-12-05 Articulated object part e.g. pedestrian's hand forearm region, classifying method for use in driver assistance process of vehicle, involves transforming image recording such that internal degree of freedom is considered in standard views Withdrawn DE102008060768A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
DE200810060768 DE102008060768A1 (en) 2008-12-05 2008-12-05 Articulated object part e.g. pedestrian's hand forearm region, classifying method for use in driver assistance process of vehicle, involves transforming image recording such that internal degree of freedom is considered in standard views

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DE200810060768 DE102008060768A1 (en) 2008-12-05 2008-12-05 Articulated object part e.g. pedestrian's hand forearm region, classifying method for use in driver assistance process of vehicle, involves transforming image recording such that internal degree of freedom is considered in standard views

Publications (1)

Publication Number Publication Date
DE102008060768A1 (en) 2009-09-10

Family

ID=40936448

Family Applications (1)

Application Number Title Priority Date Filing Date
DE200810060768 Withdrawn DE102008060768A1 (en) 2008-12-05 2008-12-05 Articulated object part e.g. pedestrian's hand forearm region, classifying method for use in driver assistance process of vehicle, involves transforming image recording such that internal degree of freedom is considered in standard views

Country Status (1)

Country Link
DE (1) DE102008060768A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306293A (en) * 2011-07-29 2012-01-04 南京多伦科技有限公司 Method for judging driver exam in actual road based on facial image identification technology
US9586585B2 (en) 2014-11-20 2017-03-07 Toyota Motor Engineering & Manufacturing North America, Inc. Autonomous vehicle detection of and response to traffic officer presence
DE102012025320B4 (en) 2012-12-22 2019-04-04 Audi Ag Method for controlling an electrical device by detecting and evaluating a non-contact manual operation input of a hand of an operator as well as suitable control device and vehicle

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10126375B4 (en) 2001-05-30 2004-03-25 Humanscan Gmbh Object detection method and system
DE10233233B4 (en) 2002-07-22 2005-04-28 Univ Muenchen Tech Detection of movements (dynamic gestures) for non-contact and soundless interaction with technical systems


Similar Documents

Publication Publication Date Title
US10234957B2 (en) Information processing device and method, program and recording medium for identifying a gesture of a person from captured image data
Calandra et al. The feeling of success: Does touch sensing help predict grasp outcomes?
US9589177B2 (en) Enhanced face detection using depth information
US9235269B2 (en) System and method for manipulating user interface in vehicle using finger valleys
JP4625074B2 (en) Sign-based human-machine interaction
CN101511550B (en) Method for observation of person in industrial environment
JP5041458B2 (en) Device for detecting three-dimensional objects
US8315455B2 (en) Robot system, robot control device and method for controlling robot
US20150049195A1 (en) Image processing unit, object detection method, object detection program, and vehicle control system
KR20190016143A (en) Slam on a mobile device
DE102006048163B4 (en) Camera-based monitoring of moving machines and / or moving machine elements for collision prevention
JP4226623B2 (en) Work picking device
JP5726125B2 (en) Method and system for detecting an object in a depth image
US7418112B2 (en) Pedestrian detection apparatus
KR20130043222A (en) Gesture recognition system for tv control
US6838980B2 (en) Camera-based precrash detection system
JP4928571B2 (en) How to train a stereo detector
JP4899424B2 (en) Object detection device
JP4004899B2 (en) Article position / orientation detection apparatus and article removal apparatus
JP5812599B2 (en) information processing method and apparatus
JP4653606B2 (en) Image recognition apparatus, method and program
JP6305171B2 (en) How to detect objects in a scene
JP5201411B2 (en) Bulk picking device and control method thereof
JP4612635B2 (en) Moving object detection using computer vision adaptable to low illumination depth
KR20150083581A (en) Apparatus and method for multiple armas and hands detection and traking using 3d image

Legal Events

Date Code Title Description
OAV Applicant agreed to the publication of the unexamined application pursuant to § 31(2) no. 1
R119 Application deemed withdrawn, or IP right lapsed, due to non-payment of renewal fee