WO2017133453A1 - Method and system for tracking moving body - Google Patents
- Publication number
- WO2017133453A1 (PCT/CN2017/071510)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- human body
- person
- sound
- camera
- tracking
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/48—Matching video sequences
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- the present invention relates to the field of automation technologies, and in particular, to a method and system for tracking a moving human body.
- the related technologies of human follow-up research mainly include three aspects: the detection of the followed person, the tracking of the followed person, and the obstacle avoidance of the robot in the following process.
- Some systems use RGBD sensors (such as Kinect, Xtion and Orbbec) to build human-following mobile robot control systems; in addition, specially equipped rooms are widely used to identify the target person, where an intelligent environment detector senses the robot's surroundings to recognize and stably follow the human body.
- The University of Tokyo combined multiple laser distance sensors into a system that recognizes human legs and tracks pedestrians, or set up three laser distance sensors to detect a person's legs, upper body and head respectively, thereby tracking the human body; however, these devices are fixed in place.
- the above methods have drawbacks in practical applications.
- the RGBD sensor has the following disadvantages: 1) the target person must not be occluded; 2) it is not well suited to mobile platforms; 3) a specially equipped room is expensive and limits the robot's range of motion.
- Although the laser ranging sensor measures over a wide angle, if it is used to identify a person's legs, the robot can hardly determine which two legs belong to the target person, and the method also fails for a person wearing a skirt.
- The invention provides a moving human body tracking method and system that combine sound localization, the frame difference method and human body detection to effectively detect the position of the followed person within the field of view of the camera on an autonomous mobile robot platform;
- vision-based moving object tracking methods such as the optical flow method, the particle filter and the Kalman filter then track the followed person's motion, which alleviates occlusion of the followed person to a certain extent and ensures that the robot keeps tracking the target;
- an ordinary camera is used as the following sensor, which reduces system cost and avoids the expense of other sensors.
- the technical solution of the present invention provides a method for tracking a moving human body, comprising the following steps:
- S101 The system collects time information of the sound signal and the sound reaching the respective positions, and sends the time information to the central controller;
- S104 determining whether the sound source is located in the imaging range of the camera, and if so, then turning to S106;
- S106 The central controller turns on the color camera for video capture;
- S108 determining whether the active area meets the requirements, if it is less than the lower threshold, then go to S101, if it is greater than the upper threshold, then go to S106;
- the human body detector determines whether the human body is detected according to the human body detection classifier obtained by offline training, and if not, then proceeds to S101;
- S112 determining whether the angle between the active area of the person and the system is matched with the angle of the sound source relative to the system, if less than the threshold, then moving to S101;
- S114 extract the human body features of the currently tracked person and train the target human body recognizer; the features include but are not limited to color, texture, edge contour, and size;
- S115 issue an action instruction according to the followed person's size and position in the current video frame;
- S120 perform human body recognition on the predicted active area of the tracked person by using the target human body recognizer;
- S121 determining whether the human body recognition is successful, if successful, then moving to S123;
- the autonomous mobile robot detects the sound signal through the sound sensor
- the five arrayed sound sensors are fixed and do not move with the gimbal movement.
- angle ⁇ is an angle between the sound source and the positive direction of the system
- the ⁇ value is positive in the clockwise direction and negative in the counterclockwise direction.
- In step S104, the determination of whether the sound source is located in the imaging range of the camera further comprises:
- the horizontal angle of view of the color camera is ⁇ ;
- if |β|<α/2-θ, the sound source is located within the imaging range of the camera; otherwise the sound source lies outside it;
- the ⁇ is a threshold to ensure that the sound source can be completely within the field of view of the color camera.
- In step S110, the human body detector determining, according to the features of the object, whether a human body is detected further includes:
- Whether or not the human body is detected is determined by the human body detection classifier.
- In step S115, issuing an action instruction according to the followed person's size and position in the current video frame further includes:
- the change in size of the motion region in the current video frame corresponds to the change in the tracked person's distance from the camera;
- the change of position in the current video frame corresponds to the change of the tracked person's azimuth angle relative to the positive direction of the system;
- the direction of motion of the followed person is determined from the change in size of the motion region and the change of position in the current video frame.
- In step S118, predicting the active area of the tracked person in the current frame further includes:
- the method for predicting a location includes a single tracking algorithm and a fusion algorithm
- the single tracking algorithm includes an optical flow method, a particle filter tracking algorithm, and a Kalman filter tracking algorithm;
- the fusion algorithm uses multiple tracking algorithms to improve the effectiveness of the algorithm.
- The technical solution of the present invention further provides a moving human body tracking system, comprising: a central controller unit, a sound sensor unit, a camera unit, a motion unit, and a pan/tilt, wherein
- the central controller unit is configured to analyze the sound signal, process the video information, control the rotation of the gimbal, calculate the position of the autonomous mobile robot and the motion track of the followed person, and issue a control command to the motion unit;
- the sound sensor unit is configured to receive the sound signal and transmit the sound information to the central controller unit;
- the camera unit is configured to obtain image information of an environment in which the autonomous mobile robot platform is located, and send an image signal to the central controller unit;
- the motion unit is configured to receive a control command and perform motion
- the pan/tilt rotates according to the command of the central control unit to adjust the camera shooting angle.
- the autonomous mobile robot is equipped with five sound sensors: four distributed around its periphery and one on top of the pan/tilt;
- the five arrayed sound sensors are fixed and do not move with the gimbal movement.
- the camera unit and the central controller unit are located in the head of the autonomous mobile robot;
- the pan/tilt can be rotated 360 degrees freely to ensure that the camera is at an appropriate angle
- the pan/tilt is mounted on top of the autonomous mobile robot.
- The technical solution of the invention combines sound localization, the frame difference method and human body detection to effectively detect the position of the followed person within the field of view of the camera on the autonomous mobile robot platform; vision-based moving object tracking methods such as the optical flow method or the particle filter then track the followed person's motion, which alleviates occlusion of the followed person to a certain extent and ensures that the robot keeps tracking the target; the motion of the autonomous mobile robot platform is controlled to follow the person; and an ordinary camera is used as the following sensor, which reduces system cost and avoids the expense of other sensors.
- FIG. 1 is a flowchart of a method for tracking a moving human body according to Embodiment 1 of the present invention;
- FIG. 2 is a structural diagram of a moving human body tracking system according to Embodiment 1 of the present invention.
- FIG. 1 is a flowchart of a method for tracking a moving human body according to Embodiment 1 of the present invention. As shown in Figure 1, the process includes the following steps:
- S101 The system collects time information of the sound signal and the sound reaching the respective positions, and sends the time information to the central controller.
- the autonomous mobile robot detects the sound signal through the sound sensor
- the five arrayed sound sensors are fixed and do not move with the gimbal movement.
- a signal from the sound sensor is received by the central control unit to identify whether it is a "follow command".
- the ⁇ value is positive in the clockwise direction and negative in the counterclockwise direction.
- Step S104 It is determined whether the sound source is located in the imaging range of the camera, and if yes, the process proceeds to step S106.
- the horizontal angle of view of the color camera is α;
- if |β|<α/2-θ, the sound source is located within the imaging range of the camera; otherwise it lies outside;
- the ⁇ is a threshold to ensure that the sound source can be completely within the field of view of the color camera.
- S106 The central controller controls to turn on the color camera for video capture.
- S107 analyzing three consecutive frames of images by using a three-frame difference method to obtain a motion region of the current frame.
- S108 It is determined whether the active area meets the requirement. If it is less than the lower threshold, the process proceeds to S101, and if it is greater than the upper threshold, the process proceeds to S106.
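As an illustration of S107/S108 (not the patent's implementation), a minimal three-frame difference over synthetic grayscale frames, followed by the area check against the two thresholds:

```python
import numpy as np

def three_frame_diff(f0, f1, f2, thresh=25):
    # Motion mask of the middle frame: AND of the two absolute
    # inter-frame differences, each binarised at `thresh`.
    d01 = np.abs(f1.astype(np.int16) - f0.astype(np.int16)) > thresh
    d12 = np.abs(f2.astype(np.int16) - f1.astype(np.int16)) > thresh
    return d01 & d12

def area_ok(mask, lower, upper):
    # S108: the motion area must lie between the two thresholds.
    return lower <= int(mask.sum()) <= upper

# Synthetic 8x8 frames with a 2x2 block shifting one pixel right:
f0 = np.zeros((8, 8), np.uint8)
f1 = f0.copy(); f1[3:5, 3:5] = 255
f2 = f0.copy(); f2[3:5, 4:6] = 255
mask = three_frame_diff(f0, f1, f2)
```

The AND of the two differences is what distinguishes the three-frame variant from the plain two-frame difference: it suppresses the "ghost" region a moving object leaves behind.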
- the human body detector determines whether the human body is detected according to the human body detection classifier obtained by offline training, and if not, then proceeds to S101.
- Whether or not the human body is detected is determined by the human body detection classifier.
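The patent leaves the classifier to offline training (HOG or HAAR features with SVM or Adaboost, per the description). A toy, illustrative stand-in: a single gradient-orientation histogram with a linear decision, far simpler than real HOG, with the trained classifier reduced to a (weights, bias) pair:

```python
import numpy as np

def hog_like_features(patch, nbins=9):
    # Toy descriptor: one global histogram of gradient orientations
    # weighted by gradient magnitude (real HOG adds cells, blocks and
    # overlapping block normalisation).
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi              # unsigned orientation
    bins = (ang / np.pi * nbins).astype(int).clip(0, nbins - 1)
    hist = np.zeros(nbins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    n = np.linalg.norm(hist)
    return hist / n if n > 0 else hist

def detect_human(patch, weights, bias):
    # Linear-SVM style decision; (weights, bias) stands in for the
    # classifier obtained by offline training.
    return float(hog_like_features(patch) @ weights + bias) > 0

patch = np.tile(np.arange(8), (8, 1)) * 30.0      # strong vertical edges
found = detect_human(patch, np.ones(9), -0.5)
```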
- S111 Obtain an active area of the tracked person.
- S112 determining whether the angle between the active area of the person and the system is matched with the angle of the sound source relative to the system, If it is less than the threshold, it goes to S101.
- S113 Determine an activity area of the tracked person.
- S114 Extract the human body features of the currently tracked person and train the target human body recognizer; the features include but are not limited to color, texture, edge contour, and size.
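As an illustrative sketch of one such feature (color only; the histogram size, match rule and threshold are assumptions, not the patent's):

```python
import numpy as np

def color_histogram(region, nbins=8):
    # Normalised per-channel colour histogram of the person's region;
    # texture, contour and size features would be appended similarly.
    hists = [np.histogram(region[..., c], bins=nbins, range=(0, 256))[0]
             for c in range(3)]
    h = np.concatenate(hists).astype(float)
    return h / h.sum()

def train_recognizer(region):
    # The "target human body recognizer" reduced to a reference
    # histogram of the followed person.
    return color_histogram(region)

def recognize(model, candidate, min_overlap=0.5):
    # Histogram intersection between the stored model and a candidate
    # region, as used when re-identifying the person in S120.
    return float(np.minimum(model, color_histogram(candidate)).sum()) >= min_overlap

rng = np.random.default_rng(0)
person = rng.integers(0, 256, size=(40, 20, 3), dtype=np.uint8)
model = train_recognizer(person)
```

S123 would refresh `model` from each successfully recognized frame, so the reference adapts to lighting and pose changes.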
- S115 Issue an action instruction according to the followed person's size and position in the current video frame.
- the change in size of the motion region in the current video frame corresponds to the change in the tracked person's distance from the camera;
- the change of position in the current video frame corresponds to the change of the tracked person's azimuth angle relative to the positive direction of the system;
- the direction of motion of the followed person is determined from the change in size of the motion region and the change of position in the current video frame.
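A hedged sketch of how such size and position changes might map to commands; the box format, tolerances and command names are invented for illustration:

```python
def action_command(prev_box, cur_box, frame_width,
                   size_tol=0.1, center_tol=0.1):
    # prev_box / cur_box are (x, y, w, h) of the tracked region.
    _, _, pw, ph = prev_box
    cx, _, cw, ch = cur_box
    commands = []
    # Distance cue: the region shrinking means the person moved away.
    ratio = (cw * ch) / float(pw * ph)
    if ratio < 1.0 - size_tol:
        commands.append("forward")     # catch up with the person
    elif ratio > 1.0 + size_tol:
        commands.append("stop")        # person is close; hold position
    # Azimuth cue: horizontal offset of the region centre.
    offset = (cx + cw / 2.0 - frame_width / 2.0) / frame_width
    if offset > center_tol:
        commands.append("turn_right")
    elif offset < -center_tol:
        commands.append("turn_left")
    return commands or ["keep"]
```

For example, a box that shrinks while staying centred yields `["forward"]`, and an unchanged box that drifts right yields `["turn_right"]`.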
- the method for predicting a location includes a single tracking algorithm and a fusion algorithm
- the single tracking algorithm includes an optical flow method, a particle filter tracking algorithm, and a Kalman filter tracking algorithm;
- the optical flow method estimates a velocity field of image positions from the gray-scale changes of the image sequence over time (t) and space (x, y);
- the particle filter tracking algorithm first extracts features from the motion region of the current video frame, then approximates the feature probability density function by a set of random samples propagated through the state space, and replaces the integral operation with the sample feature mean to obtain the minimum-variance state estimate, that is, the position of the followed person in the next video frame;
- the fusion algorithm uses multiple tracking algorithms to improve the effectiveness of the algorithm.
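For the Kalman branch of S118, a minimal constant-velocity predict step; the state layout and noise values here are assumptions for illustration:

```python
import numpy as np

def kalman_predict(x, P, dt=1.0, q=1e-2):
    # Predict step of a constant-velocity Kalman filter over the state
    # [px, py, vx, vy]; the update step (not shown) would fuse the
    # detected region centre back into the estimate.
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt          # position += velocity * dt
    Q = q * np.eye(4)               # process noise covariance
    return F @ x, F @ P @ F.T + Q

x = np.array([10.0, 20.0, 2.0, -1.0])   # centre (10, 20), velocity (2, -1)
x_pred, P_pred = kalman_predict(x, np.eye(4))
```

The predicted centre `x_pred[:2]` defines the search window in which S120 runs the human body recognizer.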
- S120 Perform human body recognition on the predicted tracking person active area by using the target human body recognizer.
- S121 Determine whether the human body recognition is successful, and if successful, turn to S123.
- S122 The robot stops moving and turns to S106.
- S123 Extract the human body features of the tracked person, update the human body recognizer, and turn to S115.
- FIG. 2 is a structural diagram of the moving human body tracking system according to the first embodiment of the present invention.
- the system includes: a central controller unit 201, a sound sensor unit 202, a camera unit 203, a motion unit 204, and a cloud platform 205, wherein
- the central controller unit is configured to analyze the sound signal, process the video information, control the rotation of the gimbal, calculate the position of the autonomous mobile robot and the motion track of the followed person, and issue a control command to the motion unit;
- the sound sensor unit is configured to receive the sound signal and transmit the sound information to the central controller unit;
- the camera unit is configured to obtain image information of an environment in which the autonomous mobile robot platform is located, and send an image signal to the central controller unit;
- the motion unit is configured to receive a control command and perform motion
- the pan/tilt rotates according to the command of the central control unit to adjust the camera shooting angle.
- the autonomous mobile robot is equipped with five sound sensors: four distributed around its periphery and one on top of the pan/tilt;
- the five arrayed sound sensors are fixed and do not move with the gimbal movement.
- the camera unit and the central controller unit are located in the head of the autonomous mobile robot;
- the pan/tilt can be rotated 360 degrees freely to ensure that the camera is at an appropriate angle
- the pan/tilt is mounted on top of the autonomous mobile robot.
- The technical solution of the invention combines sound localization, the frame difference method and human body detection to effectively detect the position of the followed person within the field of view of the camera on the autonomous mobile robot platform.
- The optical flow method or the particle filter is then used to track the followed person's motion, which alleviates occlusion of the followed person to a certain extent and ensures that the robot keeps tracking the target.
- An ordinary camera is used as the following sensor, which reduces system cost and avoids the expense of other sensors.
- Embodiments of the present invention can be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media containing computer-usable program code.
- These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Abstract
A method and system for tracking a moving body, comprising the following steps: receiving an audio signal; receiving a video signal; calculating the relative distance between an audio source and a system and the positive direction angle between the audio source and the system, determining whether the audio source is located in a photographing range of a camera; if a robot is stationary, employing a three-frame differencing algorithm to produce a movement area of a current frame, detecting whether a human body is present in the movement area, and issuing an action instruction on the basis of the size and position of the person being followed in a current video frame; if the robot is moving, predicting a movement area of the person being followed in the current frame, identifying the human body with respect to the predicted movement area, and issuing an action instruction on the basis of the size and position of the person being followed in the current video frame. The method and system effectively detect the position of the person being followed in the field of view of the camera of an autonomously moving robot platform, thus solving the problem of failed tracking when the body of the person being tracked is partly blocked, effectively implementing the tracking of movements of the person being followed, and reducing costs.
Description
The present application claims priority to Chinese Patent Application No. 201610073052.0, filed on Feb. 2, 2016, entitled "Method and System for Tracking a Moving Human Body", which is incorporated herein by reference in its entirety.
The present invention relates to the field of automation technologies, and in particular, to a method and system for tracking a moving human body.
In recent years, robotics, as a high technology, has gradually penetrated every aspect of our lives; from production workshops to hospitals, the role played by robots is immeasurable. Traditional industrial robots suit structured environments and repetitive tasks, whereas modern robots are expected to work alongside humans in the same unstructured spaces and environments, completing non-deterministic tasks online in real time. Contemporary robotics research has moved beyond fixed-point operation in structured environments toward autonomous operation in unstructured environments such as aerospace, interstellar exploration, military reconnaissance and attack, underwater and underground pipelines, disease inspection and treatment, and disaster relief. Traditional robots are multi-input, single-end-output systems, while modern robots are multi-input, multi-end-output systems; traditional robots remain far inferior to humans in dexterous operation, online perception, understanding of human behavior and abstract commands, cognition and decision-making, and cannot communicate with people efficiently. Future robots will work for humans in known or unknown environments that humans cannot or can hardly reach, and many of these functions rest on the robot's ability to recognize and follow a human body. Therefore, to meet people's growing needs and improve human-robot interaction, robotic human body recognition and following is a key problem that urgently needs to be solved.
The related technologies of human-following research mainly cover three aspects: detection of the followed person, tracking of the followed person, and obstacle avoidance by the robot while following. Many organizations worldwide study human body recognition and following methods for robots. Some use RGBD sensors (such as Kinect, Xtion and Orbbec) to build human-following mobile robot control systems; in addition, specially equipped rooms are widely used to identify the target person: an intelligent environment detector senses the robot's surroundings to recognize and stably follow the human body. In its experiments, the University of Tokyo combined multiple laser distance sensors into a system that recognizes human legs and tracks pedestrians, or set up three laser distance sensors to detect a person's legs, upper body and head respectively, thereby tracking the human body; however, these devices are fixed in place.
The above methods all have drawbacks in practical applications. The RGBD sensor has the following disadvantages: 1) the target person must not be occluded; 2) it is not well suited to mobile platforms; 3) a specially equipped room is expensive and limits the robot's range of motion. Although the laser ranging sensor measures over a wide angle, if it is used to identify a person's legs, the robot can hardly determine which two legs belong to the target person, and the method also fails for a person wearing a skirt.
Summary of the invention
The invention provides a moving human body tracking method and system. By combining sound localization, the frame difference method and human body detection, it can effectively detect the position of the followed person within the field of view of the camera on an autonomous mobile robot platform; vision-based moving object tracking methods such as the optical flow method, the particle filter and the Kalman filter then track the followed person's motion, which alleviates occlusion of the followed person to a certain extent and ensures that the robot keeps tracking the target; the motion of the autonomous mobile robot platform is controlled to follow the person; and an ordinary camera is used as the following sensor, which reduces system cost and avoids the expense of other sensors.
The technical solution of the present invention provides a method for tracking a moving human body, comprising the following steps:
S101: the system collects the sound signal and the time information of the sound reaching each sensor position, and sends them to the central controller;
S102: determine whether it is a "follow command"; if not, return to S101;
S103: calculate the relative distance between the sound source and the system and the angle β between the sound source and the positive direction of the system;
S104: determine whether the sound source is located in the imaging range of the camera; if so, go to S106;
S105: the pan/tilt rotates by the angle β;
S106: the central controller turns on the color camera for video capture;
S107: analyze three consecutive frames using the three-frame difference method to obtain the motion region of the current frame;
S108: determine whether the motion area meets the requirements; if it is less than the lower threshold, go to S101; if it is greater than the upper threshold, go to S106;
S109: extract the motion region in the current video frame that meets the requirements;
S110: the human body detector determines, using the human body detection classifier obtained by offline training, whether a human body is detected; if not, go to S101;
S111: obtain the active area of the tracked person;
S112: determine whether the angle of the person's active area relative to the system matches the angle of the sound source relative to the system; if it is less than the threshold, go to S101;
S113: determine the active area of the tracked person;
S114: extract the human body features of the currently tracked person and train the target human body recognizer; the features include but are not limited to color, texture, edge contour, and size;
S115: issue an action instruction according to the followed person's size and position in the current video frame;
S116: determine whether a "stop following" command has been received; if so, go to S124;
S117: the color camera performs video capture;
S118: predict the active area of the tracked person in the current frame;
S119: determine whether the prediction is successful; if it fails, go to S122;
S120: use the target human body recognizer to perform human body recognition on the predicted active area of the tracked person;
S121: determine whether the human body recognition is successful; if so, go to S123;
S122: the robot stops moving and goes to S106;
S123: extract the human body features of the tracked person, update the human body recognizer, and go to S115;
S124: end.
Further, the autonomous mobile robot detects the sound signal through sound sensors;
four sound sensors are distributed around the periphery of the autonomous mobile robot, and one sound sensor is located on top of the pan/tilt;
the five arrayed sound sensors are fixedly mounted and do not move with the pan/tilt.
Further, the angle β is the angle between the sound source and the positive direction of the system;
the β value is positive in the clockwise direction and negative in the counterclockwise direction.
进一步的,在步骤S104中,所述判断声源是否位于摄像头的摄像范围,若否,则转向S106,进一步包括:Further, in step S104, the determining whether the sound source is located in the imaging range of the camera, and if not, proceeding to S106, further comprising:
彩色摄像头的拍摄水平视场角为α;The horizontal angle of view of the color camera is α;
若|β|<α/2-θ,则声源位于摄像头的摄像范围;If |β|<α/2-θ, the sound source is located in the imaging range of the camera;
若|β|>=α/2-θ,则声源位于摄像头的摄像范围之外;If |β|>=α/2-θ, the sound source is outside the imaging range of the camera;
所述θ为阈值,保证声源可以完全位于彩色摄像头视野内。The θ is a threshold to ensure that the sound source can be completely within the field of view of the color camera.
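The field-of-view test above reduces to a single comparison. The Python sketch below illustrates it; the concrete values of α and θ in the example are assumptions for illustration, not values taken from the patent:

```python
def source_in_fov(beta_deg: float, alpha_deg: float, theta_deg: float) -> bool:
    """Return True when the sound source at bearing beta lies inside the
    camera's horizontal field of view, per the criterion |β| < α/2 - θ.

    beta_deg  -- bearing of the sound source relative to the system's
                 positive direction (clockwise positive, counterclockwise
                 negative)
    alpha_deg -- horizontal field-of-view angle of the color camera
    theta_deg -- margin threshold keeping the source fully inside the view
    """
    return abs(beta_deg) < alpha_deg / 2.0 - theta_deg

# Example with assumed alpha = 60 degrees and theta = 5 degrees:
print(source_in_fov(20.0, 60.0, 5.0))   # True  (20 < 25)
print(source_in_fov(-30.0, 60.0, 5.0))  # False (30 >= 25)
```

When the test fails, the method rotates the pan/tilt by β (step S105) before capturing video, so the source is brought into view.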
进一步的,在步骤S110中,所述人体检测器根据所述物体的特征值判断是否检测到人体,进一步包括:Further, in step S110, the human body detector determines whether the human body is detected according to the feature value of the object, and further includes:
采用HOG和HAAR特征或采用DPM模型,采取SVM学习方法或Adaboost学习方法,离线训练人体模型,生成人体检测分类器;Using HOG and HAAR features or a DPM model, the human body model is trained offline with an SVM or Adaboost learning method to generate a human body detection classifier;
由所述人体检测分类器判断是否检测到人体。Whether or not the human body is detected is determined by the human body detection classifier.
进一步的,在步骤S115中,所述根据被跟随人在当前视频帧的大小和位置发出行动指令,进一步包括:Further, in step S115, issuing a movement command according to the size and position of the followed person in the current video frame further includes:
当前视频帧中运动区域的大小变化对应被跟踪人距离摄像头的远近;The size change of the motion area in the current video frame corresponds to the distance of the tracked person from the camera;
位于当前视频帧中的位置变化对应被跟踪人位于系统正方向的方位角度变化;The change in position in the current video frame corresponds to the change in the azimuth angle of the tracked person in the positive direction of the system;
根据所述运动区域的大小变化和所述位于当前视频帧中的位置变化来判断被跟随人的运动方向。The direction of motion of the followed person is determined based on the change in the size of the motion region and the change in position in the current video frame.
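The mapping from bounding-box size and position changes to a movement decision can be sketched as follows. The command names and tolerance values are illustrative assumptions, not terms from the patent:

```python
def follow_command(size_prev, size_curr, cx_prev, cx_curr,
                   size_tol=0.1, cx_tol=10):
    """Derive movement commands from the change of the tracked person's
    bounding box between two frames.

    A growing box means the person moved toward the camera, a shrinking
    box means the person moved away; a horizontal shift of the box center
    means the person's bearing relative to the system changed.
    """
    cmds = []
    if size_curr > size_prev * (1 + size_tol):
        cmds.append("slow_down")      # person is closer to the camera
    elif size_curr < size_prev * (1 - size_tol):
        cmds.append("speed_up")       # person is farther from the camera
    if cx_curr - cx_prev > cx_tol:
        cmds.append("turn_right")     # person drifted right in the frame
    elif cx_prev - cx_curr > cx_tol:
        cmds.append("turn_left")      # person drifted left in the frame
    return cmds or ["keep_course"]
```

The tolerances suppress jitter from small frame-to-frame fluctuations of the detected region.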
进一步的,在步骤S118中,所述预测被跟踪人在当前帧中的活动区域,进一步包括:Further, in step S118, the predicting the active area of the tracked person in the current frame further includes:
根据上一帧中提取的被跟踪人人体特征进行预测;The prediction is made according to the human body features of the tracked person extracted from the previous frame;
所述预测位置的方法包括单一跟踪算法和融合算法;The method for predicting a location includes a single tracking algorithm and a fusion algorithm;
所述单一跟踪算法包括光流法,粒子滤波跟踪算法和卡尔曼滤波跟踪算法;The single tracking algorithm includes an optical flow method, a particle filter tracking algorithm, and a Kalman filter tracking algorithm;
所述融合算法为采用多种跟踪算法以提高算法的有效性。The fusion algorithm uses multiple tracking algorithms to improve the effectiveness of the algorithm.
本发明的技术方案还提供了一种运动人体跟踪系统,包括:中央控制器单元,声音传感器单元,摄像头单元,运动单元,云台,其中,The technical solution of the present invention further provides a moving human body tracking system, comprising: a central controller unit, a sound sensor unit, a camera unit, a motion unit, and a pan/tilt, wherein
中央控制器单元用于分析声音信号,处理视频信息,控制云台的旋转,计算自主移动机器人的位置和被跟随人的运动轨迹,并向运动单元发出控制命令;The central controller unit is configured to analyze the sound signal, process the video information, control the rotation of the gimbal, calculate the position of the autonomous mobile robot and the motion track of the followed person, and issue a control command to the motion unit;
声音传感器单元用于接收声音信号,并向中央控制器单元发送声音信息;The sound sensor unit is configured to receive the sound signal and transmit the sound information to the central controller unit;
摄像头单元用于获得自主移动机器人平台所处环境的图像信息,向中央控制器单元发送图像信号;The camera unit is configured to obtain image information of an environment in which the autonomous mobile robot platform is located, and send an image signal to the central controller unit;
运动单元用于接收控制命令,并进行运动;The motion unit is configured to receive a control command and perform motion;
云台根据中央控制单元的命令进行旋转,调整摄像头拍摄角度。The pan/tilt rotates according to the command of the central control unit to adjust the camera shooting angle.
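A minimal sketch of how the five units could cooperate in one control cycle; every class and method name here is illustrative, not taken from the patent:

```python
class SoundUnit:
    """Stub sound sensor unit: reports a sound bearing in degrees."""
    def read(self):
        return {"bearing": 15.0}

class CameraUnit:
    """Stub camera unit: returns the latest frame."""
    def capture(self):
        return "frame"

class MotionUnit:
    """Stub motion unit: records the control commands it receives."""
    def __init__(self):
        self.log = []
    def execute(self, cmd, arg):
        self.log.append((cmd, arg))

class PanTilt:
    """Stub pan/tilt: accumulates rotation commands."""
    def __init__(self):
        self.angle = 0.0
    def rotate(self, deg):
        self.angle += deg

class CentralController:
    def __init__(self, sound_unit, camera_unit, motion_unit, pan_tilt):
        self.sound_unit = sound_unit
        self.camera_unit = camera_unit
        self.motion_unit = motion_unit
        self.pan_tilt = pan_tilt

    def step(self):
        # 1. Read sound and image information from the sensor units.
        sound = self.sound_unit.read()
        _frame = self.camera_unit.capture()  # video analysis would go here
        # 2. Decide the target bearing (placeholder: use the sound bearing).
        bearing = sound["bearing"]
        # 3. Rotate the pan/tilt toward the target and command the wheels.
        self.pan_tilt.rotate(bearing)
        self.motion_unit.execute("follow", bearing)
        return bearing
```

One `step` call corresponds to one pass of the sense-analyze-command loop the description attributes to the central controller unit.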
进一步的,自主移动机器人安装有5个声音传感器;Further, the autonomous mobile robot is equipped with five sound sensors;
4个声音传感器均布在自主移动机器人外围四周,一个声音传感器位于云台顶部;4 sound sensors are distributed around the periphery of the autonomous mobile robot, and a sound sensor is located at the top of the pan/tilt;
5个阵列式声音传感器均为固定式安装,不随云台运动而运动。The five arrayed sound sensors are fixed and do not move with the gimbal movement.
进一步的,摄像头单元和中央控制器单元位于自主移动机器人的云台;Further, the camera unit and the central controller unit are located on the pan/tilt of the autonomous mobile robot;
云台可360度自由旋转,保证摄像头处在合适的角度;The pan/tilt can be rotated 360 degrees freely to ensure that the camera is at an appropriate angle;
云台位于自主移动机器人的上方。The pan/tilt is located on top of the autonomous mobile robot.
本发明技术方案能够通过声音定位技术和帧差法技术和人体检测技术相结合,可以有效地检测到被跟随人在自主移动机器人平台摄像头视野中的位置;进而采用光流法或粒子滤波等基于视觉的运动物体跟踪方法实现对被跟随人运动的跟踪,在一定程度上解决了被跟随人被遮挡的问题,保证了机器人对目标的持续跟踪;并通过控制自主移动机器人平台运动实现对被跟随人的跟随;所采用普通摄像头作为跟随传感器,降低了系统成本,有效地解决了采用其他传感器成本昂贵的问题。By combining sound localization, the frame-difference method, and human body detection, the technical solution of the invention can effectively detect the position of the followed person in the field of view of the camera on the autonomous mobile robot platform; vision-based moving-object tracking methods such as optical flow or particle filtering then track the followed person's motion, which alleviates occlusion of the followed person to a certain extent and ensures that the robot keeps tracking the target; following is achieved by controlling the motion of the autonomous mobile robot platform; and using an ordinary camera as the following sensor lowers the system cost, avoiding the expense of other sensors.
本发明的其它特征和优点将在随后的说明书中阐述,并且,部分地从说明书中变得显而易见,或者通过实施本发明而了解。本发明的目的和其他优点可通过在所写的说明书、权利要求书、以及附图中所特别指出的结构来实现和获得。Other features and advantages of the invention will be set forth in the description that follows, will in part be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention may be realized and attained by the structure particularly pointed out in the written description, claims, and drawings.
下面通过附图和实施例,对本发明的技术方案做进一步的详细描述。The technical solution of the present invention will be further described in detail below through the accompanying drawings and embodiments.
附图用来提供对本发明的进一步理解,并且构成说明书的一部分,与本发明的实施例一起用于解释本发明,并不构成对本发明的限制。在附图中:The drawings provide a further understanding of the invention and constitute a part of the specification; together with the embodiments they serve to explain the invention and do not limit it. In the drawings:
图1为本发明实施例一中运动人体跟踪方法流程图;1 is a flowchart of a method for tracking a moving human body according to Embodiment 1 of the present invention;
图2为本发明实施例一中运动人体跟踪系统结构图。2 is a structural diagram of a moving human body tracking system according to Embodiment 1 of the present invention.
以下结合附图对本发明的优选实施例进行说明,应当理解,此处所描述的优选实施例仅用于说明和解释本发明,并不用于限定本发明。The preferred embodiments of the present invention are described below with reference to the accompanying drawings; it should be understood that the preferred embodiments described here serve only to illustrate and explain the invention, not to limit it.
图1为本发明实施例一中运动人体跟踪方法流程图。如图1所示,该流程包括以下步骤:FIG. 1 is a flowchart of a method for tracking a moving human body according to Embodiment 1 of the present invention. As shown in Figure 1, the process includes the following steps:
S101:系统采集声音信号和声音到达各自位置的时间信息,并发送给中央控制器。S101: The system collects the sound signals and the times at which the sound reaches the respective sensor positions, and sends this information to the central controller.
自主移动机器人通过声音传感器探测声音信号;The autonomous mobile robot detects the sound signal through the sound sensor;
4个声音传感器均布在自主移动机器人外围四周,1个声音传感器位于云台顶部;Four sound sensors are distributed around the periphery of the autonomous mobile robot, and one sound sensor is located at the top of the pan/tilt;
5个阵列式声音传感器均为固定式安装,不随云台运动而运动。The five arrayed sound sensors are fixedly mounted and do not move with the pan/tilt.
S102:判断是否为“跟随指令”,若否,则返回S101。S102: Determine whether it is a "follow command"; if not, return to S101.
由中央控制单元接收来自声音传感器的信号,识别是否为“跟随指令”。A signal from the sound sensor is received by the central control unit to identify if it is a "follow command."
S103:计算声源与系统的相对距离和声源与系统正方向的夹角β。S103: Calculate the relative distance between the sound source and the system and the angle β between the sound source and the positive direction of the system.
由中央控制单元接收来自声音传感器的信号,计算声源与系统的相对距离和声源与系统正方向的夹角β;Receiving a signal from the sound sensor by the central control unit, calculating a relative distance between the sound source and the system and an angle β between the sound source and the positive direction of the system;
顺时针方向时β值为正,逆时针方向时β值为负。The β value is positive in the clockwise direction and negative in the counterclockwise direction.
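Step S103 derives β from the arrival-time information collected in S101. The following Python sketch shows a minimal far-field estimate for a single microphone pair; the speed-of-sound constant and the plane-wave model are assumptions for illustration, and the patent's five-sensor array would combine several such pairs:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s; assumed room-temperature value

def bearing_from_tdoa(delta_t, mic_spacing):
    """Far-field bearing (degrees) from the arrival-time difference between
    two microphones separated by mic_spacing meters.

    For a plane wave the path difference is c*delta_t = d*sin(beta), so
    beta = asin(c*delta_t / d). A positive delta_t (the sound reaches the
    clockwise microphone first) yields a positive beta, matching the
    patent's sign convention for β.
    """
    x = SPEED_OF_SOUND * delta_t / mic_spacing
    x = max(-1.0, min(1.0, x))  # clamp against measurement noise
    return math.degrees(math.asin(x))
```

With the sensor spacing known, one call per microphone pair gives a bearing; the relative distance to the source would additionally require triangulating at least two such pairs.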
步骤S104:判断声源是否位于摄像头的摄像范围,若是,则转向步骤S106。Step S104: It is determined whether the sound source is located in the imaging range of the camera, and if yes, the process proceeds to step S106.
彩色摄像头的拍摄水平视场角为α;The horizontal field-of-view angle of the color camera is α;
若|β|<α/2-θ,则声源位于摄像头的摄像范围;If |β|<α/2-θ, the sound source is located in the imaging range of the camera;
若|β|>=α/2-θ,则声源位于摄像头的摄像范围之外;If |β|>=α/2-θ, the sound source is outside the imaging range of the camera;
所述θ为阈值,保证声源可以完全位于彩色摄像头视野内。The θ is a threshold to ensure that the sound source can be completely within the field of view of the color camera.
S105:云台转动β角度。S105: The pan/tilt rotates by β angle.
S106:中央控制器控制打开彩色摄像头进行视频采集。S106: The central controller turns on the color camera for video capture.
S107:采用三帧差法对连续三帧图像进行分析,得出当前帧的运动区域。S107: analyzing three consecutive frames of images by using a three-frame difference method to obtain a motion region of the current frame.
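The three-frame difference of S107 marks a pixel as moving only when it differs from both the previous and the next frame. A minimal pure-Python sketch on grayscale frames given as nested lists (the threshold value is an illustrative assumption):

```python
def three_frame_diff(f1, f2, f3, thresh=25):
    """Three-frame difference: a pixel belongs to the motion region of the
    middle frame f2 when BOTH |f2 - f1| and |f3 - f2| exceed the threshold.
    Requiring both differences suppresses the 'ghost' left behind by
    simple two-frame differencing.
    """
    h, w = len(f2), len(f2[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d1 = abs(f2[y][x] - f1[y][x])
            d2 = abs(f3[y][x] - f2[y][x])
            if d1 > thresh and d2 > thresh:
                mask[y][x] = 1  # moving pixel
    return mask
```

In practice the binary mask would then be filtered and grouped into connected regions before the size check of step S108.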
S108:判断活动区域是否符合要求,若小于下阈值则转向S101,若大于上阈值则转向S106。S108: It is determined whether the active area meets the requirement. If it is less than the lower threshold, the process proceeds to S101, and if it is greater than the upper threshold, the process proceeds to S106.
S109:提取当前视频帧中符合要求的运动区域。S109: Extract a motion area that meets the requirements in the current video frame.
S110:人体检测器根据离线训练得到的人体检测分类器判断是否检测到人体,若否,则转向S101。S110: The human body detector determines whether the human body is detected according to the human body detection classifier obtained by offline training, and if not, then proceeds to S101.
采用HOG和HAAR特征或采用DPM模型,采取SVM学习方法或Adaboost学习方法,离线训练人体模型,生成人体检测分类器;Using HOG and HAAR features or a DPM model, the human body model is trained offline with an SVM or Adaboost learning method to generate a human body detection classifier;
由所述人体检测分类器判断是否检测到人体。Whether or not the human body is detected is determined by the human body detection classifier.
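At run time the offline-trained classifier only has to score a feature vector. A minimal sketch of that decision stage, assuming a linear SVM; the weights and bias below are placeholders, since in practice they come from offline training on HOG/HAAR features as described:

```python
def linear_svm_is_human(features, weights, bias, margin=0.0):
    """Decision stage of an offline-trained linear SVM human detector:
    the candidate window is declared 'human' when w·x + b exceeds the
    margin. The weights and bias are produced by offline training (SVM
    or Adaboost in the patent); the values used in tests are illustrative.
    """
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > margin
```

A real detector would slide this decision over windows of the extracted motion region, with `features` being the HOG or HAAR descriptor of each window.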
S111:得到被跟踪人的活动区域。S111: Obtain an active area of the tracked person.
S112:判断人活动区域相对系统的夹角与声源相对系统夹角是否匹配,若小于阈值,则转向S101。S112: Determine whether the angle of the person's active area relative to the system matches the angle of the sound source relative to the system; if it is less than the threshold, return to S101.
S113:确定被跟踪人的活动区域。S113: Determine an activity area of the tracked person.
S114:提取当前被跟踪人的人体特征,训练人体跟踪器,所述人体特征包括但不限于颜色、纹理、边缘轮廓、尺寸。S114: Extract the human body features of the currently tracked person and train the human body tracker, the features including but not limited to color, texture, edge contour, and size.
S115:根据被跟随人在当前视频帧的大小和位置发出行动指令。S115: Issue a movement command according to the size and position of the followed person in the current video frame.
当前视频帧中运动区域的大小变化对应被跟踪人距离摄像头的远近;The size change of the motion area in the current video frame corresponds to the distance of the tracked person from the camera;
位于当前视频帧中的位置变化对应被跟踪人位于系统正方向的方位角度变化;The change in position in the current video frame corresponds to the change in the azimuth angle of the tracked person in the positive direction of the system;
根据所述运动区域的大小变化和所述位于当前视频帧中的位置变化来判断被跟随人的运动方向。The direction of motion of the followed person is determined based on the change in the size of the motion region and the change in position in the current video frame.
S116:判断是否收到“停止跟随”命令,若是,则转向S124。S116: It is judged whether the "stop following" command is received, and if yes, the process proceeds to S124.
S117:彩色摄像头进行视频采集。S117: Color camera for video capture.
S118:预测被跟踪人在当前帧中的活动区域。S118: predict the active area of the tracked person in the current frame.
根据上一帧中提取的被跟踪人人体特征进行预测;The prediction is made according to the human body features of the tracked person extracted from the previous frame;
所述预测位置的方法包括单一跟踪算法和融合算法;The method for predicting a location includes a single tracking algorithm and a fusion algorithm;
所述单一跟踪算法包括光流法,粒子滤波跟踪算法和卡尔曼滤波跟踪算法;The single tracking algorithm includes an optical flow method, a particle filter tracking algorithm, and a Kalman filter tracking algorithm;
所述光流法利用图像序列关于时间(t)与空间(x,y)的灰度变化来估计位置速度场;The optical flow method estimates the velocity field of positions from the grayscale changes of the image sequence over time (t) and space (x, y);
所述粒子滤波跟踪算法首先对提取到的当前视频帧中被跟随人的运动区域进行特征提取,随后通过寻找一组在状态空间传播的随机样本对特征概率密度函数进行近似,以样本特征均值代替积分运算,从而获得状态最小方差分布,即被跟随人在下一视频帧中的位置;The particle filter tracking algorithm first extracts features from the followed person's motion region in the current video frame, then approximates the feature probability density function with a set of random samples propagated through the state space, replacing the integral operation with the sample mean of the features, thereby obtaining the minimum-variance estimate of the state, i.e., the position of the followed person in the next video frame;
所述融合算法为采用多种跟踪算法以提高算法的有效性。The fusion algorithm uses multiple tracking algorithms to improve the effectiveness of the algorithm.
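The particle-filter prediction described above can be sketched for a 1-D position as one propagate-reweight-resample step; the motion and observation noise parameters are illustrative assumptions:

```python
import math
import random

def particle_filter_step(particles, weights, observation,
                         motion_std=2.0, obs_std=5.0):
    """One bootstrap particle-filter step for a 1-D target position:
    propagate particles with a random-walk motion model, reweight them by
    a Gaussian likelihood of the observed position, and resample.
    Returns (new_particles, weighted-mean position estimate).
    """
    # 1. Propagate: diffuse each particle with the motion model.
    moved = [p + random.gauss(0.0, motion_std) for p in particles]

    # 2. Reweight by how well each particle explains the observation.
    def likelihood(p):
        d = p - observation
        return math.exp(-d * d / (2.0 * obs_std ** 2))

    w = [wi * likelihood(p) for wi, p in zip(weights, moved)]
    total = sum(w) or 1.0
    w = [wi / total for wi in w]

    # 3. Estimate: the sample weighted mean replaces the integral,
    #    as the description notes.
    estimate = sum(wi * p for wi, p in zip(w, moved))

    # 4. Resample to avoid weight degeneracy.
    resampled = random.choices(moved, weights=w, k=len(moved))
    return resampled, estimate
```

In the tracking loop, `observation` would be the position suggested by the extracted features, and the estimate gives the predicted active area for the next frame.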
S119:判断是否预测成功,若预测失败则转向S122。S119: It is judged whether the prediction is successful, and if the prediction fails, the process proceeds to S122.
S120:使用目标人体识别器对预测跟踪人活动区域进行人体识别。S120: Perform human body recognition on the predicted active area of the tracked person by using the target human body recognizer.
S121:判断人体识别是否成功,若成功则转向S123。S121: Determine whether the human body recognition is successful, and if successful, turn to S123.
S122:机器人停止运动,并转向S106。S122: The robot stops moving and turns to S106.
S123:提取被跟踪人的人体特征,更新人体识别器,转向S115。S123: Extract the human body features of the tracked person, update the human body recognizer, and turn to S115.
S124:结束。S124: End.
为了实现上述运动人体跟踪方法,本实施例还提供了一种运动人体跟踪系统,图2为本发明实施例一中运动人体跟踪系统结构图。如图2所示,该系统包括:中央控制器单元201,声音传感器单元202,摄像头单元203,运动单元204,云台205,其中,To implement the above moving human body tracking method, this embodiment further provides a moving human body tracking system. FIG. 2 is a structural diagram of the moving human body tracking system according to Embodiment 1 of the present invention. As shown in FIG. 2, the system includes: a central controller unit 201, a sound sensor unit 202, a camera unit 203, a motion unit 204, and a pan/tilt 205, wherein
中央控制器单元用于分析声音信号,处理视频信息,控制云台的旋转,计算自主移动机器人的位置和被跟随人的运动轨迹,并向运动单元发出控制命令;The central controller unit is configured to analyze the sound signal, process the video information, control the rotation of the gimbal, calculate the position of the autonomous mobile robot and the motion track of the followed person, and issue a control command to the motion unit;
声音传感器单元用于接收声音信号,并向中央控制器单元发送声音信息;The sound sensor unit is configured to receive the sound signal and transmit the sound information to the central controller unit;
摄像头单元用于获得自主移动机器人平台所处环境的图像信息,向中央控制器单元发送图像信号;The camera unit is configured to obtain image information of an environment in which the autonomous mobile robot platform is located, and send an image signal to the central controller unit;
运动单元用于接收控制命令,并进行运动;The motion unit is configured to receive a control command and perform motion;
云台根据中央控制单元的命令进行旋转,调整摄像头拍摄角度。The pan/tilt rotates according to the command of the central control unit to adjust the camera shooting angle.
进一步的,自主移动机器人安装有5个声音传感器;Further, the autonomous mobile robot is equipped with five sound sensors;
4个声音传感器均布在自主移动机器人外围四周,1个声音传感器位于云台顶部;Four sound sensors are distributed around the periphery of the autonomous mobile robot, and one sound sensor is located at the top of the pan/tilt;
5个阵列式声音传感器均为固定式安装,不随云台运动而运动。The five arrayed sound sensors are fixed and do not move with the gimbal movement.
进一步的,摄像头单元和中央控制器单元位于自主移动机器人的云台;Further, the camera unit and the central controller unit are located on the pan/tilt of the autonomous mobile robot;
云台可360度自由旋转,保证摄像头处在合适的角度;The pan/tilt can be rotated 360 degrees freely to ensure that the camera is at an appropriate angle;
云台位于自主移动机器人的上方。The pan/tilt is located on top of the autonomous mobile robot.
本发明技术方案能够通过声音定位技术和帧差法技术和人体检测技术相结合,可以有效地检测到被跟随人在自主移动机器人平台摄像头视野中的位置;进而采用光流法或粒子滤波等基于视觉的运动物体跟踪方法实现对被跟随人运动的跟踪,在一定程度上解决了被跟随人被遮挡的问题,保证了机器人对目标的持续跟踪;并通过控制自主移动机器人平台运动实现对被跟随人的跟随;所采用普通摄像头作为跟随传感器,降低了系统成本,有效地解决了采用其他传感器成本昂贵的问题。By combining sound localization, the frame-difference method, and human body detection, the technical solution of the invention can effectively detect the position of the followed person in the field of view of the camera on the autonomous mobile robot platform; vision-based moving-object tracking methods such as optical flow or particle filtering then track the followed person's motion, which alleviates occlusion of the followed person to a certain extent and ensures that the robot keeps tracking the target; following is achieved by controlling the motion of the autonomous mobile robot platform; and using an ordinary camera as the following sensor lowers the system cost, avoiding the expense of other sensors.
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或自动化设备产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有自动化设备上实施的电子设备产品的形式。Those skilled in the art will appreciate that embodiments of the present invention can be provided as a method, a system, or an automated-device product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the invention may take the form of an electronic-device product implemented on one or more automated devices.
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由电子器件和计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些电子器件、计算机程序指令或电子设备器件到通用电子设备、专用电子设备、附属电子设备或其他类型的电子设备以产生一个自动化设备机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by electronic devices and computer program instructions. These electronic devices, computer program instructions, or electronic-device components can be provided to general-purpose electronic devices, special-purpose electronic devices, attached electronic devices, or other types of electronic devices to produce an automated machine, such that the instructions executed by the processor of the computer or other programmable data-processing device produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
这些电子器件、计算机程序指令或电子设备器件也可使用在能引导计算机或其他可编程数据处理设备以特定方式工作的自动化设备的可读存储器中,使得存储在该自动化设备可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。These electronic devices, computer program instructions, or electronic-device components can also be stored in a readable memory of an automated device capable of directing a computer or other programmable data-processing device to operate in a particular manner, such that the instructions stored in the readable memory of the automated device produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
这些电子器件、计算机程序指令或电子设备器件也可装载到自动化设备或其他可编程数据处理设备上,使得在自动化或其他可编程设备上执行一系列操作步骤以产生自动化的处理,从而在自动化设备或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。These electronic devices, computer program instructions, or electronic-device components can also be loaded onto an automated device or other programmable data-processing device, so that a series of operational steps are performed on the automated or other programmable device to produce an automated process, whereby the instructions executed on the automated or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。Obviously, those skilled in the art can make various changes and modifications to the invention without departing from its spirit and scope. Thus, if these modifications and variations fall within the scope of the claims of the invention and their equivalents, the invention is also intended to encompass them.
Claims (10)
- 一种运动人体跟踪方法,其特征在于,包括以下步骤:A method for tracking a moving human body, comprising the following steps:
S101:系统采集声音信号和声音到达各自位置的时间信息,并发送给中央控制器;S101: the system collects the sound signals and the times at which the sound reaches the respective sensor positions, and sends this information to the central controller;
S102:判断是否为“跟随指令”,若否,则返回S101;S102: determine whether it is a "follow command"; if not, return to S101;
S103:计算声源与系统的相对距离和声源与系统正方向的夹角β;S103: calculate the relative distance between the sound source and the system and the angle β between the sound source and the positive direction of the system;
S104:判断声源是否位于摄像头的摄像范围,若是,则转向S106;S104: determine whether the sound source is within the imaging range of the camera; if so, go to S106;
S105:云台转动β角度;S105: the pan/tilt rotates by the angle β;
S106:中央控制器控制打开彩色摄像头进行视频采集;S106: the central controller turns on the color camera for video capture;
S107:采用三帧差法对连续三帧图像进行分析,得出当前帧的运动区域;S107: analyze three consecutive frames with the three-frame difference method to obtain the motion region of the current frame;
S108:判断活动区域是否符合要求,若小于下阈值则转向S101,若大于上阈值则转向S106;S108: determine whether the active area meets the requirements; if smaller than the lower threshold, go to S101; if larger than the upper threshold, go to S106;
S109:提取当前视频帧中符合要求的运动区域;S109: extract the qualifying motion region in the current video frame;
S110:人体检测器根据离线训练得到的人体检测分类器判断是否检测到人体,若否,则转向S101;S110: the human body detector determines, using the offline-trained human body detection classifier, whether a human body is detected; if not, go to S101;
S111:得到被跟踪人的活动区域;S111: obtain the active area of the tracked person;
S112:判断人活动区域相对系统的夹角与声源相对系统夹角是否匹配,若小于阈值,则转向S101;S112: determine whether the angle of the person's active area relative to the system matches the angle of the sound source relative to the system; if it is less than the threshold, return to S101;
S113:确定被跟踪人的活动区域;S113: determine the active area of the tracked person;
S114:提取当前被跟踪人的人体特征,训练目标人体识别器,所述人体特征包括但不限于颜色、纹理、边缘轮廓、尺寸;S114: extract the human body features of the currently tracked person and train the target human body recognizer, the human body features including but not limited to color, texture, edge contour, and size;
S115:根据被跟随人在当前视频帧的大小和位置发出行动指令;S115: issue a movement command according to the size and position of the followed person in the current video frame;
S116:判断是否收到“停止跟随”命令,若是则转向S124;S116: determine whether a "stop following" command is received; if so, go to S124;
S117:彩色摄像头进行视频采集;S117: the color camera captures video;
S118:预测被跟踪人在当前帧中的活动区域;S118: predict the active area of the tracked person in the current frame;
S119:判断是否预测成功,若预测失败则转向S122;S119: determine whether the prediction is successful; if it fails, go to S122;
S120:使用目标人体识别器对预测跟踪人活动区域进行人体识别;S120: perform human body recognition on the predicted active area by using the target human body recognizer;
S121:判断人体识别是否成功,若成功则转向S123;S121: determine whether the human body recognition is successful; if so, go to S123;
S122:机器人停止运动,并转向S106;S122: the robot stops moving and goes to S106;
S123:提取被跟踪人的人体特征,更新人体识别器,转向S115;S123: extract the human body features of the tracked person, update the human body recognizer, and go to S115;
S124:结束。S124: End.
- 根据权利要求1所述的方法,其特征在于,进一步包括:The method of claim 1, further comprising:
自主移动机器人通过声音传感器探测声音信号;the autonomous mobile robot detects the sound signal through the sound sensors;
4个声音传感器均布在自主移动机器人外围四周,1个声音传感器位于云台顶部;four sound sensors are evenly distributed around the periphery of the autonomous mobile robot, and one sound sensor is located at the top of the pan/tilt;
5个阵列式声音传感器均为固定式安装,不随云台运动而运动。the five arrayed sound sensors are fixedly mounted and do not move with the pan/tilt.
- 根据权利要求1所述的方法,其特征在于,进一步包括:The method of claim 1, further comprising:
所述夹角β为声源与系统正方向的夹角;the angle β is the angle between the sound source and the positive direction of the system;
顺时针方向时β值为正,逆时针方向时β值为负。β is positive in the clockwise direction and negative in the counterclockwise direction.
- 根据权利要求1所述的方法,其特征在于,在步骤S104中,所述判断声源是否位于摄像头的摄像范围,若否,则转向S106,进一步包括:The method of claim 1, wherein in step S104, determining whether the sound source is within the imaging range of the camera, and if not, going to S106, further comprises:
彩色摄像头的拍摄水平视场角为α;the horizontal field-of-view angle of the color camera is α;
若|β|<α/2-θ,则声源位于摄像头的摄像范围;if |β|<α/2-θ, the sound source is within the imaging range of the camera;
若|β|>=α/2-θ,则声源位于摄像头的摄像范围之外;if |β|>=α/2-θ, the sound source is outside the imaging range of the camera;
所述θ为阈值,保证声源可以完全位于彩色摄像头视野内。θ is a threshold ensuring that the sound source lies completely within the field of view of the color camera.
- 根据权利要求1所述的方法,其特征在于,在步骤S110中,所述人体检测器根据所述物体的特征值判断是否检测到人体,进一步包括:The method of claim 1, wherein in step S110, the human body detector determining whether a human body is detected according to the feature values of the object further comprises:
采用HOG和HAAR特征或采用DPM模型,采取SVM学习方法或Adaboost学习方法,离线训练人体模型,生成人体检测分类器;using HOG and HAAR features or a DPM model, training the human body model offline with an SVM or Adaboost learning method to generate a human body detection classifier;
由所述人体检测分类器判断是否检测到人体。the human body detection classifier determines whether a human body is detected.
- 根据权利要求1所述的方法,其特征在于,在步骤S115中,所述根据被跟随人在当前视频帧的大小和位置发出行动指令,进一步包括:The method of claim 1, wherein in step S115, issuing a movement command according to the size and position of the followed person in the current video frame further comprises:
当前视频帧中运动区域的大小变化对应被跟踪人距离摄像头的远近;the size change of the motion region in the current video frame corresponds to the distance of the tracked person from the camera;
位于当前视频帧中的位置变化对应被跟踪人位于系统正方向的方位角度变化;the position change within the current video frame corresponds to the change of the tracked person's azimuth angle relative to the positive direction of the system;
根据所述运动区域的大小变化和所述位于当前视频帧中的位置变化来判断被跟随人的运动方向。the direction of motion of the followed person is determined from the size change of the motion region and the position change within the current video frame.
- 根据权利要求1所述的方法,其特征在于,在步骤S118中,所述预测被跟踪人在当前帧中的活动区域,进一步包括:The method of claim 1, wherein in step S118, predicting the active area of the tracked person in the current frame further comprises:
根据上一帧中提取的被跟踪人人体特征进行预测;predicting according to the human body features of the tracked person extracted from the previous frame;
所述预测位置的方法包括单一跟踪算法和融合算法;the position prediction methods include single tracking algorithms and fusion algorithms;
所述单一跟踪算法包括光流法,粒子滤波跟踪算法和卡尔曼滤波跟踪算法;the single tracking algorithms include the optical flow method, the particle filter tracking algorithm, and the Kalman filter tracking algorithm;
所述融合算法为采用多种跟踪算法以提高算法的有效性。the fusion algorithm combines multiple tracking algorithms to improve effectiveness.
- 一种运动人体跟踪系统,其特征在于,包括:中央控制器单元,声音传感器单元,摄像头单元,运动单元,云台,其中,A moving human body tracking system, comprising: a central controller unit, a sound sensor unit, a camera unit, a motion unit, and a pan/tilt, wherein
中央控制器单元用于分析声音信号,处理视频信息,控制云台的旋转,计算自主移动机器人的位置和被跟随人的运动轨迹,并向运动单元发出控制命令;the central controller unit is configured to analyze sound signals, process video information, control the rotation of the pan/tilt, calculate the position of the autonomous mobile robot and the motion track of the followed person, and issue control commands to the motion unit;
声音传感器单元用于接收声音信号,并向中央控制器单元发送声音信息;the sound sensor unit is configured to receive sound signals and send sound information to the central controller unit;
摄像头单元用于获得自主移动机器人平台所处环境的图像信息,向中央控制器单元发送图像信号;the camera unit is configured to obtain image information of the environment of the autonomous mobile robot platform and send image signals to the central controller unit;
运动单元用于接收控制命令,并进行运动;the motion unit is configured to receive control commands and move accordingly;
云台根据中央控制单元的命令进行旋转,调整摄像头拍摄角度。the pan/tilt rotates according to commands from the central control unit to adjust the camera's shooting angle.
- 根据权利要求8所述的系统,其特征在于,进一步包括:The system of claim 8, further comprising:
自主移动机器人安装有5个声音传感器;the autonomous mobile robot is equipped with five sound sensors;
4个声音传感器均布在自主移动机器人外围四周,一个声音传感器位于云台顶部;four sound sensors are evenly distributed around the periphery of the autonomous mobile robot, and one sound sensor is located at the top of the pan/tilt;
5个阵列式声音传感器均为固定式安装,不随云台运动而运动。the five arrayed sound sensors are fixedly mounted and do not move with the pan/tilt.
- 根据权利要求8所述的系统,其特征在于,进一步包括:The system of claim 8, further comprising:
摄像头单元和中央控制器单元位于自主移动机器人的云台;the camera unit and the central controller unit are located on the pan/tilt of the autonomous mobile robot;
云台可360度自由旋转,保证摄像头处在合适的角度;the pan/tilt can rotate freely through 360 degrees to ensure that the camera is at an appropriate angle;
云台位于自主移动机器人的上方。the pan/tilt is located on top of the autonomous mobile robot.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610073052.0 | 2016-02-02 | ||
CN201610073052.0A CN105760824B (en) | 2016-02-02 | 2016-02-02 | A kind of moving human hand tracking method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017133453A1 true WO2017133453A1 (en) | 2017-08-10 |
Family
ID=56329903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/071510 WO2017133453A1 (en) | 2016-02-02 | 2017-01-18 | Method and system for tracking moving body |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105760824B (en) |
WO (1) | WO2017133453A1 (en) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105760824B (en) * | 2016-02-02 | 2019-02-01 | 北京进化者机器人科技有限公司 | Moving human body tracking method and system |
CN106228576A (en) * | 2016-07-27 | 2016-12-14 | 潘燕 | System for processing images for target tracking |
CN106296731A (en) * | 2016-07-27 | 2017-01-04 | 潘燕 | Target vehicle video tracking system for complex scenes |
CN106295523A (en) * | 2016-08-01 | 2017-01-04 | 马平 | SVM-based pedestrian flow detection method for public places |
CN106886746B (en) * | 2016-12-27 | 2020-07-28 | 浙江宇视科技有限公司 | Identification method and back-end server |
CN106934380A (en) * | 2017-03-19 | 2017-07-07 | 北京工业大学 | Indoor pedestrian detection and tracking method based on HOG and MeanShift algorithms |
CN107816985B (en) * | 2017-10-31 | 2021-03-05 | 南京阿凡达机器人科技有限公司 | Human body detection device and method |
CN109992008A (en) * | 2017-12-29 | 2019-07-09 | 深圳市优必选科技有限公司 | Target following method and device for robot |
CN108737362B (en) * | 2018-03-21 | 2021-09-14 | 北京猎户星空科技有限公司 | Registration method, device, equipment and storage medium |
CN108762309B (en) * | 2018-05-03 | 2021-05-18 | 浙江工业大学 | Human body target following method based on hypothesis Kalman filtering |
CN110653812B (en) * | 2018-06-29 | 2021-06-04 | 深圳市优必选科技有限公司 | Interaction method of robot, robot and device with storage function |
CN111050271B (en) * | 2018-10-12 | 2021-01-29 | 北京微播视界科技有限公司 | Method and apparatus for processing audio signal |
CN109460031A (en) * | 2018-11-28 | 2019-03-12 | 科大智能机器人技术有限公司 | Tracking system for an automatic following cart based on human body recognition |
CN110309759A (en) * | 2019-06-26 | 2019-10-08 | 深圳市微纳集成电路与系统应用研究院 | Light source control method based on human body image identification |
CN110297472A (en) * | 2019-06-28 | 2019-10-01 | 上海商汤智能科技有限公司 | Device control method, terminal, controlled device, electronic equipment and storage medium |
CN110457884A (en) * | 2019-08-06 | 2019-11-15 | 北京云迹科技有限公司 | Target following method, device, robot and readable storage medium |
CN111650558B (en) * | 2020-04-24 | 2023-10-10 | 平安科技(深圳)有限公司 | Method, device and computer equipment for positioning sound source user |
CN111580049B (en) * | 2020-05-20 | 2023-07-14 | 陕西金蝌蚪智能科技有限公司 | Dynamic target sound source tracking and monitoring method and terminal equipment |
CN112261365A (en) * | 2020-10-19 | 2021-01-22 | 西北工业大学 | Self-contained underwater acousto-optic monitoring and recording device and recording method |
CN112487869B (en) * | 2020-11-06 | 2024-08-23 | 深圳优地科技有限公司 | Robot intersection passing method and device and intelligent equipment |
CN113238552A (en) * | 2021-04-28 | 2021-08-10 | 深圳优地科技有限公司 | Robot, robot movement method, robot movement device and computer-readable storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102348068A (en) * | 2011-08-03 | 2012-02-08 | 东北大学 | Head gesture control-based following remote visual system |
CN103984315A (en) * | 2014-05-15 | 2014-08-13 | 成都百威讯科技有限责任公司 | Domestic multifunctional intelligent robot |
US20150015674A1 (en) * | 2010-10-08 | 2015-01-15 | SoliDDD Corp. | Three-Dimensional Video Production System |
CN104299351A (en) * | 2014-10-22 | 2015-01-21 | 常州大学 | Intelligent early warning and fire extinguishing robot |
CN105094136A (en) * | 2015-09-14 | 2015-11-25 | 桂林电子科技大学 | Adaptive microphone array sound positioning rescue robot and using method thereof |
CN105760824A (en) * | 2016-02-02 | 2016-07-13 | 北京进化者机器人科技有限公司 | Moving body tracking method and system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105184214B (en) * | 2015-07-20 | 2019-02-01 | 北京进化者机器人科技有限公司 | Human body positioning method and system based on sound source localization and face detection |
CN105234940A (en) * | 2015-10-23 | 2016-01-13 | 上海思依暄机器人科技有限公司 | Robot and control method thereof |
- 2016-02-02: CN CN201610073052.0A patent/CN105760824B/en active Active
- 2017-01-18: WO PCT/CN2017/071510 patent/WO2017133453A1/en active Application Filing
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108872999B (en) * | 2018-04-28 | 2022-05-17 | 苏州赛腾精密电子股份有限公司 | Object identification method, device, identification equipment and storage medium |
CN108872999A (en) * | 2018-04-28 | 2018-11-23 | 苏州赛腾精密电子股份有限公司 | Object identification method, device, identification equipment and storage medium |
CN109711246A (en) * | 2018-09-30 | 2019-05-03 | 鲁东大学 | Dynamic object recognition method, computer device and readable storage medium |
CN109318243A (en) * | 2018-12-11 | 2019-02-12 | 珠海市微半导体有限公司 | Sound source tracking system and method of vision robot, and cleaning robot |
CN109318243B (en) * | 2018-12-11 | 2023-07-07 | 珠海一微半导体股份有限公司 | Sound source tracking system and method of vision robot and cleaning robot |
CN111028267A (en) * | 2019-12-25 | 2020-04-17 | 郑州大学 | Monocular vision following system and following method for mobile robot |
CN111028267B (en) * | 2019-12-25 | 2023-04-28 | 郑州大学 | Monocular vision following system and method for mobile robot |
CN111127799A (en) * | 2020-01-20 | 2020-05-08 | 南通围界盾智能科技有限公司 | Tracking alarm detector and tracking method of detector |
CN112578909A (en) * | 2020-12-15 | 2021-03-30 | 北京百度网讯科技有限公司 | Equipment interaction method and device |
CN112578909B (en) * | 2020-12-15 | 2024-05-31 | 北京百度网讯科技有限公司 | Method and device for equipment interaction |
CN112530267A (en) * | 2020-12-17 | 2021-03-19 | 河北工业大学 | Intelligent mechanical arm teaching method based on computer vision and application |
CN113516481A (en) * | 2021-08-20 | 2021-10-19 | 支付宝(杭州)信息技术有限公司 | Method and device for confirming brushing intention and brushing equipment |
CN113516481B (en) * | 2021-08-20 | 2024-05-14 | 支付宝(杭州)信息技术有限公司 | Face brushing willingness confirmation method and device and face brushing equipment |
CN113984763B (en) * | 2021-10-28 | 2024-03-26 | 内蒙古大学 | Insect repellent efficacy experimental device and method based on visual recognition |
CN113984763A (en) * | 2021-10-28 | 2022-01-28 | 内蒙古大学 | Insect repellent efficacy experimental device and method based on visual recognition |
CN114972436B (en) * | 2022-06-13 | 2024-02-23 | 西安交通大学 | Motion abrasive particle detection tracking method and system based on time-space domain combined information |
CN114972436A (en) * | 2022-06-13 | 2022-08-30 | 西安交通大学 | Method and system for detecting and tracking moving abrasive particles based on time-space domain joint information |
CN116501892A (en) * | 2023-05-06 | 2023-07-28 | 广州番禺职业技术学院 | Training knowledge graph construction method based on automatic following system of Internet of things |
CN116501892B (en) * | 2023-05-06 | 2024-03-29 | 广州番禺职业技术学院 | Training knowledge graph construction method based on automatic following system of Internet of things |
CN118655767A (en) * | 2024-08-19 | 2024-09-17 | 安徽大学 | Tracking control method for mobile robot guided by sound source information |
Also Published As
Publication number | Publication date |
---|---|
CN105760824A (en) | 2016-07-13 |
CN105760824B (en) | 2019-02-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017133453A1 (en) | Method and system for tracking moving body | |
US9875648B2 (en) | Methods and systems for reducing false alarms in a robotic device by sensor fusion | |
Linder et al. | On multi-modal people tracking from mobile platforms in very crowded and dynamic environments | |
US10402984B2 (en) | Monitoring | |
Leigh et al. | Person tracking and following with 2D laser scanners |
Yuan et al. | Multisensor information fusion for people tracking with a mobile robot: A particle filtering approach | |
WO2018111920A1 (en) | System and method for semantic simultaneous localization and mapping of static and dynamic objects | |
Vadakkepat et al. | Improved particle filter in sensor fusion for tracking randomly moving object | |
Derry et al. | Automated doorway detection for assistive shared-control wheelchairs | |
CN108724178B (en) | Method and device for autonomous following of specific person, robot, device and storage medium | |
Monajjemi et al. | UAV, do you see me? Establishing mutual attention between an uninstrumented human and an outdoor UAV in flight | |
Xiao et al. | Human tracking from single RGB-D camera using online learning | |
Wu et al. | Vision-based target detection and tracking system for a quadcopter | |
Ajmera et al. | Autonomous UAV-based target search, tracking and following using reinforcement learning and YOLOFlow | |
Vidal et al. | Slam solution based on particle filter with outliers filtering in dynamic environments | |
Pérez-Cutiño et al. | Event-based human intrusion detection in UAS using deep learning | |
Ciliberto et al. | A heteroscedastic approach to independent motion detection for actuated visual sensors | |
Chaudhary et al. | Controlling a swarm of unmanned aerial vehicles using full-body k-nearest neighbor based action classifier | |
Macesanu et al. | A time-delay control approach for a stereo vision based human-machine interaction system | |
Ishikoori et al. | Semantic position recognition and visual landmark detection with invariant for human effect | |
Nasti et al. | Obstacle avoidance during robot navigation in dynamic environment using fuzzy controller | |
Rizvi et al. | Human detection and localization in indoor disaster environments using UAVs |
Das et al. | Vision based object tracking by mobile robot | |
Tarmizi et al. | Latest trend in person following robot control algorithm: A review | |
Zhang et al. | Research on Visual Image Processing of Mobile Robot Based on OpenCV |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17746773 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 06/12/2018) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 17746773 Country of ref document: EP Kind code of ref document: A1 |