WO2017143949A1 - Robot Monitoring System Based on Human Body Information - Google Patents

Robot Monitoring System Based on Human Body Information

Info

Publication number
WO2017143949A1
WO2017143949A1 · PCT/CN2017/074048 · CN2017074048W
Authority
WO
WIPO (PCT)
Prior art keywords
image
human body
frame
body information
unit
Prior art date
Application number
PCT/CN2017/074048
Other languages
English (en)
French (fr)
Inventor
陈明修
Original Assignee
芋头科技(杭州)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 芋头科技(杭州)有限公司 filed Critical 芋头科技(杭州)有限公司
Priority to US15/999,670 (granted as US11158064B2)
Publication of WO2017143949A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Definitions

  • The invention belongs to the field of robot technology, and in particular relates to a robot monitoring system.
  • Mainstream robot monitoring currently relies on an image acquisition device that records 24 hours a day from a single angle. In most scenarios, users are only interested in the portions of the footage that contain valid information, while continuous monitoring produces a large number of static, useless recordings.
  • Such recordings not only make it inconvenient for users to find the footage they care about, but also occupy a large amount of storage space with useless static video.
  • In addition, single-angle monitoring cannot keep the monitored view following the content the user is interested in, and therefore cannot meet the user's monitoring requirements.
  • A robot monitoring system based on human body information includes:
  • an image acquisition unit for collecting images;
  • a human body detecting unit, connected to the image acquisition unit and configured to determine whether the image contains human body information matching a training sample;
  • a target acquiring unit, connected to the human body detecting unit and configured to acquire position information and size information of the human body information in the image;
  • a target tracking unit, connected to the image acquisition unit and the target acquiring unit, which acquires a current motion region according to frame differences between predetermined image frames and acquires a moving target in the motion region;
  • an adjustment unit, coupled to the target tracking unit and the image acquisition unit, which adjusts the orientation of the image acquisition unit so that the moving target is located at the center of the image.
  • The target tracking unit includes:
  • a frame difference operation unit, which operates on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
  • a motion map acquiring unit, connected to the frame difference operation unit, which obtains a current motion map from the first difference image and the second difference image;
  • a moving target acquiring unit, connected to the motion map acquiring unit, which acquires the set of points in the motion map satisfying a set condition and obtains the moving target from that set.
  • The adjustment unit includes:
  • a judging unit configured to determine the distance of the moving target from the center of the image and to generate an adjustment signal when that distance is greater than a set threshold.
  • The above robot monitoring system further includes a storage unit, connected to the image acquisition unit, for storing the images.
  • The human body information is face information.
  • A robot monitoring method based on human body information is also provided, comprising the following steps:
  • Step 1: collect an image;
  • Step 2: determine whether the image contains human body information matching a training sample; if not, repeat Step 1;
  • Step 3: acquire the position information and size information of the human body information in the image;
  • Step 4: acquire a current motion region according to frame differences between predetermined image frames, and acquire a moving target in the motion region;
  • Step 5: adjust the orientation of the image acquisition unit so that the moving target is located at the center of the image.
  • Step 4 is specifically as follows:
  • Step 41: operate on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
  • Step 42: obtain a current motion map from the first difference image and the second difference image;
  • Step 43: acquire the set of points in the motion map satisfying a set condition, where each point represents a motion element, and find all connected motion elements as suspected moving targets;
  • Step 44: calculate a motion intensity value for each suspected moving target, equal to the number of its motion elements divided by the area of its bounding rectangle, and obtain all valid moving targets at the current time point from the motion intensity value and the area of each suspected moving target;
  • Step 45: among the valid moving targets, select the rectangular frame closest to the target object as the moving target.
  • The training sample is obtained by pre-training before Step 1.
  • When Steps 1 to 4 have been executed repeatedly for a certain duration, the video of the period between the time point at which Step 2 first passes and the time point at which Step 2 last fails is saved to a monitoring storage center.
  • The human body information uses face information.
  • The above technical solution tracks the moving target by detecting human body information, so that the monitored view always follows the position of the human body, effectively realizing human body positioning and tracking and monitoring the content information of interest.
  • FIG. 1 is a block diagram showing the structure of a system of the present invention
  • FIG. 2 is a structural block diagram of a target tracking unit of the present invention
  • FIG. 3 is a schematic flow chart of the method of the present invention.
  • A human body information based robot monitoring system includes:
  • an image acquisition unit 1 for collecting images;
  • a human body detecting unit 2, connected to the image acquisition unit 1 and configured to determine whether the image contains human body information matching a training sample;
  • a target acquiring unit 3, connected to the human body detecting unit 2 and configured to acquire position information and size information of the human body information in the image;
  • a target tracking unit 5, connected to the image acquisition unit 1 and the target acquiring unit 3, which acquires a current motion region according to frame differences between predetermined image frames and acquires a moving target in the motion region;
  • an adjustment unit 4, connected to the target tracking unit 5 and the image acquisition unit 1, which adjusts the orientation of the image acquisition unit 1 so that the moving target is located at the center of the image.
  • The target tracking unit 5 includes:
  • a frame difference operation unit 51, which operates on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
  • a motion map acquiring unit 52, connected to the frame difference operation unit 51, which obtains a current motion map from the first difference image and the second difference image;
  • a moving target acquiring unit 53, connected to the motion map acquiring unit 52, which acquires the set of points in the motion map satisfying a set condition and obtains the moving target from that set.
  • In the above robot monitoring system, the adjustment unit 4 includes:
  • a judging unit configured to determine the distance of the moving target from the center of the image and to generate an adjustment signal when that distance is greater than the set threshold.
  • The above robot monitoring system further includes a storage unit, connected to the image acquisition unit 1, for storing the images.
  • The human body information is face information.
  • A robot monitoring method based on human body information is also provided, comprising the following steps:
  • Step 1: collect an image;
  • Step 2: determine whether the image contains human body information matching a training sample; if not, repeat Step 1;
  • Step 3: acquire the position information and size information of the human body information in the image;
  • Step 4: acquire a current motion region according to frame differences between predetermined image frames, and acquire a moving target in the motion region;
  • Step 5: adjust the orientation of the image acquisition unit 1 so that the moving target is located at the center of the image.
  • The invention addresses the shortcomings of prior-art image acquisition devices that monitor 24 hours a day from a single angle.
  • By detecting human body information and tracking the person once human body information is detected, the monitored view always stays at the human body's position, effectively realizing human body positioning and tracking so that the content information of interest is monitored.
  • Step 4 is as follows:
  • Step 41: operate on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
  • Step 42: obtain a current motion map from the first difference image and the second difference image;
  • Step 43: acquire the set of points in the motion map satisfying a set condition, where each point represents a motion element, and find all connected motion elements as suspected moving targets;
  • Step 44: calculate a motion intensity value for each suspected moving target, equal to the number of its motion elements divided by the area of its bounding rectangle, and obtain all valid moving targets at the current time point from the motion intensity value and the area of each suspected moving target;
  • Step 45: among the valid moving targets, select the rectangular frame closest to the target object as the moving target.
  • A specific embodiment of the above target tracking method: the position information and size information obtained in Step 3 are used to initialize the tracking start information; the frame difference method is then applied. First, the image information of the current image frame and the two preceding image frames is acquired; a first difference image img1 is obtained by subtracting the third image frame from the first image frame, and a second difference image img2 by subtracting the third image frame from the second image frame; img1 and img2 are superimposed to obtain a motion map motionmap. Secondly, each nonzero point in motionmap represents a motion element; all connected motion elements are found, and each group of connected motion elements forms a suspected moving target.
  • If the area of a suspected target's rectangle is below a certain threshold, the region is not considered a motion region and is filtered out. At this point, all valid moving targets objs at the current time point have been obtained; among these targets, the rectangular frame closest to the target object is selected, and the procedure repeats from Step 1.
  • Step 5 is specifically as follows: determine whether the moving target obj is at the center of the image; if obj has moved more than a certain distance from the image center, the robot is adjusted by the same distance in the target object's direction of motion so that the center of the target object always remains at the center of the image.
  • In the above robot monitoring method, the training sample is obtained by pre-training before Step 1.
  • When the acquired image is judged to contain human body information matching the training sample, the position and size information of that human body information in the image is further acquired; if no human body information exists, detection of human body information continues.
  • When Steps 1 to 4 have been executed repeatedly for a certain duration, the video of the period between the time point at which Step 2 first passes and the time point at which Step 2 last fails is saved to a monitoring storage center.
  • Specifically, the time point at which the Step 2 judgment first passes and the time point at which it last fails are selected and extended forwards and backwards by 5 s to 10 s, respectively, and the video of this period is saved to the monitoring storage center.
  • A further alternative is to decide, based on motion information detection, whether the current state requires a surveillance video to be recorded.
  • In the above robot monitoring method, face information is used as the human body information.
  • The above technical solution can be used in a home intelligent robot, which detects and tracks human body information to judge whether a given time point is a suitable moment for monitoring recording; it can also be used by outdoor surveillance camera manufacturers to alert users to the video segments of interest to them while still recording information 24 hours a day.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Manipulator (AREA)

Abstract

The invention belongs to the field of robot technology, and in particular relates to a robot monitoring system. The robot monitoring system based on human body information includes: an image acquisition unit for collecting images; a human body detecting unit for determining whether the image contains human body information matching a training sample; a target acquiring unit for acquiring the position information and size information of the human body information in the image; a target tracking unit, which acquires a current motion region according to frame differences between predetermined image frames and acquires a moving target in the motion region; and an adjustment unit, which adjusts the orientation of the image acquisition unit so that the moving target is located at the center of the image. This technical solution tracks the moving target by detecting human body information, so that the monitored view always follows the position of the human body, effectively realizing human body positioning and tracking and monitoring the content information of interest.

Description

Robot Monitoring System Based on Human Body Information
Technical Field
The invention belongs to the field of robot technology, and in particular relates to a robot monitoring system.
Background Art
Mainstream robot monitoring currently relies on an image acquisition device that records 24 hours a day from a single angle. In most scenarios, users are only interested in the portions of the footage that contain valid information, while continuous monitoring produces a large number of static, useless recordings; these not only make it inconvenient for users to find the footage they care about, but also occupy a large amount of storage space with useless static video. In addition, single-angle monitoring cannot keep the monitored view following the content the user is interested in, and therefore cannot meet the user's monitoring requirements.
Summary of the Invention
In view of the above technical problems, a robot monitoring system and method based on human body information are provided to overcome the defects of the prior art.
The specific technical solution is as follows:
A robot monitoring system based on human body information, including:
an image acquisition unit for collecting images;
a human body detecting unit, connected to the image acquisition unit, for determining whether the image contains human body information matching a training sample;
a target acquiring unit, connected to the human body detecting unit, for acquiring the position information and size information of the human body information in the image;
a target tracking unit, connected to the image acquisition unit and the target acquiring unit, which acquires a current motion region according to frame differences between predetermined image frames and acquires a moving target in the motion region;
an adjustment unit, connected to the target tracking unit and the image acquisition unit, which adjusts the orientation of the image acquisition unit so that the moving target is located at the center of the image.
In the above robot monitoring system, the target tracking unit includes:
a frame difference operation unit, which operates on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
a motion map acquiring unit, connected to the frame difference operation unit, which obtains a current motion map from the first difference image and the second difference image;
a moving target acquiring unit, connected to the motion map acquiring unit, which acquires the set of points in the motion map satisfying a set condition and obtains the moving target from that set.
In the above robot monitoring system, the adjustment unit includes:
a judging unit for determining the distance of the moving target from the center of the image and generating an adjustment signal when that distance is greater than a set threshold.
The above robot monitoring system further includes a storage unit, connected to the image acquisition unit, for storing the images.
In the above robot monitoring system, the human body information is face information.
A robot monitoring method based on human body information is also provided, comprising the following steps:
Step 1: collect an image;
Step 2: determine whether the image contains human body information matching a training sample; if not, repeat Step 1;
Step 3: acquire the position information and size information of the human body information in the image;
Step 4: acquire a current motion region according to frame differences between predetermined image frames, and acquire a moving target in the motion region;
Step 5: adjust the orientation of the image acquisition unit so that the moving target is located at the center of the image.
In the above robot monitoring method, Step 4 is specifically as follows:
Step 41: operate on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
Step 42: obtain a current motion map from the first difference image and the second difference image;
Step 43: acquire the set of points in the motion map satisfying a set condition, where each point represents a motion element, and find all connected motion elements as suspected moving targets;
Step 44: calculate a motion intensity value for each suspected moving target, equal to the number of its motion elements divided by the area of its bounding rectangle, and obtain all valid moving targets at the current time point from the motion intensity value and the area of each suspected moving target;
Step 45: among the valid moving targets, select the rectangular frame closest to the target object as the moving target.
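Steps 43 and 44 above can be illustrated with a minimal sketch. It assumes a binary motion map, 4-connectivity between motion elements, and example area and intensity thresholds; the patent fixes none of these values, so all three are illustrative assumptions.

```python
# Hedged sketch of Steps 43-44: label connected motion elements in a binary
# motion map, then keep only components whose motion intensity (element count
# divided by bounding-rectangle area) and area clear assumed thresholds.
# The threshold values and 4-connectivity are illustrative assumptions.

def find_moving_targets(motion_map, min_area=4, min_intensity=0.3):
    h, w = len(motion_map), len(motion_map[0])
    seen = [[False] * w for _ in range(h)]
    targets = []
    for y in range(h):
        for x in range(w):
            if motion_map[y][x] == 0 or seen[y][x]:
                continue
            # flood-fill one connected component of motion elements
            stack, pixels = [(y, x)], []
            seen[y][x] = True
            while stack:
                cy, cx = stack.pop()
                pixels.append((cy, cx))
                for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                    if 0 <= ny < h and 0 <= nx < w and motion_map[ny][nx] != 0 and not seen[ny][nx]:
                        seen[ny][nx] = True
                        stack.append((ny, nx))
            ys = [p[0] for p in pixels]
            xs = [p[1] for p in pixels]
            rect = (min(xs), min(ys), max(xs), max(ys))
            area = (rect[2] - rect[0] + 1) * (rect[3] - rect[1] + 1)
            intensity = len(pixels) / area  # motion elements per unit rectangle area
            if area >= min_area and intensity >= min_intensity:
                targets.append({"rect": rect, "intensity": intensity})
    return targets
```

For example, a 2x2 block of motion elements survives the filter, while a single isolated pixel is discarded as too small to be a motion region.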
In the above robot monitoring method, the training sample is obtained by pre-training before Step 1.
In the above robot monitoring method, when Steps 1 to 4 have been executed repeatedly for a certain duration, the video of the period between the time point at which Step 2 first passes and the time point at which Step 2 last fails is saved to a monitoring storage center.
In the above robot monitoring method, face information is used as the human body information.
Beneficial effects: the above technical solution tracks the moving target by detecting human body information, so that the monitored view always follows the position of the human body, effectively realizing human body positioning and tracking and monitoring the content information of interest.
Brief Description of the Drawings
FIG. 1 is a structural block diagram of the system of the present invention;
FIG. 2 is a structural block diagram of the target tracking unit of the present invention;
FIG. 3 is a schematic flow chart of the method of the present invention.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
It should be noted that, where no conflict arises, the embodiments of the present invention and the features of the embodiments may be combined with one another.
The present invention is further described below with reference to the drawings and specific embodiments, which do not limit the invention.
Referring to FIG. 1, a robot monitoring system based on human body information includes:
an image acquisition unit 1 for collecting images;
a human body detecting unit 2, connected to the image acquisition unit 1, for determining whether the image contains human body information matching a training sample;
a target acquiring unit 3, connected to the human body detecting unit 2, for acquiring the position information and size information of the human body information in the image;
a target tracking unit 5, connected to the image acquisition unit 1 and the target acquiring unit 3, which acquires a current motion region according to frame differences between predetermined image frames and acquires a moving target in the motion region;
an adjustment unit 4, connected to the target tracking unit 5 and the image acquisition unit 1, which adjusts the orientation of the image acquisition unit 1 so that the moving target is located at the center of the image.
In the above robot monitoring system, the target tracking unit 5 includes:
a frame difference operation unit 51, which operates on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
a motion map acquiring unit 52, connected to the frame difference operation unit 51, which obtains a current motion map from the first difference image and the second difference image;
a moving target acquiring unit 53, connected to the motion map acquiring unit 52, which acquires the set of points in the motion map satisfying a set condition and obtains the moving target from that set.
In the above robot monitoring system, the adjustment unit 4 includes:
a judging unit for determining the distance of the moving target from the center of the image and generating an adjustment signal when that distance is greater than the set threshold.
The above robot monitoring system further includes a storage unit, connected to the image acquisition unit 1, for storing the images.
In the above robot monitoring system, the human body information is face information.
A robot monitoring method based on human body information is also provided, comprising the following steps:
Step 1: collect an image;
Step 2: determine whether the image contains human body information matching a training sample; if not, repeat Step 1;
Step 3: acquire the position information and size information of the human body information in the image;
Step 4: acquire a current motion region according to frame differences between predetermined image frames, and acquire a moving target in the motion region;
Step 5: adjust the orientation of the image acquisition unit 1 so that the moving target is located at the center of the image.
The present invention addresses the shortcomings of prior-art image acquisition devices that monitor 24 hours a day from a single angle: by detecting human body information and tracking the person once human body information is detected, the monitored view always stays at the human body's position, effectively realizing human body positioning and tracking so that the content information of interest is monitored.
In the above robot monitoring method, Step 4 is specifically as follows:
Step 41: operate on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
Step 42: obtain a current motion map from the first difference image and the second difference image;
Step 43: acquire the set of points in the motion map satisfying a set condition, where each point represents a motion element, and find all connected motion elements as suspected moving targets;
Step 44: calculate a motion intensity value for each suspected moving target, equal to the number of its motion elements divided by the area of its bounding rectangle, and obtain all valid moving targets at the current time point from the motion intensity value and the area of each suspected moving target;
Step 45: among the valid moving targets, select the rectangular frame closest to the target object as the moving target.
A specific embodiment of the above target tracking method: the position information and size information obtained in Step 3 are used to initialize the tracking start information; the frame difference method is then applied. First, the image information of the current image frame and the two preceding image frames is acquired; a first difference image img1 is obtained by subtracting the third image frame from the first image frame, and a second difference image img2 by subtracting the third image frame from the second image frame; img1 and img2 are superimposed to obtain a motion map motionmap. Secondly, each nonzero point in motionmap represents a motion element; all connected motion elements are found, and each group of connected motion elements forms a suspected moving target simobjs. Finally, the motion intensity dev of every suspected moving target is calculated as dev = number of motion elements / area of the suspected target's rectangle; a larger motion intensity value means richer motion information in the region, a smaller one sparser motion. In addition, if the area of a suspected target's rectangle is below a certain threshold, the region is not considered a motion region and is filtered out. At this point, all valid moving targets objs at the current time point have been obtained; among these targets, the rectangular frame of objs closest to the target object is selected, and the operation of Step 1 is repeated.
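The frame-difference step of this embodiment can be sketched as follows. Treating the superimposing of img1 and img2 as a per-pixel OR of thresholded absolute differences is an assumption, as is the threshold value; the text does not fix either operator.

```python
# Hedged sketch of the embodiment's frame-difference step: given the current
# frame and the two preceding frames as grayscale arrays, form two difference
# images against the oldest frame and superimpose them into a motion map.
# The OR combination and the threshold of 10 are illustrative assumptions.

def build_motion_map(frame1, frame2, frame3, thresh=10):
    """frame1: current frame, frame2: previous frame, frame3: the one before."""
    h, w = len(frame1), len(frame1[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d1 = abs(frame1[y][x] - frame3[y][x])  # first difference image img1
            d2 = abs(frame2[y][x] - frame3[y][x])  # second difference image img2
            out[y][x] = 1 if (d1 > thresh or d2 > thresh) else 0
    return out
```

Each nonzero entry of the returned map is one motion element, ready for the connected-element grouping described above.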
Step 5 is specifically as follows: determine by analysis whether the moving target obj is at the center of the image; if obj has moved more than a certain distance from the image center, the robot is adjusted by the same distance in the target object's direction of motion, keeping the center of the target object always at the center of the image.
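This adjustment rule can be sketched as a small helper. The pixel threshold and the convention of returning a signed (dx, dy) offset for the robot to pan through are illustrative assumptions not fixed by the text.

```python
# Hedged sketch of Step 5: if the tracked target's rectangle center drifts
# more than a threshold from the image center, emit a pan offset equal to
# the drift; otherwise emit no adjustment signal. Threshold and offset
# convention are assumptions.

def center_adjustment(target_rect, image_size, threshold=20):
    x0, y0, x1, y1 = target_rect
    tx, ty = (x0 + x1) / 2, (y0 + y1) / 2          # target rectangle center
    cx, cy = image_size[0] / 2, image_size[1] / 2  # image center
    dx, dy = tx - cx, ty - cy
    if (dx * dx + dy * dy) ** 0.5 <= threshold:
        return None  # target close enough to center: no adjustment signal
    return (dx, dy)  # signed offset the robot would pan through to re-center
```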
In the above robot monitoring method, the training sample is obtained by pre-training before Step 1.
When the acquired image is judged to contain human body information matching the training sample, the position and size information of the human body information in the image is further acquired; if no human body information exists, detection of human body information continues.
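The detection loop of Steps 1 to 3 can be sketched as follows, where `capture` stands in for the image acquisition unit and `detect` for the detector trained on the training sample; both names and the rectangle format returned by `detect` are assumptions.

```python
# Hedged sketch of the Steps 1-3 loop: keep collecting frames until the
# detector reports matching human body information, then return its position
# and size in the image. `capture` and `detect` are assumed callables
# standing in for the acquisition unit and the trained detector.

def wait_for_human(capture, detect):
    while True:
        image = capture()            # Step 1: collect an image
        detection = detect(image)    # Step 2: match against the training sample
        if detection is not None:
            x, y, w, h = detection   # Step 3: position and size in the image
            return {"position": (x, y), "size": (w, h)}
```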
In the above robot monitoring method, when Steps 1 to 4 have been executed repeatedly for a certain duration, the video of the period between the time point at which Step 2 first passes and the time point at which Step 2 last fails is saved to a monitoring storage center.
Specifically, if the above steps have been repeated for a certain duration, human motion information is considered to exist at that time; the time point at which the Step 2 judgment first passes and the time point at which it last fails are selected and extended forwards and backwards by 5 s to 10 s, respectively, and the video of this period is saved to the monitoring storage center.
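The clip-selection rule can be sketched as a small calculation. The 5 s default padding is one choice within the 5 s to 10 s range the description gives; clamping to the recording bounds is an added safeguard, not stated in the text.

```python
# Hedged sketch of the clip-selection rule: pad the interval between the
# first successful and the last failed Step 2 check on both sides, clamped
# to the recording bounds. The 5 s default pad is an assumption within the
# 5-10 s range the description allows.

def clip_window(first_pass_t, last_fail_t, pad=5.0, record_start=0.0, record_end=None):
    start = max(record_start, first_pass_t - pad)  # extend forwards (earlier)
    end = last_fail_t + pad                        # extend backwards (later)
    if record_end is not None:
        end = min(end, record_end)
    return start, end
```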
A further alternative is to decide, based on motion information detection, whether the current state requires a surveillance video to be recorded.
In the above robot monitoring method, face information is used as the human body information.
The above technical solution can be used in a home intelligent robot, which detects and tracks human body information to judge whether a given time point is a suitable moment for monitoring recording; it can also be used by outdoor surveillance camera manufacturers to alert users to the video segments of interest to them while still recording information 24 hours a day.
The above are only preferred embodiments of the present invention and do not limit its implementation or scope of protection. Those skilled in the art should appreciate that all solutions obtained through equivalent substitutions and obvious variations made using the contents of the specification and drawings of the present invention fall within the scope of protection of the present invention.

Claims (10)

  1. A robot monitoring system based on human body information, characterized by including:
    an image acquisition unit for collecting images;
    a human body detecting unit, connected to the image acquisition unit, for determining whether the image contains human body information matching a training sample;
    a target acquiring unit, connected to the human body detecting unit, for acquiring the position information and size information of the human body information in the image;
    a target tracking unit, connected to the image acquisition unit and the target acquiring unit, which acquires a current motion region according to frame differences between predetermined image frames and acquires a moving target in the motion region;
    an adjustment unit, connected to the target tracking unit and the image acquisition unit, which adjusts the orientation of the image acquisition unit so that the moving target is located at the center of the image.
  2. The robot monitoring system based on human body information according to claim 1, characterized in that the target tracking unit includes:
    a frame difference operation unit, which operates on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
    a motion map acquiring unit, connected to the frame difference operation unit, which obtains a current motion map from the first difference image and the second difference image;
    a moving target acquiring unit, connected to the motion map acquiring unit, which acquires the set of points in the motion map satisfying a set condition and obtains the moving target from that set.
  3. The robot monitoring system based on human body information according to claim 1, characterized in that the adjustment unit includes:
    a judging unit for determining the distance of the moving target from the center of the image and generating an adjustment signal when that distance is greater than a set threshold.
  4. The robot monitoring system based on human body information according to claim 1, characterized by further including a storage unit, connected to the image acquisition unit, for storing the images.
  5. The robot monitoring system based on human body information according to claim 1, characterized in that the human body information is face information.
  6. A robot monitoring method based on human body information, characterized by comprising the following steps:
    Step 1: collect an image;
    Step 2: determine whether the image contains human body information matching a training sample; if not, repeat Step 1;
    Step 3: acquire the position information and size information of the human body information in the image;
    Step 4: acquire a current motion region according to frame differences between predetermined image frames, and acquire a moving target in the motion region;
    Step 5: adjust the orientation of the image acquisition unit so that the moving target is located at the center of the image.
  7. The robot monitoring method based on human body information according to claim 6, characterized in that Step 4 is specifically as follows:
    Step 41: operate on the acquired image information of the (n+2)th, (n-1)th, and nth frames to obtain the frame difference between the pixel grayscale of the (n-1)th frame and that of the (n+2)th frame as a first difference image, and the frame difference between the pixel grayscale of the nth frame and that of the (n-2)th frame as a second difference image;
    Step 42: obtain a current motion map from the first difference image and the second difference image;
    Step 43: acquire the set of points in the motion map satisfying a set condition, where each point represents a motion element, and find all connected motion elements as suspected moving targets;
    Step 44: calculate a motion intensity value for each suspected moving target, equal to the number of its motion elements divided by the area of its bounding rectangle, and obtain all valid moving targets at the current time point from the motion intensity value and the area of each suspected moving target;
    Step 45: among the valid moving targets, select the rectangular frame closest to the target object as the moving target.
  8. The robot monitoring method based on human body information according to claim 6, characterized in that the training sample is obtained by pre-training before Step 1.
  9. The robot monitoring method based on human body information according to claim 6, characterized in that, when Steps 1 to 4 have been executed repeatedly for a certain duration, the video of the period between the time point at which Step 2 first passes and the time point at which Step 2 last fails is saved to a monitoring storage center.
  10. The robot monitoring method based on human body information according to claim 6, characterized in that face information is used as the human body information.
PCT/CN2017/074048 2016-02-23 2017-02-20 Robot monitoring system based on human body information WO2017143949A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/999,670 US11158064B2 (en) 2016-02-23 2017-02-20 Robot monitoring system based on human body information

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610099509.5A CN107105193B (zh) 2016-02-23 2016-02-23 Robot monitoring system based on human body information
CN201610099509.5 2016-02-23

Publications (1)

Publication Number Publication Date
WO2017143949A1 true WO2017143949A1 (zh) 2017-08-31

Family

ID=59658324

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/074048 WO2017143949A1 (zh) 2016-02-23 2017-02-20 Robot monitoring system based on human body information

Country Status (4)

Country Link
US (1) US11158064B2 (zh)
CN (1) CN107105193B (zh)
TW (1) TWI615026B (zh)
WO (1) WO2017143949A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116453062A (zh) * 2023-06-12 2023-07-18 青岛义龙包装机械有限公司 Packaging machine assembly risk monitoring method based on high-precision compliant robot assembly

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107621694A (zh) * 2017-09-15 2018-01-23 长春市求非光学科技有限公司 Method for an electronic astronomical telescope to track celestial bodies, and electronic astronomical telescope
CN112528817B (zh) * 2020-12-04 2024-03-19 重庆大学 Neural-network-based visual detection and tracking method for an inspection robot
CN112822470A (zh) * 2020-12-31 2021-05-18 济南景雄影音科技有限公司 Projection interaction system and method based on human body image tracking
CN115314717B (zh) * 2022-10-12 2022-12-20 深流微智能科技(深圳)有限公司 Video transmission method and apparatus, electronic device, and computer-readable storage medium
CN117061708A (zh) * 2023-09-15 2023-11-14 威海嘉瑞光电科技股份有限公司 Smart home camera control method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893894A (zh) * 2010-06-30 2010-11-24 上海交通大学 Reconfigurable micro mobile robot swarm positioning and tracking system
CN102096927A (zh) * 2011-01-26 2011-06-15 北京林业大学 Target tracking method for an autonomous forestry robot
CN102456225A (zh) * 2010-10-22 2012-05-16 深圳中兴力维技术有限公司 Video surveillance system and moving-target detection and tracking method thereof
US20130155226A1 (en) * 2011-12-19 2013-06-20 Electronics And Telecommunications Research Institute Object tracking system using robot and object tracking method using a robot
CN104751483A (zh) * 2015-03-05 2015-07-01 北京农业信息技术研究中心 Method for monitoring abnormal conditions in the working area of a warehouse logistics robot

Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1178467C (zh) * 1998-04-16 2004-12-01 三星电子株式会社 Method and apparatus for automatically tracking a moving target
WO2001069930A1 (en) * 2000-03-10 2001-09-20 Sensormatic Electronics Corporation Method and apparatus for object surveillance with a movable camera
US7385626B2 (en) * 2002-10-21 2008-06-10 Sarnoff Corporation Method and system for performing surveillance
TW200713137A (en) * 2005-09-28 2007-04-01 chong-de Liao Multi-functional monitoring system
CN100499808C (zh) * 2007-02-07 2009-06-10 北京航空航天大学 Method for automatically controlling camera motion based on the image position of a target
CN100508599C (zh) * 2007-04-24 2009-07-01 北京中星微电子有限公司 Automatic tracking control method and control device in video surveillance
CN101211411B (zh) * 2007-12-21 2010-06-16 北京中星微电子有限公司 Human body detection method and apparatus
AU2009236675A1 (en) * 2008-04-14 2009-10-22 Gvbb Holdings S.A.R.L. Technique for automatically tracking an object
JP4670943B2 (ja) * 2008-11-27 2011-04-13 ソニー株式会社 Monitoring apparatus and tampering detection method
TWI365662B (en) * 2008-12-03 2012-06-01 Inst Information Industry Method and system for digital image stabilization and computer program product using the method thereof
CN101547344B (zh) * 2009-04-24 2010-09-01 清华大学深圳研究生院 Video surveillance device based on linked cameras and tracking and recording method therefor
CN101814242A (zh) * 2010-04-13 2010-08-25 天津师范大学 Real-time moving-target tracking and lecture-recording device for teachers' lectures
CN103024344A (zh) * 2011-09-20 2013-04-03 佳都新太科技股份有限公司 Particle-filter-based method for automatic PTZ target tracking
CN103259962B (zh) * 2013-04-17 2016-02-17 深圳市捷顺科技实业股份有限公司 Target tracking method and related device
WO2016126297A2 (en) * 2014-12-24 2016-08-11 Irobot Corporation Mobile security robot
CN104751486B (zh) * 2015-03-20 2017-07-11 安徽大学 Moving-target relay tracking algorithm for multiple PTZ cameras

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101893894A (zh) * 2010-06-30 2010-11-24 上海交通大学 Reconfigurable micro mobile robot swarm positioning and tracking system
CN102456225A (zh) * 2010-10-22 2012-05-16 深圳中兴力维技术有限公司 Video surveillance system and moving-target detection and tracking method thereof
CN102096927A (zh) * 2011-01-26 2011-06-15 北京林业大学 Target tracking method for an autonomous forestry robot
US20130155226A1 (en) * 2011-12-19 2013-06-20 Electronics And Telecommunications Research Institute Object tracking system using robot and object tracking method using a robot
CN104751483A (zh) * 2015-03-05 2015-07-01 北京农业信息技术研究中心 Method for monitoring abnormal conditions in the working area of a warehouse logistics robot

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116453062A (zh) * 2023-06-12 2023-07-18 青岛义龙包装机械有限公司 Packaging machine assembly risk monitoring method based on high-precision compliant robot assembly
CN116453062B (zh) * 2023-06-12 2023-08-22 青岛义龙包装机械有限公司 Packaging machine assembly risk monitoring method based on high-precision compliant robot assembly

Also Published As

Publication number Publication date
TW201731280A (zh) 2017-09-01
CN107105193B (zh) 2020-03-20
US11158064B2 (en) 2021-10-26
TWI615026B (zh) 2018-02-11
CN107105193A (zh) 2017-08-29
US20210209769A1 (en) 2021-07-08

Similar Documents

Publication Publication Date Title
WO2017143949A1 (zh) Robot monitoring system based on human body information
WO2020173226A1 (zh) Spatio-temporal action detection method
JP6877630B2 (ja) アクションを検出する方法及びシステム
Haque et al. Heartbeat rate measurement from facial video
US20170277200A1 (en) Method for controlling unmanned aerial vehicle to follow face rotation and device thereof
KR101548834B1 (ko) 클라우드 보조 증강 현실을 위한 적응적 프레임워크
CN108038837B (zh) 视频中目标检测方法及系统
WO2004111687A3 (en) Target orientation estimation using depth sensing
CN109376601B (zh) 基于高速球的物体跟踪方法、监控服务器、视频监控系统
CN107862240B (zh) 一种多摄像头协同的人脸追踪方法
KR20170074076A (ko) 능동형 교통 신호 제어 방법 및 그 시스템
US20110115920A1 (en) Multi-state target tracking mehtod and system
WO2006037057A2 (en) View handling in video surveillance systems
CN108737785B (zh) 基于tof 3d摄像机的室内跌倒自动检测系统
Tawari et al. Attention estimation by simultaneous analysis of viewer and view
CN108965713A (zh) 图像采集方法、装置以及计算机可读存储介质
JP2010140425A (ja) 画像処理システム
CN110378183B (zh) 图像解析装置、图像解析方法及记录介质
CN111627049A (zh) 高空抛物的确定方法、装置、存储介质及处理器
JP2011188332A5 (ja) 画像揺れ補正装置および画像揺れ補正方法
WO2013132711A1 (ja) 物体検知装置、物体検知方法及びプログラム
US10708600B2 (en) Region of interest determination in video
CN109460077B (zh) 一种自动跟踪方法、自动跟踪设备及自动跟踪系统
CN106897984A (zh) 一种面向静态相机的非线性背景模型更新方法
Petrovic et al. Dynamic image fusion performance evaluation

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17755778

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17755778

Country of ref document: EP

Kind code of ref document: A1