WO2016074169A1 - Method for detecting a target object, detection device, and robot - Google Patents

Method for detecting a target object, detection device, and robot

Info

Publication number
WO2016074169A1
Authority
WO
WIPO (PCT)
Prior art keywords
target object
state information
information
image
preset
Prior art date
Application number
PCT/CN2014/090907
Other languages
English (en)
French (fr)
Inventor
魏基栋
林任
陈晟洋
张华森
任军
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Priority to CN201480021249.9A (CN105518702B)
Priority to PCT/CN2014/090907 (WO2016074169A1)
Priority to JP2016558217A (JP6310093B2)
Publication of WO2016074169A1
Priority to US15/593,559 (US10551854B2)
Priority to US16/773,011 (US11392146B2)

Classifications

    • G06V 20/42: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items, of sport video content
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors; perception control, multi-sensor controlled systems, sensor fusion
    • G05B 19/406: Numerical control [NC] characterised by monitoring or safety
    • G05D 1/0094: Control of position, course, altitude or attitude of land, water, air or space vehicles, involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G05D 1/12: Target-seeking control
    • G06T 7/11: Image analysis; region-based segmentation
    • G06T 7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06V 10/454: Local feature extraction using biologically inspired filters integrated into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183: CCTV systems for receiving images from a single remote source
    • H04N 7/188: CCTV systems capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • G05B 2219/24097: Camera monitors controlled machine
    • G05B 2219/40577: Multisensor object recognition
    • G06T 2207/10024: Color image (image acquisition modality)
    • G06V 2201/07: Target detection

Definitions

  • the present invention relates to the field of computer control technologies, and in particular, to a method, a detecting device and a robot for detecting a target object.
  • the target tracking technology based on video images combines technologies in the fields of vision, pattern recognition and artificial intelligence.
  • the main applications of the technology include vehicle tracking and identification, security monitoring, and artificial intelligence robots.
  • Video-image-based tracking in the prior art can, in general, determine a certain target object from an image fairly accurately. However, in some cases, merely determining the target object from the video image cannot satisfy intelligence requirements; for example, in the field of artificial intelligence robots, and especially in the field of intelligent robot competition, the existing target tracking technology needs to be further improved.
  • the embodiment of the invention provides a method for detecting a target object, a detecting device and a robot, which can satisfy the user's automatic and intelligent requirements for object tracking and moving state estimation.
  • Embodiments of the present invention provide a method for detecting a target object, including:
  • the detecting device identifies the target object in the monitored area, and calculates first state information of the target object relative to the detecting device;
  • second state information of the target object after a preset delay duration has elapsed is estimated according to the first state information; and
  • a processing operation on the target object is performed based on the estimated second state information.
  • the first state information includes: first location information, speed information, and moving direction information of the target object with respect to the detecting device;
  • the detecting device calculates a displacement of the target object after a preset delay duration according to the speed information;
  • second position information of the target object after moving is estimated according to the first position information, the moving direction information, and the calculated displacement; and
  • the second position information is determined as the second state information.
  • the identifying the target object in the monitored area includes:
  • the detecting device acquires an image of the detected area
  • the image is analyzed to identify whether it includes a target object having a specified color feature; and
  • if so, the identified object is determined as the target object.
  • the analyzing to identify whether the image includes a target object having a specified color feature includes:
  • performing color detection on the acquired image based on a preset color interval, and determining an initial object region having the specified color feature in the image;
  • quantizing the acquired image into a binary map;
  • performing connected region detection on the region corresponding to the initial object region in the binary map, and determining the contours of the initial object regions;
  • performing a merging operation on the determined contours based on a preset merge rule; and
  • filtering each region obtained after the merging according to preset filtering shape or filtering size information, and using the filtered regions as the target object.
  • the performing connected region detection on the region corresponding to the initial object region in the binary map includes:
  • filtering out noise points in the binary map; and
  • performing connected region detection on the region corresponding to the initial object region in the binary map after the noise points are filtered out.
  • the performing a merging operation on the determined contours based on a preset merge rule includes: calculating the distance between two adjacent contours according to the determined edge position coordinates of each contour; determining the color similarity between the two adjacent contours; and merging two adjacent contours whose distance value and similarity meet preset merge distance and similarity requirements.
  • alternatively, the performing a merging operation on the determined contours based on a preset merge rule includes: detecting whether the area between two adjacent connected regions conforms to a preset occluding object feature; and if so, determining that the two adjacent connected regions satisfy the preset merge rule and merging the two adjacent contours.
  • the calculating first state information of the target object relative to the detecting device includes:
  • calculating, according to the monitored image, movement state information of the target object relative to the detecting device;
  • calculating a distance value from the target object to the detecting device;
  • if the distance value is greater than a preset distance threshold, calculating the movement state information of the target object relative to the detecting device again according to a new monitored image, and determining again whether the distance value from the target object to the detecting device is less than the preset distance threshold, repeating this step until the movement state information at a distance not greater than the preset distance threshold is determined; and
  • if the distance value is not greater than the preset distance threshold, determining the movement state information as the first state information.
  • the calculating movement state information of the target object relative to the detecting device includes:
  • determining the pixel coordinates of the target object from an image acquired at the current time, and performing coordinate mapping conversion to obtain initial position information of the target object in a coordinate system centered on the detecting device;
  • determining the pixel coordinates of the target object from an image acquired after a preset time interval, and performing coordinate mapping conversion to obtain moving position information of the target object in the coordinate system centered on the detecting device;
  • determining speed information and direction information of the target object according to the initial position information, the moving position information, and the preset time interval; and
  • using the determined speed information, direction information, and moving position information as the movement state information of the target object.
  • the method further includes:
  • the detecting device performs position estimation for each object to be monitored based on the movement state information of each object, and associates and distinguishes each object according to the obtained position estimates and the actual positions of the objects to be monitored in the image.
  • the performing a processing operation on the target object according to the estimated second state information includes: adjusting rotation parameters of a gimbal according to the second position information in the estimated second state information and the direction relative to the detecting device, so that a load mounted on the gimbal aims at the target object.
  • an embodiment of the present invention further provides a detecting apparatus, including:
  • An identification module configured to identify a target object in the monitored area, and calculate first state information of the target object relative to the detecting device;
  • a processing module configured to estimate, according to the first state information, second state information after the target object has experienced a preset delay time value
  • control module configured to perform a processing operation on the target object according to the estimated second state information.
  • the first state information includes: first location information, speed information, and moving direction information of the target object with respect to the detecting device;
  • the processing module is configured to calculate a displacement of the target object after the preset delay time value according to the speed information; and estimate the location according to the first position information, the moving direction information, and the calculated displacement Determining second position information after the target object is moved; determining the second position information as second state information.
  • the identifying module includes:
  • An acquiring unit configured to acquire an image of the detected area
  • An identification unit configured to analyze whether the target object having the specified color feature is included in the image
  • a determining unit configured to determine the identified target object as the target object when the recognition result of the identification unit is included.
  • the identifying unit is specifically configured to perform color detection on the acquired image based on a preset color interval, and determine an initial object region having the specified color feature in the image;
  • quantize the acquired image into a binary map; perform connected region detection on the region corresponding to the initial object region in the binary map, and determine the contours of the initial object regions; perform a merging operation on the determined contours based on a preset merge rule; and filter each region obtained after the merging according to preset filtering shape or filtering size information, and use the filtered regions as the target object.
  • the identifying unit, when performing connected region detection on the region corresponding to the initial object region in the binary map, is specifically configured to filter out noise points in the binary map, and perform connected region detection on the region corresponding to the initial object region in the binary map after the noise points are filtered out.
  • the identifying unit, when performing the merging operation on the determined contours based on the preset merge rule, is specifically configured to calculate the distance between two adjacent contours according to the determined edge position coordinates of each contour, determine the color similarity between the two adjacent contours, and merge two adjacent contours whose distance value and similarity meet the preset merge distance and similarity requirements.
  • the identifying unit, when performing the merging operation on the determined contours based on the preset merge rule, is specifically configured to detect whether the area between two adjacent connected regions conforms to a preset occluding object feature, and, if so, determine that the two adjacent connected regions satisfy the preset merge rule and merge the two adjacent contours.
  • the identifying module further includes:
  • a state calculation unit configured to calculate, according to the monitored image, movement state information of the target object relative to the detecting device
  • a distance calculation unit configured to calculate a distance value of the target object to the detecting device
  • a state processing unit configured to, if the distance value is greater than a preset distance threshold, calculate the movement state information of the target object relative to the detecting device again according to a new monitored image, determine again whether the distance value from the target object to the detecting device is less than the preset distance threshold, and repeat this step until the movement state information at a distance not greater than the preset distance threshold is determined;
  • the state determining unit is configured to determine the moving state information as the first state information if it is not greater than a preset distance threshold.
  • the state calculation unit or the state processing unit, when calculating the movement state information of the target object relative to the detecting device, is specifically configured to determine the pixel coordinates of the target object from the image acquired at the current time and perform coordinate mapping conversion to obtain initial position information of the target object in a coordinate system centered on the detecting device; determine the pixel coordinates of the target object from an image acquired after a preset time interval and perform coordinate mapping conversion to obtain moving position information of the target object in the coordinate system centered on the detecting device; determine speed information and direction information of the target object according to the initial position information, the moving position information, and the preset time interval; and use the determined speed information, direction information, and moving position information as the movement state information of the target object.
  • the device further includes:
  • the distinguishing module is configured to perform position estimation on each object to be monitored based on the moving state information of each object, and associate each object according to the obtained position estimation value and the actual position of each object to be monitored in the image.
  • the control module is configured to adjust rotation parameters of the gimbal according to the second position information in the estimated second state information and the direction relative to the detecting device, so that a load mounted on the gimbal aims at the target object.
  • an embodiment of the present invention further provides a robot, including: an image collection device and a processor, where:
  • the image capture device is configured to capture an image of a detected area
  • the processor is configured to identify a target object in the monitored area according to the image captured by the image capturing device, and calculate first state information of the target object relative to the detecting device; according to the first state information, Estimating second state information of the target object after undergoing a preset delay time value; performing a processing operation on the target object according to the estimated second state information.
  • the embodiment of the invention can realize recognition of the target object and estimation of states such as its moving position, can complete tracking and state estimation of the target object quickly and accurately, adds a new function, and satisfies the user's requirements for automated, intelligent object tracking and movement state estimation.
  • FIG. 1 is a schematic flow chart of a method for detecting a target object according to an embodiment of the present invention
  • FIG. 2 is a schematic flow chart of another method for detecting a target object according to an embodiment of the present invention.
  • FIG. 3 is a schematic flow chart of a method for identifying a target object having a specific color feature according to an embodiment of the present invention
  • FIG. 4 is a schematic flow chart of a method for calculating a mobile state according to an embodiment of the present invention.
  • Figure 5 is a schematic diagram of state estimation of a target object
  • FIG. 6 is a schematic structural diagram of a detecting device according to an embodiment of the present invention.
  • Figure 7 is a schematic structural diagram of one implementation of the identification module in Figure 6;
  • FIG. 8 is a schematic structural view of a robot according to an embodiment of the present invention.
  • the moving state of the target object at the next moment can be estimated based on the current moving state of the target object, and the target object tracking and predicting function can be realized.
  • the pixel coordinates in the image can be converted into actual position coordinates, and then the motion speed and direction are calculated based on the time interval and the displacement of the actual position coordinates of the time interval, and the movement state including the position, the movement speed, and the direction is determined.
  • FIG. 1 is a schematic flowchart of a method for detecting a target object according to an embodiment of the present invention.
  • the method of the embodiment of the present invention can be applied to an object tracking apparatus and is implemented by a detecting device, for example a competitive robot.
  • the method includes:
  • the detecting device identifies the target object in the monitored area, and calculates first state information of the target object relative to the detecting device.
  • a module such as a camera may be called to capture an image of the monitored area, and whether the target object is included in the area at the current time is recognized based on the preset color and/or contour shape, size, and the like of the target object. If it is included, the calculation of the first state information is performed; if not, the camera angle continues to be changed or the detecting device is moved, and images are captured to find the target object.
  • This camera is a calibrated camera.
  • the first state information of the target object relative to the detecting device may be calculated, and the first state information may specifically include: a position, a moving speed of the target object relative to the detecting device, Direction and other information. The details can also be calculated based on the captured image including the target object.
  • the calculation of the position of the target object relative to the detecting device may be: determining the pixel position coordinates of the region where the target object is located in the image; and mapping the pixel coordinates into a coordinate system centered on the detecting device, to obtain the position coordinates of the target object in the coordinate system centered on the detecting device. These position coordinates are the position information of the target object relative to the detecting device. It should be noted that the pixel position coordinates and the position coordinates relative to the detecting device may be only the position coordinates of the geometric center point of the target object, and the coordinate system centered on the detecting device may take the center point of the camera of the detecting device as the origin of the coordinate system.
  • at the next moment (for example, one second later), the new relative position of the target object can be calculated; from the new relative position and the relative position at the previous moment, the displacement and relative moving direction of the target object are obtained, and the current moving speed can then be calculated from the displacement and the time interval.
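  • As a rough, non-authoritative sketch of the calculation just described, the snippet below derives speed and moving direction from two camera-centered positions observed a known time interval apart; the function name and the example coordinates are illustrative assumptions, not part of the patent.

```python
import math

def movement_state(p_prev, p_curr, dt):
    """Estimate speed and moving direction from two (x, y) positions,
    expressed in a coordinate system centered on the detecting device
    and observed dt seconds apart."""
    dx = p_curr[0] - p_prev[0]
    dy = p_curr[1] - p_prev[1]
    displacement = math.hypot(dx, dy)   # distance moved during dt
    speed = displacement / dt           # relative moving speed
    direction = math.atan2(dy, dx)      # relative moving direction (radians)
    return speed, direction

# e.g. the target moved from (2.0, 1.0) to (2.3, 1.4) within 1 second
speed, direction = movement_state((2.0, 1.0), (2.3, 1.4), 1.0)
```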
  • S102 Estimate, according to the first state information, second state information after the target object has experienced a preset delay time value.
  • the estimated second state information mainly includes a position and a direction of the target object relative to the detecting device.
  • the preset delay duration may be a preset time interval.
  • specifically, the preset delay duration is obtained by combining the estimation duration of S102, the duration required by devices such as the gimbal to adjust direction, and the loading and/or launching duration of the competitive projectile, wherein the estimation duration, the adjustment duration, and the loading and/or launching duration are duration values learned through a large number of actual duration measurements.
  • the processing operation on the target object can be performed according to actual application requirements.
  • for example, in the competitive robot, the direction angle of a device such as the gimbal may be adjusted according to the relative direction information in the second state information so as to point at the position in the second state information, and the competitive projectile is fired to strike the target object.
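  • Purely as an illustrative sketch (the patent does not give control code), the yaw angle needed to point the gimbal at the estimated relative position could be computed as below; the angle convention and the commented-out `set_gimbal_yaw` call are assumptions.

```python
import math

def aim_yaw(target_x, target_y):
    """Yaw angle (radians) that points the gimbal at a target whose
    estimated position (x forward, y to the left) is expressed in the
    coordinate system centered on the detecting device."""
    return math.atan2(target_y, target_x)

yaw = aim_yaw(1.8, -0.4)
# set_gimbal_yaw(yaw)  # hypothetical platform-specific call, not defined here
```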
  • the embodiment of the invention can realize recognition of the target object and estimation of states such as its moving position, can complete tracking and state estimation of the target object quickly and accurately, adds a new function, and satisfies the user's requirements for automated, intelligent object tracking and movement state estimation.
  • FIG. 2 is a schematic flowchart diagram of another method for detecting a target object according to an embodiment of the present invention.
  • the method of the embodiment of the present invention may be applied to an object tracking apparatus and is implemented by a detecting device, for example a competitive robot.
  • the method includes:
  • the detecting device acquires an image of the detected area.
  • an image obtained in the current environment is captured by calling a camera configured in the detecting device.
  • S202: Analyze and identify whether the image includes a target object having a specified color feature.
  • image recognition based on a color recognition strategy can identify the target object in the image simply and quickly.
  • image data of the target object may be collected in large quantities in advance and used for training to obtain the color interval in which the target object may lie; the image acquired in S201 is then color-detected using this color interval as a threshold, to find the region suspected to be the target object, that is, the target region having the specified color feature.
  • the analysis and identification of the image acquired in S201 can be referred to the description in the corresponding embodiment of FIG. 3. If an area suspected of being the target object is found, S203 described below is performed, otherwise, S201 and S202 are re-executed until an area suspected of being the target object is found in the corresponding image.
  • the first state information includes first location information, speed information, and moving direction information of the target object with respect to the detecting device.
  • the position information therein can be obtained by mapping the image pixel coordinates to the actual coordinates centered on the camera of the detecting device, and the speed and the moving direction can be calculated according to the position difference of a certain time interval.
  • the above-mentioned calibration of the camera means establishing a transformation relationship between the image coordinate system and the actual spatial coordinate system, so that the actual spatial position corresponding to a point in the image can be determined.
  • the specific calibration method is to place a unique calibration object in a certain scene and measure its position relative to the camera. By collecting a large amount of image data, the transformation relationship between the two coordinate systems is calculated. After the transformation relationship between the coordinate systems is determined, the actual space coordinates of the target object can be determined by the image coordinates.
  • the position in real space of a point in the image coordinate system can be calculated based on the calculated transformation relationship (rotation, translation, etc.). It should be noted that, with only a monocular camera, the position of a certain point in the image coordinates cannot be restored accurately in real space. In the embodiment of the present invention, the height information can be neglected when using the monocular camera, so that a relatively accurate transformation relationship can be obtained.
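  • One plausible way to realise such an image-to-ground mapping, assuming the target moves on a plane and height is neglected as described above, is a homography fitted from calibration correspondences; the OpenCV sketch below is only an assumption about how the calibration result could be applied, and the sample point lists are placeholders.

```python
import cv2
import numpy as np

# Calibration correspondences: pixel coordinates of the calibration object and
# its measured ground-plane position relative to the camera (placeholder values).
pixel_pts  = np.array([[320, 400], [420, 380], [300, 300], [500, 320]], dtype=np.float32)
ground_pts = np.array([[0.0, 1.0], [0.5, 1.2], [-0.2, 2.0], [0.9, 1.8]], dtype=np.float32)

# Fit the plane-to-plane transformation once, during calibration.
H, _ = cv2.findHomography(pixel_pts, ground_pts)

def pixel_to_ground(u, v):
    """Map an image pixel to camera-centered ground-plane coordinates."""
    pt = np.array([[[u, v]]], dtype=np.float32)
    x, y = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(x), float(y)
```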
  • the calculation of the first state information may refer to the description about the mobile state information acquisition in the corresponding embodiment of FIG. 4.
  • S205 Calculate a displacement of the target object after the preset delay duration value according to the speed information.
  • S206 Estimate the second position information after the moving of the target object according to the first position information, the movement direction information, and the calculated displacement.
  • S207 Determine the second location information as the second state information.
  • the displacement can be obtained as the product of the speed and the delay duration, and, combined with the moving direction, the exact position of the target object in the coordinate system centered on the camera of the detecting device after the delay duration can be comprehensively determined.
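  • A minimal sketch of this dead-reckoning step, assuming the speed and heading stay constant over the delay (all names and values are illustrative):

```python
import math

def predict_position(x, y, speed, direction, delay):
    """Estimate where the target will be after `delay` seconds, given its
    current camera-centered position, speed and moving direction."""
    displacement = speed * delay
    return (x + displacement * math.cos(direction),
            y + displacement * math.sin(direction))

# e.g. the second position after a preset delay of 0.8 s
x2, y2 = predict_position(2.3, 1.4, 0.5, math.atan2(0.4, 0.3), 0.8)
```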
  • before S205, the distance of the target object from the detecting device may also be determined first, and the distance value may likewise be calculated in the coordinate system centered on the camera of the detecting device.
  • if the distance value is not greater than a preset distance threshold, the process proceeds to S205; otherwise, steps S201 to S204 continue to be executed.
  • the S208 may specifically include: adjusting the rotation parameters of the gimbal according to the second position information in the estimated second state information and the direction relative to the detecting device, so that the load mounted on the gimbal aims at the target object.
  • FIG. 3 is a schematic flowchart of a method for identifying a target object having a specific color feature according to an embodiment of the present invention.
  • the method corresponds to the foregoing S202, and specifically includes:
  • S301: Perform color detection on the acquired image based on a preset color interval, and determine an initial object region in the image having the specified color feature;
  • image data of multiple target objects may be collected and trained to obtain the color interval of the target object; in S301, the acquired image is color-detected within this color interval, the region of the image suspected to be the target object is found, and the image is quantized into a binary map.
  • S303 Perform a connected area detection on an area corresponding to the initial target area in the binary image, and determine an outline of the initial target area;
  • the detecting the connected area in the area corresponding to the initial target area in the binary image includes: filtering out the noise point in the binary image; in the binary image after filtering the noise point The area corresponding to the initial target area performs connected area detection.
  • the obtained binary image can be subjected to an open operation process to filter out noise points in the color detection result.
  • because the system detects the target object by a color recognition method, other objects whose color is the same as or similar to that of the target object may exist in the environment, or corresponding color regions may be produced in the image by illumination.
  • after filtering by color information alone, color regions that are not the target may still remain, so they need to be filtered out.
  • when filtering noise, analysis of actual data shows that the size and shape of noise regions differ greatly from those of the target vehicle; therefore, noise points can be filtered out by constraining size and shape (length-to-width ratio, perimeter-to-area ratio, etc.).
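  • As a rough illustration of this pipeline (color thresholding into a binary map, an opening operation to remove noise points, connected-region detection, and size/shape filtering), a possible OpenCV sketch is shown below; the HSV interval, kernel size and filter thresholds are assumed values rather than those of the patent, and OpenCV 4 is assumed for the `findContours` return signature.

```python
import cv2

def detect_candidate_regions(image_bgr):
    """Return bounding boxes of regions whose color falls in a preset interval,
    after removing noise with an opening operation and size/shape constraints."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Preset color interval learned from training images (assumed values).
    binary = cv2.inRange(hsv, (0, 120, 80), (10, 255, 255))
    # Opening operation filters out small noise points in the binary map.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    binary = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Connected-region detection on the binary map.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        area = cv2.contourArea(c)
        aspect = w / float(h)
        # Size/shape filtering: keep regions roughly matching the target (assumed thresholds).
        if area > 200 and 0.5 < aspect < 3.0:
            boxes.append((x, y, w, h))
    return boxes
```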
  • S304 Perform a merge operation on each of the determined contours based on a preset merge rule
  • the S304 may specifically include: calculating the distance between two adjacent contours according to the determined edge position coordinates of each contour; determining the color similarity between the two adjacent contours; and merging two adjacent contours whose distance value and similarity meet the preset merge distance and similarity requirements.
  • the connected area detection is performed on the suspected area of the image, and the outer contour of each suspected area is calculated, which can be represented by a rectangular approximation. According to the distance between the connected areas and the color similarity, the connected areas with close distances and high similarity are combined.
  • alternatively, the S304 may specifically include: detecting whether the area between two adjacent connected regions conforms to a preset occluding object feature; and if so, determining that the two adjacent connected regions satisfy the preset merge rule and merging the two adjacent contours.
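  • A simplified sketch of the first merge rule above, using rectangular approximations of the contours and a histogram-based color similarity; the gap and similarity thresholds are assumptions, and the occlusion-based variant is not covered here.

```python
import cv2

def should_merge(box_a, box_b, hist_a, hist_b, max_gap=30, min_similarity=0.7):
    """Decide whether two adjacent candidate regions should be merged, based on
    the gap between their bounding rectangles and the similarity of their
    color histograms (e.g. computed with cv2.calcHist)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Gap between the two rectangles along each axis (0 if they overlap).
    gap_x = max(bx - (ax + aw), ax - (bx + bw), 0)
    gap_y = max(by - (ay + ah), ay - (by + bh), 0)
    close_enough = max(gap_x, gap_y) <= max_gap
    # Color similarity as histogram correlation in [-1, 1].
    similarity = cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL)
    return close_enough and similarity >= min_similarity
```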
  • S305 Filter each area obtained after the combination according to the preset filtering shape or the filtering size information, and use the filtered area as a target object.
  • the regions obtained after the contour merging operation may happen to be objects such as pillars in some environments, and these objects need to be filtered out.
  • a large number of images of the target object may be trained in advance to obtain the shape (for example, the length-to-width ratio) and/or size information (area) of the target object, and regions that do not conform are filtered out on this basis.
  • after the above processing, the graphical object corresponding to the target object in the image can be basically determined, and the tracking of the target object and the detection of its position, speed, and moving direction can then be performed.
  • FIG. 4 is a schematic flowchart of a method for calculating a movement state according to an embodiment of the present invention.
  • the method of the embodiment of the present invention specifically includes:
  • S401 Calculate, according to the monitored image, movement state information of the target object relative to the detecting device;
  • S402 Calculate a distance value of the target object to the detecting device
  • S403: If the distance value is greater than the preset distance threshold, calculate the movement state information of the target object relative to the detecting device again according to the new monitored image, and determine again whether the distance value from the target object to the detecting device is less than the preset distance threshold;
  • repeat this step until the movement state information at a distance not greater than the preset distance threshold is determined; each new image used after a distance determination may be acquired after a certain time interval has elapsed.
  • S404 Determine the mobile state information as the first state information if it is not greater than the preset distance threshold.
  • when the target object is far away from the detecting device, it may not be possible to perform the corresponding processing operation on it; for example, in the competitive robot, an aimed strike cannot be performed. Therefore, when the distance is relatively close, the corresponding movement state information is determined as the first state information for the subsequent determination of the second state information.
  • in S403, the continuously monitored movement state information needs to be updated until the distance from the target object to the detecting device is not greater than the preset distance threshold.
  • a control signal can be generated according to the detected movement state information (position, direction, etc.) of the target object, so that the detecting device can move toward the target object.
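  • A compact sketch of this re-measurement loop, in which the helper callables stand in for the steps described above and are assumptions rather than APIs defined by the patent:

```python
import time

def wait_until_in_range(capture_image, compute_state, compute_distance,
                        threshold, interval=0.1):
    """Keep re-computing the movement state from newly monitored images until
    the target is no farther than the preset distance threshold."""
    while True:
        image = capture_image()                    # new monitored image
        state = compute_state(image)               # movement state information
        if compute_distance(state) <= threshold:   # not greater than the threshold
            return state                           # becomes the first state information
        time.sleep(interval)                       # wait a certain time interval
```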
  • the step of calculating the movement state information of the target object relative to the detecting device may specifically include:
  • determining the pixel coordinates of the target object from the image acquired at the current time, and performing coordinate mapping conversion to obtain initial position information of the target object in a coordinate system centered on the detecting device;
  • determining the pixel coordinates of the target object from an image acquired after a preset time interval, and performing coordinate mapping conversion to obtain moving position information of the target object in the coordinate system centered on the detecting device;
  • determining speed information and direction information of the target object according to the initial position information, the moving position information, and the preset time interval; and
  • using the determined speed information, direction information, and moving position information as the movement state information of the target object.
  • the detecting device performs position estimation for each object to be monitored based on the movement state information of each object, and associates and distinguishes each object according to the obtained position estimates and the actual positions of the objects to be monitored in the image. That is to say, the detecting device can estimate the movement state (new position, speed, direction) of a target object at the current time according to its movement state (position, speed, direction) at the previous moment, and then, according to the position and color information, associate the estimated state of the target object at the current time with the detection result and update a new movement state for the corresponding target object.
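  • One simple way to realise the association described here is a nearest-neighbour match between the estimated positions and the detected positions; the greedy sketch below is only an assumption about how such an association could be coded, ignoring the color cue for brevity.

```python
import math

def associate(predicted, detected):
    """Greedily match each object's predicted (x, y) position to the closest
    detected (x, y) position, so detections are attributed to the right object."""
    assignments = {}
    remaining = list(range(len(detected)))
    for obj_id, (px, py) in predicted.items():
        if not remaining:
            break
        best = min(remaining, key=lambda i: math.hypot(detected[i][0] - px,
                                                       detected[i][1] - py))
        assignments[obj_id] = best
        remaining.remove(best)
    return assignments

# e.g. associate({"B": (2.5, 1.6), "C": (4.0, 0.2)}, [(4.1, 0.25), (2.45, 1.7)])
```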
  • the schematic diagram of FIG. 5 will be specifically described as an example.
  • the target object B and the target object C within the visual range can be monitored.
  • the target object B and the target object C are far away from the monitoring device, and the launch condition is not yet satisfied.
  • the position P, the velocity V and the orientation O of the B and C can be calculated;
  • the positions P, velocities V, and orientations O of B and C are determined by the detecting device from the detected positions of B and C at the current time and the time difference between the preceding and following moments.
  • the P, V, and O at time T2 can then be estimated, and the measured values and the estimated values are associated according to their positional relationships to determine a one-to-one correspondence.
  • at a later moment, the distance between the target B and A already satisfies the launch condition, so the state of B at time T3 (after the delay duration) is predicted based on the state (position, speed, etc.) of B at that moment, and (Pb', Ob') is obtained.
  • taking Pb' as the second state, and according to the coordinates Pb'(x, y), the pitch angle is estimated using the inertial measurement unit and a Kalman filter, with position-loop and speed-loop control, while the yaw axis adopts a highly stable, high-precision single-axis gyroscope module to realize position-loop and speed-loop control and, at the same time, follow-up of the chassis; the aiming at Pb'(x, y) is finally completed.
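  • The passage above only names a Kalman filter for the pitch estimate; as a generic, heavily simplified illustration of that idea, a scalar predict/update step for smoothing a noisy pitch-angle measurement could look like the sketch below, with all noise parameters assumed.

```python
def kalman_step(angle_est, var_est, measurement, process_var=1e-4, meas_var=1e-2):
    """One predict/update step of a scalar Kalman filter smoothing a noisy
    pitch-angle measurement (angles in radians, variances assumed)."""
    # Predict: the angle is modelled as roughly constant between steps.
    var_pred = var_est + process_var
    # Update with the new measurement.
    gain = var_pred / (var_pred + meas_var)
    angle_new = angle_est + gain * (measurement - angle_est)
    var_new = (1.0 - gain) * var_pred
    return angle_new, var_new
```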
  • the association between the estimated values and the actual positions mentioned in the foregoing method embodiment means: the state of C at time T1 is (Pc1, Vc1, Oc1), and an estimate (Pc') can be obtained; in the image at time T2, the detected object whose position is closest to (Pc') is at (Pc2), so it can be associated and determined that the object at (Pc2) at time T2 is C. Then, at time T2, the speed and moving direction of C can be obtained from the difference between Pc1 and Pc2 and the difference between T2 and T1.
  • similarly, the position, speed, and moving direction of object B at time T2 can be obtained.
  • in this way, the distinction between object B and object C and the accurate state update are completed.
  • the position, speed, and direction involved in the foregoing method embodiments may be relative variables of the target object relative to the detecting device, and the time interval involved may also be a smaller time interval according to the requirement of accuracy.
  • the embodiment of the invention can accurately identify the target object from the image, effectively calculate the movement state and estimate the future movement state, and can complete the tracking and state estimation of the target object quickly and accurately, adding a new function and satisfying the user's requirements for automated, intelligent object tracking and movement state estimation.
  • FIG. 6 is a schematic structural diagram of a detecting apparatus according to an embodiment of the present invention.
  • the apparatus of the embodiment of the present invention may be disposed on an object such as a competitive robot.
  • the apparatus includes:
  • the identification module 1 is configured to identify a target object in the monitored area, and calculate first state information of the target object relative to the detecting device;
  • the processing module 2 is configured to estimate, according to the first state information, second state information after the target object has undergone a preset delay time value;
  • the control module 3 is configured to perform a processing operation on the target object according to the estimated second state information.
  • the identification module 1 can specifically call a module such as a camera to capture an image of the monitored area, and recognize, based on the preset color and/or contour shape, size, and the like of the target object, whether the target object is included in the area at the current time. If it is included, the calculation of the first state information is performed; if not, the camera angle continues to be changed or the detecting device is moved, and images are captured to find the target object.
  • This camera is a calibrated camera.
  • the identification module 1 can calculate the first state information of the target object with respect to the detecting device, and the first state information may specifically include: the target object is relative to the detecting device Information such as position, movement speed, direction, etc. The details can also be calculated based on the captured image including the target object.
  • the second state information estimated by the processing module 2 mainly includes a position and a direction of the target object relative to the detecting device.
  • the preset delay duration may be a preset time interval.
  • specifically, the preset delay duration is obtained by combining the estimation duration of the identification module 1, the duration required by devices such as the gimbal to adjust direction, and the loading and/or launching duration of the competitive projectile, wherein the estimation duration, the adjustment duration, and the loading and/or launching duration are duration values learned through a large amount of actual duration calculation and training.
  • the control module 3 can perform a processing operation on the target object according to actual application requirements. For example, in the competitive robot, the direction angle of a device such as the gimbal may be adjusted according to the relative direction information in the second state information so as to point at the position in the second state information, and the competitive projectile is fired to strike the target object.
  • the first state information includes: first location information, speed information, and moving direction information of the target object with respect to the detecting device;
  • the processing module 2 is configured to calculate a displacement of the target object after the preset delay time value according to the speed information; and estimate according to the first position information, the moving direction information, and the calculated displacement And second location information after the target object is moved; determining the second location information as second state information.
  • the identification module 1 may include:
  • the obtaining unit 11 is configured to acquire an image of the detected area
  • the identifying unit 12 is configured to analyze whether the target object having the specified color feature is included in the image
  • the determining unit 13 is configured to determine the identified target object as the target object when the recognition result of the identification unit is included.
  • the identifying unit 12 is specifically configured to perform color detection on the acquired image based on a preset color interval and determine an initial object region having the specified color feature in the image; quantize the acquired image into a binary map; perform connected region detection on the region corresponding to the initial object region in the binary map and determine the contours of the initial object regions; perform a merging operation on the determined contours based on a preset merge rule; and filter each region obtained after the merging according to the preset filtering shape or filtering size information and use the filtered regions as the target object.
  • the identifying unit 12, when performing connected region detection on the region corresponding to the initial object region in the binary map, is specifically configured to filter out noise points in the binary map, and perform connected region detection on the region corresponding to the initial object region in the binary map after the noise points are filtered out.
  • the identifying unit 12, when performing the merging operation on the determined contours based on the preset merge rule, is specifically configured to calculate the distance between two adjacent contours according to the determined edge position coordinates of each contour, determine the color similarity between the two adjacent contours, and merge two adjacent contours whose distance value and similarity meet the preset merge distance and similarity requirements.
  • the identifying unit 12, when performing the merging operation on the determined contours based on the preset merge rule, is specifically configured to detect whether the area between two adjacent connected regions conforms to a preset occluding object feature, and, if so, determine that the two adjacent connected regions satisfy the preset merge rule and merge the two adjacent contours.
  • the identification module 1 may further include:
  • a state calculating unit 14 configured to calculate, according to the monitored image, movement state information of the target object relative to the detecting device
  • a distance calculating unit 15 configured to calculate a distance value of the target object to the detecting device
  • the state processing unit 16 is configured to, if the distance value is greater than the preset distance threshold, calculate the movement state information of the target object relative to the detecting device again according to the new monitored image, determine again whether the distance value from the target object to the detecting device is less than the preset distance threshold, and repeat this step until the movement state information at a distance not greater than the preset distance threshold is determined;
  • the state determining unit 17 is configured to determine the moving state information as the first state information if it is not greater than a preset distance threshold.
  • the state calculation unit 14 or the state processing unit 16, when calculating the movement state information of the target object relative to the detecting device, is specifically configured to determine the pixel coordinates of the target object from the image acquired at the current time and perform coordinate mapping conversion to obtain initial position information of the target object in a coordinate system centered on the detecting device; determine the pixel coordinates of the target object from an image acquired after a preset time interval and perform coordinate mapping conversion to obtain moving position information of the target object in the coordinate system centered on the detecting device; determine speed information and direction information of the target object according to the initial position information, the moving position information, and the preset time interval; and use the determined speed information, direction information, and moving position information as the movement state information of the target object.
  • the detecting device may further include:
  • the distinguishing module is configured to perform position estimation on each object to be monitored based on the moving state information of each object, and associate each object according to the obtained position estimation value and the actual position of each object to be monitored in the image.
  • the control module 3 is specifically configured to adjust the rotation parameters of the gimbal according to the second position information in the estimated second state information and the direction relative to the detecting device, so that the load mounted on the gimbal aims at the target object.
  • for the specific implementation of each module and unit in the detecting device according to the embodiment of the present invention, reference may be made to the description of the related steps in the corresponding embodiments of FIG. 1 to FIG. 5.
  • the embodiment of the invention can realize recognition of the target object and estimation of states such as its moving position, can complete tracking and state estimation of the target object quickly and accurately, adds a new function, and satisfies the user's requirements for automated, intelligent object tracking and movement state estimation.
  • FIG. 8 is a schematic structural diagram of a robot according to an embodiment of the present invention.
  • the robot according to the embodiment of the present invention includes an existing machine structure, such as a robot casing, a power system, various sensors, controllers, and the like.
  • the robot further includes: an image collection device 100 and a processor 200, wherein:
  • the image capture device 100 is configured to capture an image of a detected area
  • the processor 200 is configured to identify a target object in the monitored area according to the image captured by the image capturing device 100, and calculate first state information of the target object relative to the detecting device; according to the first state And estimating second state information after the target object has undergone a preset delay time value; performing a processing operation on the target object according to the estimated second state information.
  • the robot further includes a pan-tilt device, which may be a two-axis pan/tilt or a multi-axis pan/tilt, and can perform processing operations such as competitive striking on the target object by adjusting the angle and orientation.
  • when the processor 200 is specifically implemented, the corresponding application may be invoked to execute the steps in the foregoing embodiments of FIG. 1 to FIG. 5.
  • the embodiment of the invention can realize recognition of the target object and estimation of states such as its moving position, can complete tracking and state estimation of the target object quickly and accurately, adds a new function, and satisfies the user's requirements for automated, intelligent object tracking and movement state estimation.
  • the related apparatus and method disclosed may be implemented in other manners.
  • the device embodiments described above are merely illustrative. The division of the modules or units is only a logical functional division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium.
  • the technical solution of the present invention which is essential or contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer processor to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Automation & Control Theory (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Molecular Biology (AREA)
  • Human Computer Interaction (AREA)
  • Manufacturing & Machinery (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of the present invention provide a method for detecting a target object, a detection device, and a robot, wherein the method includes: identifying a target object in a monitored area, and calculating first state information of the target object relative to the detection device; estimating, according to the first state information, second state information of the target object after a preset delay duration has elapsed; and performing a processing operation on the target object according to the estimated second state information. With the present invention, tracking and state estimation of a target object can be completed quickly and accurately, a new function is added, and the user's requirements for automated, intelligent object tracking and movement state estimation are satisfied.

Description

Method for detecting a target object, detection device, and robot
Technical Field
The present invention relates to the field of computer control technologies, and in particular, to a method for detecting a target object, a detection device, and a robot.
Background Art
At present, the monitoring, identification, and tracking of targets are generally realized through video image tracking; target tracking technology based on video images combines technologies from the fields of vision, pattern recognition, and artificial intelligence. The main applications of this technology include vehicle tracking and identification, security monitoring, and artificial intelligence robots.
Video-image-based tracking in the prior art can, in general, determine a certain target object from an image fairly accurately. However, in some cases, merely determining the target object from the video image cannot satisfy intelligence requirements; for example, in the field of artificial intelligence robots, and especially in the field of intelligent robot competition, the existing target tracking technology needs to be further improved.
Summary of the Invention
Embodiments of the present invention provide a method for detecting a target object, a detection device, and a robot, which can satisfy the user's requirements for automated, intelligent object tracking and movement state estimation.
An embodiment of the present invention provides a method for detecting a target object, including:
identifying, by a detection device, a target object in a monitored area, and calculating first state information of the target object relative to the detection device;
estimating, according to the first state information, second state information of the target object after a preset delay duration has elapsed; and
performing a processing operation on the target object according to the estimated second state information.
Optionally, the first state information includes: first position information, speed information, and moving direction information of the target object relative to the detection device;
the estimating, according to the first state information, second state information of the target object after the preset delay duration has elapsed includes:
calculating, by the detection device according to the speed information, a displacement of the target object after the preset delay duration;
estimating second position information of the target object after moving, according to the first position information, the moving direction information, and the calculated displacement; and
determining the second position information as the second state information.
其中可选地,所述识别被监测区域内的目标物体,包括:
所述检测装置获取被检测区域的图像;
分析识别所述图像中是否包含具有指定颜色特征的目标对象;
若包含,则将识别出的目标对象确定为目标物体。
其中可选地,所述分析识别所述图像中是否包含具有指定颜色特征的目标对象,包括:
基于预置的颜色区间预置对所述获取的图像进行颜色检测,确定出所述图像中的具有指定颜色特征的初始对象区域;
将获取的图像量化为二值图;
对二值图中所述初始对象区域所对应的区域进行连通区域检测,确定出初始对象区域的外形轮廓;
基于预置的合并规则对确定出的各个外形轮廓进行合并操作;
根据预置的过滤形状或过滤尺寸信息,对合并后得到的各个区域进行过滤,将过滤后的区域作为目标对象。
其中可选地,所述对二值图中所述初始对象区域所对应的区域进行连通区域检测,包括:
滤除所述二值图中的噪声点;
对滤除噪声点后的二值图中所述初始对象区域所对应的区域进行连通区域检测。
其中可选地,所述基于预置的合并规则对确定出的各个外形轮廓进行合并操作,包括:
根据确定出的各个外形轮廓的边缘位置坐标,计算相邻的两个外形轮廓之间的距离;
确定相邻的两个外形轮廓之间的颜色相似度;
将距离值、相似度满足预设的合并距离以及相似度要求的两个相邻外形轮廓合并。
其中可选地,所述所述基于预置的合并规则对确定出的各个外形轮廓进行合并操作,包括:
检测相邻两个连通区域之间的区域是否符合预置的遮挡物对象特征;
若是,则所述相邻的两个连通区域满足预置的合并规则,将该相邻的两个外形轮廓合并。
其中可选地,所述计算所述目标物体相对于本检测装置的第一状态信息,包括:
根据监测到的图像计算所述目标物体相对于本检测装置的移动状态信息;
计算所述目标物体到本检测装置的距离值;
如果大于预设的距离阈值,则再次根据监测到的新的图像计算所述目标物体相对于本检测装置的移动状态信息,并再次判断所述目标物体到本检测装置的距离值是否小于预设的距离阈值,重复执行本步骤,直至确定出不大于预设的距离阈值时的移动状态信息;
如果不大于预设的距离阈值,则将所述移动状态信息确定为第一状态信息。
其中可选地,计算所述目标物体相对于本检测装置的移动状态信息,包括:
从当前时刻获取的图像中确定出目标物体的像素坐标,进行坐标映射转换,得到所述目标物体在以所述检测装置为中心的坐标系中的初始位置信息;
从预置时间间隔后的时刻获取的图像中确定出目标物体的像素坐标,进行坐标映射转换,得到所述目标物体以所述检测装置为中心的坐标系中的移动位置信息;
根据初始位置信息、移动位置信息以及预置时间间隔，确定出所述目标物体的速度信息以及方向信息；
将确定出的速度信息、方向信息以及所述移动位置信息作为所述目标物体的移动状态信息。
其中可选地,所述方法还包括:
所述检测装置基于每一个物体的移动状态信息对各个待监测的物体进行位置预估,根据得到的位置预估值和图像中各个待监测的物体的实际位置来关联区分每一个物体。
其中可选地,所述根据估算的第二状态信息执行对所述目标物体的处理操作,包括:
根据估算出的第二状态信息中的第二位置信息和相对于检测装置的方向,调整云台的转动参数,以使云台挂载的负载瞄准所述目标物体。
相应地,本发明实施例还提供了一种检测装置,包括:
识别模块,用于识别被监测区域内的目标物体,并计算所述目标物体相对于本检测装置的第一状态信息;
处理模块,用于根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息;
控制模块,用于根据估算的第二状态信息执行对所述目标物体的处理操作。
其中可选地,所述第一状态信息包括:所述目标物体相对于本检测装置的第一位置信息、速度信息以及移动方向信息;
所述处理模块,具体用于根据所述速度信息,计算预置的延迟时长值后所述目标物体的位移;根据所述第一位置信息、移动方向信息以及所述计算出的位移,估算所述目标物体移动后的第二位置信息;将所述第二位置信息确定为第二状态信息。
其中可选地,所述识别模块包括:
获取单元,用于获取被检测区域的图像;
识别单元,用于分析识别所述图像中是否包含具有指定颜色特征的目标对象;
确定单元,用于在所述识别单元的识别结果为包含时,则将识别出的目标对象确定为目标物体。
其中可选地，所述识别单元，具体用于基于预置的颜色区间对所述获取的图像进行颜色检测，确定出所述图像中的具有指定颜色特征的初始对象区域；将获取的图像量化为二值图；对二值图中所述初始对象区域所对应的区域进行连通区域检测，确定出初始对象区域的外形轮廓；基于预置的合并规则对确定出的各个外形轮廓进行合并操作；根据预置的过滤形状或过滤尺寸信息，对合并后得到的各个区域进行过滤，将过滤后的区域作为目标对象。
其中可选地,所述识别单元,在用于对二值图中所述初始对象区域所对应的区域进行连通区域检测时,具体用于滤除所述二值图中的噪声点;对滤除噪声点后的二值图中所述初始对象区域所对应的区域进行连通区域检测。
其中可选地,所述识别单元,在用于基于预置的合并规则对确定出的各个外形轮廓进行合并操作时,具体用于根据确定出的各个外形轮廓的边缘位置坐标,计算相邻的两个外形轮廓之间的距离;确定相邻的两个外形轮廓之间的颜色相似度;将距离值、相似度满足预设的合并距离以及相似度要求的两个相邻外形轮廓合并。
其中可选地,所述识别单元,在用于基于预置的合并规则对确定出的各个外形轮廓进行合并操作时,具体用于检测相邻两个连通区域之间的区域是否符合预置的遮挡物对象特征;若是,则所述相邻的两个连通区域满足预置的合并规则,将该相邻的两个外形轮廓合并。
其中可选地,所述识别模块还包括:
状态计算单元,用于根据监测到的图像计算所述目标物体相对于本检测装置的移动状态信息;
距离计算单元,用于计算所述目标物体到本检测装置的距离值;
状态处理单元,用于如果大于预设的距离阈值,则再次根据监测到的新的图像计算所述目标物体相对于本检测装置的移动状态信息,并再次判断所述目标物体到本检测装置的距离值是否小于预设的距离阈值,重复执行本步骤,直至确定出不大于预设的距离阈值时的移动状态信息;
状态确定单元,用于如果不大于预设的距离阈值,则将所述移动状态信息确定为第一状态信息。
其中可选地，所述状态计算单元或者所述状态处理单元，在用于计算所述目标物体相对于本检测装置的移动状态信息时，具体用于从当前时刻获取的图像中确定出目标物体的像素坐标，进行坐标映射转换，得到所述目标物体在以所述检测装置为中心的坐标系中的初始位置信息；从预置时间间隔后的时刻获取的图像中确定出目标物体的像素坐标，进行坐标映射转换，得到所述目标物体在以所述检测装置为中心的坐标系中的移动位置信息；根据初始位置信息、移动位置信息以及预置时间间隔，确定出所述目标物体的速度信息以及方向信息；将确定出的速度信息、方向信息以及所述移动位置信息作为所述目标物体的移动状态信息。
其中可选地,所述装置还包括:
区分模块,用于基于每一个物体的移动状态信息对各个待监测的物体进行位置预估,根据得到的位置预估值和图像中各个待监测的物体的实际位置来关联区分每一个物体。
其中可选地,所述控制模块,具体用于根据估算出的第二状态信息中的第二位置信息和相对于检测装置的方向,调整云台的转动参数,以使云台挂载的负载瞄准所述目标物体。
相应地,本发明实施例还提供了一种机器人,包括:图像采集装置和处理器,其中:
所述图像采集装置,用于拍摄被检测区域的图像;
所述处理器,用于根据所述图像采集装置拍摄的图像识别被监测区域内的目标物体,并计算所述目标物体相对于本检测装置的第一状态信息;根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息;根据估算的第二状态信息执行对所述目标物体的处理操作。
本发明实施例能够实现对目标物体的识别以及移动位置等状态的预估,能够快捷、准确地完成对目标物体的跟踪以及状态预估,增加了新的功能,满足了用户关于物体追踪与移动状态预估的自动化、智能化需求。
附图说明
图1是本发明实施例的一种对目标物体的检测方法的流程示意图;
图2是本发明实施例的另一种对目标物体的检测方法的流程示意图;
图3是本发明实施例的识别具有特定颜色特征的目标对象的方法流程示意图;
图4是本发明实施例的移动状态计算方法的流程示意图;
图5是目标物体的状态估计示意图;
图6是本发明实施例的一种检测装置的结构示意图;
图7是图6中的识别模块的其中一种结构示意图;
图8是本发明实施例的一种机器人的结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明实施例能够基于目标物体当前的移动状态,预估该目标物体下一时刻(某个时间间隔后的时刻)的移动状态,实现目标物体跟踪、预判功能。具体可以通过图像中的像素坐标转换为实际位置坐标,然后基于时间间隔、在该时间间隔的实际位置坐标的位移,来计算运动速度和方向,确定出包括位置、运动速度以及方向的移动状态。
具体请参见图1,是本发明实施例的一种对目标物体的检测方法的流程示意图,本发明实施例的所述方法可以应用在物体追踪装置中,由检测装置来实现,例如竞技机器人等装置实现,具体的,所述方法包括:
S101:检测装置识别被监测区域内的目标物体,并计算所述目标物体相对于本检测装置的第一状态信息。
可以调用相机等模块来拍摄被监测区域的图像,基于预置的关于目标物体的颜色和/或轮廓形状、尺寸等来识别出在当前时刻下,该区域中是否包括目标物体。如果包括,则进行第一状态信息的计算,如果没有,则继续转换相机角度或移动检测装置,拍摄图像寻找目标物体。该相机为经过标定后的相机。
在监测到目标物体后,即可计算所述目标物体相对于本检测装置的第一状态信息,所述的第一状态信息具体可以包括:该目标物体相对于本检测装置的位置、移动速度、方向等信息。具体同样可以基于拍摄到的包括目标物体的图像来计算。
在具体实施时,目标物体相对于检测装置的位置的计算可以为:确定图像中所述目标物体所在区域的像素位置坐标;将像素坐标映射到以检测装置为中心的坐标系中,得到所述目标物体在以检测装置为中心的坐标系中的位置坐标,该位置坐标即为所述目标物体相对于本检测装置的位置信息,需要说明的是,所述的像素位置坐标和相对于检测装置的位置坐标可以仅仅是所述目标物体的几何中心点的位置坐标,而以检测装置为中心的坐标系则可以是指以所述检测装置的相机的中心点为坐标系的原点。
在得到的相对位置后,可以计算下一时刻(例如1秒后的时刻),所述目标物体的新的相对位置,基于该新的相对位置和上一时刻的相对位置能够得到目标物体的位移和相对移动方向,根据位移和时间间隔,则可以计算到此次的移动速度。
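For illustration only, the following Python sketch shows one way to turn two consecutive device-relative positions and the known time interval between them into a displacement, a speed and a movement direction; the function name and the 2-D (x, y) convention are assumptions, not something prescribed by this description.

```python
import math

def motion_state(p1, p2, dt):
    """Estimate displacement, speed and heading from two device-relative positions.

    p1, p2 -- (x, y) positions of the target in the detector-centred frame,
              sampled dt seconds apart.
    Returns (speed, heading, displacement); heading is the movement direction
    in radians measured in that same frame.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    displacement = math.hypot(dx, dy)          # distance moved over dt
    speed = displacement / dt
    heading = math.atan2(dy, dx)               # direction of motion
    return speed, heading, displacement
```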
S102:根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息。
具体的，估算出的第二状态信息主要包括所述目标物体相对于本检测装置的位置和方向。所述预置的延迟时长可以是预置的一个时间间隔，例如，在机器人竞技中，该预置的延迟时长是根据所述S102中的估算时长、云台等用于方向调整的装置的调整时长以及竞技炮弹的装填和/或发射时长来综合判断的，其中，所述的估算时长、调整时长以及装填和/或发射时长是通过大量的实际时长计算训练学习出的时长值。
S103:根据估算的第二状态信息执行对所述目标物体的处理操作。
在得到所述第二状态信息后,即可根据实际的应用需要,执行对所述目标物体的处理操作。例如,在竞技机器人中,可根据第二状态信息中的相对方向信息,调整云台等装置的方向角度,以指向瞄准所述第二状态信息中的位置,并发射竞技炮弹以打击所述目标物体。
本发明实施例能够实现对目标物体的识别以及移动位置等状态的预估,能够快捷、准确地完成对目标物体的跟踪以及状态预估,增加了新的功能,满足了用户关于物体追踪与移动状态预估的自动化、智能化需求。
再请参见图2,是本发明实施例的另一种对目标物体的检测方法的流程示意图,本发明实施例的所述方法可以应用在物体追踪装置中,由检测装置来实现,例如竞技机器人等装置实现,具体的,所述方法包括:
S201:所述检测装置获取被检测区域的图像。
具体通过调用在本检测装置中配置的相机来拍摄获取当前环境下图像。
S202:分析识别所述图像中是否包含具有指定颜色特征的目标对象。
基于颜色识别策略对图像进行分析识别,可以简单、快捷地识别出图像中的目标对象。可以预先大量采集目标物体的图片数据,并进行训练,得到该目标物体可能存在的颜色区间,后续再以此颜色区间为阈值对在所述S201中获取到的图像进行颜色检测,找到疑似为目标物体的区域,即找到具有指定颜色特征的目标区域。
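As a hedged sketch of the offline training step mentioned above, the snippet below estimates a colour interval for the target from labelled sample images by taking per-channel percentiles in HSV space; the helper name, the choice of HSV and the percentile bounds are illustrative assumptions and not a method prescribed by the description.

```python
import cv2
import numpy as np

def learn_color_interval(sample_images_bgr, masks, low_pct=2, high_pct=98):
    """Learn an HSV interval for the target from labelled sample images.

    sample_images_bgr -- list of training images containing the target
    masks             -- matching binary masks marking target pixels
    Returns (lower_hsv, upper_hsv) usable as thresholds for colour detection.
    """
    pixels = []
    for img, m in zip(sample_images_bgr, masks):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        pixels.append(hsv[m > 0])              # keep only labelled target pixels
    pixels = np.concatenate(pixels, axis=0)
    lower = np.percentile(pixels, low_pct, axis=0).astype(np.uint8)
    upper = np.percentile(pixels, high_pct, axis=0).astype(np.uint8)
    return lower, upper
```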
在所述S202中,对所述在S201中所获取图像的分析识别可参考图3对应实施例中的描述。如果找到疑似为目标物体的区域,则执行下述的S203,否则,重新执行S201和S202,直至在对应的图像中找到疑似为目标物体的区域。
S203:若包含,则将识别出的目标对象确定为目标物体。
S204:计算所述目标物体相对于本检测装置的第一状态信息。
具体的,所述第一状态信息包括所述目标物体相对于本检测装置的第一位置信息、速度信息以及移动方向信息。其中的位置信息可以通过图像像素坐标到以检测装置的相机为中心的实际坐标的映射转换来得到,而速度和移动方向则可以根据一定时间间隔的位置差来计算得到。
对于图像中目标物体的实际位置的获取,具体可以为:首先,上述提到的对相机进行标定是指:建立图像坐标系与实际空间坐标系之间的变换关系,以确定图像中的一点在实际空间中相对于检测装置(相机)的位置。具体的标定方法是在某个确定的场景中放置一个独特的标定物,并测量其相对于相机的位置,通过采集大量的图像数据,来计算两个坐标系之间的变换关系。在确定了坐标系之间的变换关系后,通过图像坐标即可确定目标物体的实际空间坐标。
其次，经过标定之后，可以将图像坐标系中的一点根据计算出的变换关系(旋转与平移等)，计算在实际空间中的位置。需要说明的是，依靠单目相机是无法准确还原出图像坐标中某点在实际空间中的位置的，在本发明实施例中，在使用单目相机时具体可以忽略其高度信息，因而可得到一个相对准确的变换关系。
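A minimal sketch of this calibrated mapping, assuming the image-to-space relation has been reduced offline to a single 3x3 homography H from image pixels to the ground plane of the detector-centred frame (height ignored, as suggested above for a monocular camera); OpenCV is used here only as an example, and H is a quantity this sketch assumes was estimated beforehand (for instance with cv2.findHomography on measured reference points).

```python
import numpy as np
import cv2

def pixel_to_device_frame(u, v, H):
    """Map one pixel (u, v) to (x, y) on the ground plane of the detector frame."""
    pt = np.array([[[float(u), float(v)]]], dtype=np.float32)  # shape (1, 1, 2)
    x, y = cv2.perspectiveTransform(pt, H)[0, 0]
    return float(x), float(y)
```

In practice such a homography would only stay valid while the camera mounting is unchanged, so it would be re-estimated after any change to the setup.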
其中,计算所述第一状态信息可参考图4对应实施例中关于移动状态信息获取的描述。
S205:根据所述速度信息,计算预置的延迟时长值后所述目标物体的位移。
S206:根据所述第一位置信息、移动方向信息以及所述计算出的位移,估算所述目标物体移动后的第二位置信息。
S207:将所述第二位置信息确定为第二状态信息。
根据速度和时长的乘积可得到位移,而将乘积结合运动方向即可综合确定在此延迟时长值后所述目标物体在以本检测装置的相机为中心的坐标系中的确切位置。
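A possible implementation of S205 to S207, assuming the target keeps a constant speed and heading over the preset delay; the function name and units are illustrative only.

```python
import math

def predict_position(p1, speed, heading, delay):
    """Extrapolate the device-relative target position after the preset delay.

    p1      -- current position (x, y) in the detector-centred frame
    speed   -- estimated speed (distance units per second)
    heading -- movement direction in radians
    delay   -- preset delay duration in seconds
    """
    shift = speed * delay                      # displacement = speed x delay
    return (p1[0] + shift * math.cos(heading),
            p1[1] + shift * math.sin(heading))
```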
在执行所述S205之前，还可以先判断所述目标物体距离本检测装置的距离，该距离值同样可以通过以本检测装置的相机为中心的坐标系来计算。在距离本检测装置较近(一定距离阈值内)时，执行所述S205，否则，继续执行所述S201至S204。
S208:根据估算的第二状态信息执行对所述目标物体的处理操作。
其中,在本发明实施例中,所述S208具体可以包括:根据估算出的第二状态信息中的第二位置信息和相对于检测装置的方向,调整云台的转动参数,以使云台挂载的负载瞄准所述目标物体。
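One hedged way to derive gimbal set-points from the predicted second position is sketched below; the axis convention (x forward, y to the left), the mount-height parameter and the function name are assumptions made for illustration, not values given in this description.

```python
import math

def gimbal_angles(target_xy, mount_height=0.4):
    """Rough pan/tilt set-points to point a gimbal-mounted payload at a ground target.

    target_xy    -- predicted (x, y) position in the detector-centred frame
    mount_height -- assumed height of the gimbal above the target plane (metres)
    Returns (yaw, pitch) in radians to hand to the gimbal controller.
    """
    x, y = target_xy
    yaw = math.atan2(y, x)                           # horizontal bearing to the target
    ground_range = math.hypot(x, y)
    pitch = -math.atan2(mount_height, ground_range)  # look down towards the target
    return yaw, pitch
```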
具体请参见图3,是本发明实施例的识别具有特定颜色特征的目标对象的方法流程示意图,所述方法对应于上述的S202,具体包括:
S301：基于预置的颜色区间对所述获取的图像进行颜色检测，确定出所述图像中的具有指定颜色特征的初始对象区域；
S302:将获取的图像量化为二值图;
可以采集大量目标物体的图片数据,并进行训练,得到该目标物体的颜色区间,在所述S301中以该颜色区间对获取到的图像进行颜色检测,找到该图像中疑似目标物体的区域,并量化成二值图。
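A hedged sketch of S301/S302: threshold the image against the trained colour interval and quantise the result into a binary map; OpenCV names are used only as an example, and the bounds are assumed to come from an offline training step such as the one sketched earlier.

```python
import cv2
import numpy as np

def color_mask(image_bgr, lower_hsv, upper_hsv):
    """Quantise the image into a binary map of the trained colour interval.

    lower_hsv / upper_hsv -- bounds of the colour interval learned offline.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = np.array(lower_hsv, dtype=np.uint8)
    upper = np.array(upper_hsv, dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)      # 255 inside the interval, 0 outside
```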
S303:对二值图中所述初始对象区域所对应的区域进行连通区域检测,确定出初始对象区域的外形轮廓;
其中具体的,所述对二值图中所述初始对象区域所对应的区域进行连通区域检测,包括:滤除所述二值图中的噪声点;对滤除噪声点后的二值图中所述初始对象区域所对应的区域进行连通区域检测。
可以对得到的二值图像进行开运算处理,滤除颜色检测结果中的噪声点。具体的由于本系统是通过颜色识别的方法来检测目标物体,而环境中可能存在与目标物体的颜色一致或相似的其他物体,或者因为光照的原因使图像中产生对应的颜色区域。在通过颜色信息过滤时,有可能会保留不是目标的颜色区域,因此要予以滤除。在噪声的滤除过程中,可以通过分析实际数据,发现噪声区域的大小和形状等与目标车辆有很大的区别,因此,可以通过大小限制和形状限制(长宽比,周长面积比等)对噪声点予以滤除。
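The noise filtering and connected-region detection described above can be sketched roughly as follows: an opening removes speckle noise, connected components are then extracted, and obviously implausible blobs are discarded. The minimum-area bound is an illustrative placeholder; real limits on size, aspect ratio or perimeter/area ratio would be tuned from data as the description suggests.

```python
import cv2
import numpy as np

def candidate_regions(mask, min_area=30):
    """Open the binary map to drop speckle noise, then list connected regions."""
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(opened)
    regions = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:                   # discard leftover noise blobs
            regions.append((int(x), int(y), int(w), int(h)))
    return regions
```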
S304:基于预置的合并规则对确定出的各个外形轮廓进行合并操作;
其中,所述S304具体可以包括:根据确定出的各个外形轮廓的边缘位置坐标,计算相邻的两个外形轮廓之间的距离;确定相邻的两个外形轮廓之间的颜色相似度;将距离值、相似度满足预设的合并距离以及相似度要求的两个相邻外形轮廓合并。
对图像的疑似区域进行连通区域检测,计算出每个疑似区域的外部轮廓,可以用矩形近似表示。根据连通区域之间的距离和颜色相似性,将距离接近且相似性高的连通区域进行合并。
或者,所述S304具体可以包括:检测相邻两个连通区域之间的区域是否符合预置的遮挡物对象特征;若是,则所述相邻的两个连通区域满足预置的合并规则,将该相邻的两个外形轮廓合并。
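As a rough illustration of the first of the two merge rules described above (edge distance plus colour similarity), the sketch below decides whether two neighbouring contour rectangles should be merged; the thresholds and the Euclidean distance on mean colours are assumptions, not values fixed by the description.

```python
import numpy as np

def should_merge(rect_a, rect_b, mean_color_a, mean_color_b,
                 max_gap=20, max_color_diff=30):
    """Decide whether two neighbouring contour rectangles belong to one object.

    rect_*       -- (x, y, w, h) bounding rectangles of the contours
    mean_color_* -- average colour inside each rectangle (e.g. mean HSV)
    """
    ax, ay, aw, ah = rect_a
    bx, by, bw, bh = rect_b
    # gap between rectangle edges along each axis (0 if they overlap)
    gap_x = max(0, max(ax, bx) - min(ax + aw, bx + bw))
    gap_y = max(0, max(ay, by) - min(ay + ah, by + bh))
    gap = max(gap_x, gap_y)
    color_diff = float(np.linalg.norm(np.asarray(mean_color_a, dtype=float) -
                                      np.asarray(mean_color_b, dtype=float)))
    return gap <= max_gap and color_diff <= max_color_diff
```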
S305:根据预置的过滤形状或过滤尺寸信息,对合并后得到的各个区域进行过滤,将过滤后的区域作为目标对象。
进行外形轮廓合并操作后得到的外形可能仅仅是一些环境中柱子等物体,需要将这些物体过滤。具体可以预先对大量的目标物体的图像进行训练,得到目标物体的形状(例如长宽高比)和/或尺寸信息(面积),并以此为依据将不符合的区域过滤掉。
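A hedged sketch of this final filtering step, assuming the target's shape and size have been summarised offline as an aspect-ratio range and an area range; the concrete bounds below are placeholders, not trained values.

```python
def filter_by_shape(regions, aspect_range=(0.5, 3.0), area_range=(400, 40000)):
    """Keep only merged regions whose shape and size match the trained target."""
    kept = []
    for (x, y, w, h) in regions:
        aspect = w / float(h) if h else 0.0
        area = w * h
        if aspect_range[0] <= aspect <= aspect_range[1] \
                and area_range[0] <= area <= area_range[1]:
            kept.append((x, y, w, h))          # plausible target region
    return kept
```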
经过上述的S301至S305的步骤,基本可以确定出图像中关于目标物体的图形对象,进而执行对该目标物体的追踪以及位置、速度以及移动方向的检测。
再请参见图4,是本发明实施例的移动状态计算方法的流程示意图,本发明实施例的所述方法具体包括:
S401:根据监测到的图像计算所述目标物体相对于本检测装置的移动状态信息;
S402:计算所述目标物体到本检测装置的距离值;
S403:如果大于预设的距离阈值,则再次根据新的图像计算所述目标物体相对于本检测装置的移动状态信息,并再次判断所述目标物体到本检测装置的距离值是否小于预设的距离阈值,重复执行本步骤,直至确定出不大于预设的距离阈值时的移动状态信息;其中,每一次在判断距离后获取新的图像可以在经过一定的时间间隔后才执行。
S404:如果不大于预设的距离阈值,则将所述移动状态信息确定为第一状态信息。
在所述目标物体距离本检测装置较远时,可能无法对其执行对应的处理操作,例如在竞技机器人中无法进行瞄准打击。因此,需要在距离较近时,才将对应的移动状态信息确定为第一状态信息,以供后续确定第二状态信息。而在目标物体(或本检测装置)移动过程中,所述S403持续监测到的移动状态信息需要一直进行更新,直至目标物体到本检测装置的距离不大于预设的距离阈值。
其中,为了使本检测装置尽快接近所述目标物体,可以根据检测到的所述目标物体的移动状态信息(位置、方向等),生成控制信号控制本端的动力组件,使本检测装置向所述目标物体移动。
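Purely as an illustration of generating a control signal from the detected movement state, the sketch below produces a crude forward-speed and turn-rate command that drives the chassis towards the target until it is within a preset distance; the gains and the stopping distance are made-up parameters.

```python
import math

def approach_command(target_xy, stop_distance=1.0, k_v=0.8, k_w=1.5):
    """Very rough velocity command to drive the chassis towards the target.

    target_xy -- target position in the detector-centred frame (x forward, y left)
    Returns (forward_speed, turn_rate); both are zero once within stop_distance.
    """
    x, y = target_xy
    distance = math.hypot(x, y)
    if distance <= stop_distance:
        return 0.0, 0.0
    bearing = math.atan2(y, x)                 # how far off-axis the target is
    return k_v * (distance - stop_distance), k_w * bearing
```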
其中,上述计算所述目标物体相对于本检测装置的移动状态信息的步骤具体可以包括:
从当前时刻获取的图像中确定出目标物体的像素坐标,进行坐标映射转换,得到所述目标物体在以所述检测装置为中心的坐标系中的初始位置信息;
从预置时间间隔后的时刻获取的图像中确定出目标物体的像素坐标,进行坐标映射转换,得到所述目标物体以所述检测装置为中心的坐标系中的移动位置信息;
根据初始位置信息、移动位置信息以及预置时间间隔，确定出所述目标物体的速度信息以及方向信息；
将确定出的速度信息、方向信息以及所述移动位置信息作为所述目标物体的移动状态信息。
其中,所述检测装置基于每一个物体的移动状态信息对各个待监测的物体进行位置预估,根据得到的位置预估值和图像中各个待监测的物体的实际位置进行关联来确定每一个物体。也就是说,检测装置可以根据上一时刻目标物体的移动状态(位置、速度、方向),估计当前时刻目标物体的移动状态(新的位置、速度、方向),然后根据位置和颜色信息,对当前时刻目标物体的估计状态和检测结果进行关联,为对应的目标物体更新新的移动状态。
具体以图5的示意图为例来进行说明。对于挂载有本监测装置的物体A,可以对视觉范围内的目标物体B和目标物体C进行监控。在T1时刻,目标物体B和目标物体C距离本监测装置较远,尚不满足发射条件,此时根据前一时刻信息,可以计算出B和C的位置P、速度V和方位O;T2时刻,通过检测手段确定当前时刻B和C位置的测量值以及前后时刻的时间差,根据T1时刻B和C的状态及时间差,可以估计T2时刻的P、V以及O,并根据测量值和估计值之间的位置关系进行关联,来确定一一对应关系。在T2时刻,目标B与A的距离已经满足发射条件,所以根据该时刻B的状态(位置和速度等)预测B在T3(延迟时长)时刻的状态估计值,得到(Pb’、Ob’),将Pb’作为第二状态,根据该坐标Pb’(x、y),采用惯性测量单元,通过卡尔曼滤波器估计俯仰角度,实现位置环和速度环的控制,偏转轴则采用高稳定性、高精度单轴陀螺仪模块实现位置环和速度环的控制,同时实现底盘的随动。最终完成对Pb’(x、y)的瞄准。
需要指出的是，以图5为例，在上述方法项实施例中提到的基于预估值和实际位置进行关联区分是指：在T1时刻，C的状态为(Pc1、Vc1、Oc1)，预估后可以得到(Pc’)，而在T2时刻的图像中，检测到距离(Pc’)最近的物体的位置为(Pc2)，因此，可以关联确定出T2时刻在(Pc2)处的物体为C，那么在T2时刻，C的状态即可根据Pc1和Pc2的差值、T2与T1的差值，得到对应的速度和移动方向。同理可得到T2时刻，物体B的位置、速度以及移动方向。完成物体B和物体C的区分确定以及准确的状态更新。
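The association just described can be sketched as a greedy nearest-neighbour match between predicted and measured positions (the colour check is omitted for brevity); the data structures, the function name and the distance gate are illustrative assumptions.

```python
import math

def associate(predictions, detections, max_dist=1.0):
    """Greedy nearest-neighbour association of predicted and detected positions.

    predictions -- {track_id: (x, y)} positions extrapolated from the last state
    detections  -- list of (x, y) positions measured in the current image
    Returns {track_id: detection_index} for pairs closer than max_dist.
    """
    matches, used = {}, set()
    for track_id, (px, py) in predictions.items():
        best, best_d = None, max_dist
        for i, (dx, dy) in enumerate(detections):
            d = math.hypot(dx - px, dy - py)
            if i not in used and d < best_d:   # closest unassigned detection
                best, best_d = i, d
        if best is not None:
            matches[track_id] = best
            used.add(best)
    return matches
```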
其中,上述方法实施例中所涉及到的位置、速度以及方向可以是目标物体相对于所述检测装置的相对变量,其中所涉及的时间间隔也可以根据精度的需要为一个较小的时间间隔。
本发明实施例能够较为准确地从图像中识别出目标物体,并进行有效的移动状态和预估移动状态的计算,能够快捷、准确地完成对目标物体的跟踪以及状态预估,增加了新的功能,满足了用户关于物体追踪与移动状态预估的自动化、智能化需求。
下面对本发明实施例的检测装置及机器人进行详细描述。
请参见图6,是本发明实施例的一种检测装置的结构示意图,本发明实施例的所述装置可设置在竞技机器人等物体上,具体的,所述装置包括:
识别模块1,用于识别被监测区域内的目标物体,并计算所述目标物体相对于本检测装置的第一状态信息;
处理模块2,用于根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息;
控制模块3,用于根据估算的第二状态信息执行对所述目标物体的处理操作。
所述识别模块1具体可以调用相机等模块来拍摄被监测区域的图像,基于预置的关于目标物体的颜色和/或轮廓形状、尺寸等来识别出在当前时刻下,该区域中是否包括目标物体。如果包括,则进行第一状态信息的计算,如果没有,则继续转换相机角度或移动检测装置,拍摄图像寻找目标物体。该相机为经过标定后的相机。
在监测到目标物体后,所述识别模块1即可计算所述目标物体相对于本检测装置的第一状态信息,所述的第一状态信息具体可以包括:该目标物体相对于本检测装置的位置、移动速度、方向等信息。具体同样可以基于拍摄到的包括目标物体的图像来计算。
具体的，所述处理模块2估算出的第二状态信息主要包括所述目标物体相对于本检测装置的位置和方向。所述预置的延迟时长可以是预置的一个时间间隔，例如，在机器人竞技中，该预置的延迟时长是根据所述识别模块1中的估算时长、云台等用于方向调整的装置的调整时长以及竞技炮弹的装填和/或发射时长来综合判断的，其中，所述的估算时长、调整时长以及装填和/或发射时长是通过大量的实际时长计算训练学习出的时长值。
在得到所述第二状态信息后,所述控制模块3即可根据实际的应用需要,执行对所述目标物体的处理操作。例如,在竞技机器人中,可根据第二状态信息中的相对方向信息,调整云台等装置的方向角度,以指向瞄准所述第二状态信息中的位置,并发射竞技炮弹以打击所述目标物体。
具体可选地,所述第一状态信息包括:所述目标物体相对于本检测装置的第一位置信息、速度信息以及移动方向信息;
所述处理模块2,具体用于根据所述速度信息,计算预置的延迟时长值后所述目标物体的位移;根据所述第一位置信息、移动方向信息以及所述计算出的位移,估算所述目标物体移动后的第二位置信息;将所述第二位置信息确定为第二状态信息。
具体可选地,请参见图7,所述识别模块1可以包括:
获取单元11,用于获取被检测区域的图像;
识别单元12,用于分析识别所述图像中是否包含具有指定颜色特征的目标对象;
确定单元13,用于在所述识别单元的识别结果为包含时,则将识别出的目标对象确定为目标物体。
具体可选地，所述识别单元12，具体用于基于预置的颜色区间对所述获取的图像进行颜色检测，确定出所述图像中的具有指定颜色特征的初始对象区域；将获取的图像量化为二值图；对二值图中所述初始对象区域所对应的区域进行连通区域检测，确定出初始对象区域的外形轮廓；基于预置的合并规则对确定出的各个外形轮廓进行合并操作；根据预置的过滤形状或过滤尺寸信息，对合并后得到的各个区域进行过滤，将过滤后的区域作为目标对象。
具体可选地,所述识别单元12,在用于对二值图中所述初始对象区域所对应的区域进行连通区域检测时,具体用于滤除所述二值图中的噪声点;对滤除噪声点后的二值图中所述初始对象区域所对应的区域进行连通区域检测。
具体可选地,所述识别单元12,在用于基于预置的合并规则对确定出的各个外形轮廓进行合并操作时,具体用于根据确定出的各个外形轮廓的边缘位置坐标,计算相邻的两个外形轮廓之间的距离;确定相邻的两个外形轮廓之间的颜色相似度;将距离值、相似度满足预设的合并距离以及相似度要求的两个相邻外形轮廓合并。
具体可选地,所述识别单元12,在用于基于预置的合并规则对确定出的各个外形轮廓进行合并操作时,具体用于检测相邻两个连通区域之间的区域是否符合预置的遮挡物对象特征;若是,则所述相邻的两个连通区域满足预置的合并规则,将该相邻的两个外形轮廓合并。
具体可选地,请参见图7,所述识别模块1还可以包括:
状态计算单元14,用于根据监测到的图像计算所述目标物体相对于本检测装置的移动状态信息;
距离计算单元15,用于计算所述目标物体到本检测装置的距离值;
状态处理单元16,用于如果大于预设的距离阈值,则再次根据监测到的新的图像计算所述目标物体相对于本检测装置的移动状态信息,并再次判断所述目标物体到本检测装置的距离值是否小于预设的距离阈值,重复执行本步骤,直至确定出不大于预设的距离阈值时的移动状态信息;
状态确定单元17,用于如果不大于预设的距离阈值,则将所述移动状态信息确定为第一状态信息。
具体可选地，所述状态计算单元14或者所述状态处理单元16，在用于计算所述目标物体相对于本检测装置的移动状态信息时，具体用于从当前时刻获取的图像中确定出目标物体的像素坐标，进行坐标映射转换，得到所述目标物体在以所述检测装置为中心的坐标系中的初始位置信息；从预置时间间隔后的时刻获取的图像中确定出目标物体的像素坐标，进行坐标映射转换，得到所述目标物体在以所述检测装置为中心的坐标系中的移动位置信息；根据初始位置信息、移动位置信息以及预置时间间隔，确定出所述目标物体的速度信息以及方向信息；将确定出的速度信息、方向信息以及所述移动位置信息作为所述目标物体的移动状态信息。
具体可选地,所述检测装置还可以包括:
区分模块,用于基于每一个物体的移动状态信息对各个待监测的物体进行位置预估,根据得到的位置预估值和图像中各个待监测的物体的实际位置来关联区分每一个物体。
具体可选地,所述控制模块3,具体用于根据估算出的第二状态信息中的第二位置信息和相对于检测装置的方向,调整云台的转动参数,以使云台挂载的负载瞄准所述目标物体。
需要说明的是，本发明实施例的所述检测装置中各个模块及单元的具体实现可参考图1至图5对应实施例中相关步骤的描述。
本发明实施例能够实现对目标物体的识别以及移动位置等状态的预估,能够快捷、准确地完成对目标物体的跟踪以及状态预估,增加了新的功能,满足了用户关于物体追踪与移动状态预估的自动化、智能化需求。
进一步如图8所示,是本发明实施例的一种机器人的结构示意图,本发明实施例所述的机器人包括现有的机器结构,例如机器人外壳、动力系统、各种传感器、控制器等,在本发明实施例中,所述机器人还包括:图像采集装置100和处理器200,其中:
所述图像采集装置100,用于拍摄被检测区域的图像;
所述处理器200,用于根据所述图像采集装置100拍摄的图像识别被监测区域内的目标物体,并计算所述目标物体相对于本检测装置的第一状态信息;根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息;根据估算的第二状态信息执行对所述目标物体的处理操作。
所述机器人还包括云台装置,该云台装置具体可以为两轴云台或者多轴云台,能够通过调整角度、朝向的方式实现对所述目标物体的诸如竞技打击等处理操作。
进一步地,所述处理器200在具体实现时,可以调用对应的应用程序执行上述图1至图4对应实施例中各个步骤。
本发明实施例能够实现对目标物体的识别以及移动位置等状态的预估,能够快捷、准确地完成对目标物体的跟踪以及状态预估,增加了新的功能,满足了用户关于物体追踪与移动状态预估的自动化、智能化需求。
在本发明所提供的几个实施例中,应该理解到,所揭露的相关装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得计算机处理器(processor)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述仅为本发明的实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。

Claims (23)

  1. 一种对目标物体的检测方法,其特征在于,包括:
    检测装置识别被监测区域内的目标物体,并计算所述目标物体相对于本检测装置的第一状态信息;
    根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息;
    根据估算的第二状态信息执行对所述目标物体的处理操作。
  2. 如权利要求1所述的方法,其特征在于,所述第一状态信息包括:所述目标物体相对于本检测装置的第一位置信息、速度信息以及移动方向信息;
    所述根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息,包括:
    所述检测装置根据所述速度信息,计算预置的延迟时长值后所述目标物体的位移;
    根据所述第一位置信息、移动方向信息以及所述计算出的位移,估算所述目标物体移动后的第二位置信息;
    将所述第二位置信息确定为第二状态信息。
  3. 如权利要求1所述的方法,其特征在于,所述识别被监测区域内的目标物体,包括:
    所述检测装置获取被检测区域的图像;
    分析识别所述图像中是否包含具有指定颜色特征的目标对象;
    若包含,则将识别出的目标对象确定为目标物体。
  4. 如权利要求3所述的方法,其特征在于,所述分析识别所述图像中是否包含具有指定颜色特征的目标对象,包括:
    基于预置的颜色区间对所述获取的图像进行颜色检测，确定出所述图像中的具有指定颜色特征的初始对象区域；
    将获取的图像量化为二值图;
    对二值图中所述初始对象区域所对应的区域进行连通区域检测,确定出初始对象区域的外形轮廓;
    基于预置的合并规则对确定出的各个外形轮廓进行合并操作;
    根据预置的过滤形状或过滤尺寸信息,对合并后得到的各个区域进行过滤,将过滤后的区域作为目标对象。
  5. 如权利要求4所述的方法,其特征在于,所述对二值图中所述初始对象区域所对应的区域进行连通区域检测,包括:
    滤除所述二值图中的噪声点;
    对滤除噪声点后的二值图中所述初始对象区域所对应的区域进行连通区域检测。
  6. 如权利要求4所述的方法,其特征在于,所述基于预置的合并规则对确定出的各个外形轮廓进行合并操作,包括:
    根据确定出的各个外形轮廓的边缘位置坐标,计算相邻的两个外形轮廓之间的距离;
    确定相邻的两个外形轮廓之间的颜色相似度;
    将距离值、相似度满足预设的合并距离以及相似度要求的两个相邻外形轮廓合并。
  7. 如权利要求4所述的方法，其特征在于，所述基于预置的合并规则对确定出的各个外形轮廓进行合并操作，包括：
    检测相邻两个连通区域之间的区域是否符合预置的遮挡物对象特征;
    若是,则所述相邻的两个连通区域满足预置的合并规则,将该相邻的两个外形轮廓合并。
  8. 如权利要求3所述的方法,其特征在于,所述计算所述目标物体相对于本检测装置的第一状态信息,包括:
    根据监测到的图像计算所述目标物体相对于本检测装置的移动状态信息;
    计算所述目标物体到本检测装置的距离值;
    如果大于预设的距离阈值,则再次根据新的图像计算所述目标物体相对于本检测装置的移动状态信息,并再次判断所述目标物体到本检测装置的距离值是否小于预设的距离阈值,重复执行本步骤,直至确定出不大于预设的距离阈值时的移动状态信息;
    如果不大于预设的距离阈值,则将所述移动状态信息确定为第一状态信息。
  9. 如权利要求8所述的方法,其特征在于,计算所述目标物体相对于本检测装置的移动状态信息,包括:
    从当前时刻获取的图像中确定出目标物体的像素坐标,进行坐标映射转换,得到所述目标物体在以所述检测装置为中心的坐标系中的初始位置信息;
    从预置时间间隔后的时刻获取的图像中确定出目标物体的像素坐标,进行坐标映射转换,得到所述目标物体以所述检测装置为中心的坐标系中的移动位置信息;
    根据初始位置信息、移动位置信息以及预置时间间隔，确定出所述目标物体的速度信息以及方向信息；
    将确定出的速度信息、方向信息以及所述移动位置信息作为所述目标物体的移动状态信息。
  10. 如权利要求9所述的方法,其特征在于,还包括:
    所述检测装置基于每一个物体的移动状态信息对各个待监测的物体进行位置预估,根据得到的位置预估值和图像中各个待监测的物体的实际位置来关联确定每一个物体。
  11. 如权利要求2所述的方法,其特征在于,所述根据估算的第二状态信息执行对所述目标物体的处理操作,包括:
    根据估算出的第二状态信息中的第二位置信息和相对于检测装置的方向,调整云台的转动参数,以使云台挂载的负载瞄准所述目标物体。
  12. 一种检测装置,其特征在于,包括:
    识别模块,用于识别被监测区域内的目标物体,并计算所述目标物体相对于本检测装置的第一状态信息;
    处理模块,用于根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息;
    控制模块,用于根据估算的第二状态信息执行对所述目标物体的处理操作。
  13. 如权利要求12所述的装置,其特征在于,所述第一状态信息包括:所述目标物体相对于本检测装置的第一位置信息、速度信息以及移动方向信息;
    所述处理模块,具体用于根据所述速度信息,计算预置的延迟时长值后所述目标物体的位移;根据所述第一位置信息、移动方向信息以及所述计算出的位移,估算所述目标物体移动后的第二位置信息;将所述第二位置信息确定为第二状态信息。
  14. 如权利要求12所述的装置,其特征在于,所述识别模块包括:
    获取单元,用于获取被检测区域的图像;
    识别单元,用于分析识别所述图像中是否包含具有指定颜色特征的目标对象;
    确定单元,用于在所述识别单元的识别结果为包含时,则将识别出的目标对象确定为目标物体。
  15. 如权利要求14所述的装置,其特征在于,
    所述识别单元，具体用于基于预置的颜色区间对所述获取的图像进行颜色检测，确定出所述图像中的具有指定颜色特征的初始对象区域；将获取的图像量化为二值图；对二值图中所述初始对象区域所对应的区域进行连通区域检测，确定出初始对象区域的外形轮廓；基于预置的合并规则对确定出的各个外形轮廓进行合并操作；根据预置的过滤形状或过滤尺寸信息，对合并后得到的各个区域进行过滤，将过滤后的区域作为目标对象。
  16. 如权利要求15所述的装置,其特征在于,
    所述识别单元,在用于对二值图中所述初始对象区域所对应的区域进行连通区域检测时,具体用于滤除所述二值图中的噪声点;对滤除噪声点后的二值图中所述初始对象区域所对应的区域进行连通区域检测。
  17. 如权利要求15所述的装置,其特征在于,
    所述识别单元,在用于基于预置的合并规则对确定出的各个外形轮廓进行合并操作时,具体用于根据确定出的各个外形轮廓的边缘位置坐标,计算相邻的两个外形轮廓之间的距离;确定相邻的两个外形轮廓之间的颜色相似度;将距离值、相似度满足预设的合并距离以及相似度要求的两个相邻外形轮廓合并。
  18. 如权利要求15所述的装置,其特征在于,
    所述识别单元,在用于基于预置的合并规则对确定出的各个外形轮廓进行合并操作时,具体用于检测相邻两个连通区域之间的区域是否符合预置的遮挡物对象特征;若是,则所述相邻的两个连通区域满足预置的合并规则,将该相邻的两个外形轮廓合并。
  19. 如权利要求14所述的装置,其特征在于,所述识别模块还包括:
    状态计算单元,用于根据监测到的图像计算所述目标物体相对于本检测装置的移动状态信息;
    距离计算单元,用于计算所述目标物体到本检测装置的距离值;
    状态处理单元,用于如果大于预设的距离阈值,则再次根据监测到的新的图像计算所述目标物体相对于本检测装置的移动状态信息,并再次判断所述目标物体到本检测装置的距离值是否小于预设的距离阈值,重复执行本步骤,直至确定出不大于预设的距离阈值时的移动状态信息;
    状态确定单元,用于如果不大于预设的距离阈值,则将所述移动状态信息确定为第一状态信息。
  20. 如权利要求19所述的装置，其特征在于，所述状态计算单元或者所述状态处理单元，在用于计算所述目标物体相对于本检测装置的移动状态信息时，具体用于从当前时刻获取的图像中确定出目标物体的像素坐标，进行坐标映射转换，得到所述目标物体在以所述检测装置为中心的坐标系中的初始位置信息；从预置时间间隔后的时刻获取的图像中确定出目标物体的像素坐标，进行坐标映射转换，得到所述目标物体在以所述检测装置为中心的坐标系中的移动位置信息；根据初始位置信息、移动位置信息以及预置时间间隔，确定出所述目标物体的速度信息以及方向信息；将确定出的速度信息、方向信息以及所述移动位置信息作为所述目标物体的移动状态信息。
  21. 如权利要求20所述的装置,其特征在于,还包括:
    区分模块,用于基于每一个物体的移动状态信息对各个待监测的物体进行位置预估,根据得到的位置预估值和图像中各个待监测的物体的实际位置来关联区分每一个物体。
  22. 如权利要求13所述的装置,其特征在于,
    所述控制模块,具体用于根据估算出的第二状态信息中的第二位置信息和相对于检测装置的方向,调整云台的转动参数,以使云台挂载的负载瞄准所述目标物体。
  23. 一种机器人,其特征在于,包括:图像采集装置和处理器,其中:
    所述图像采集装置,用于拍摄被检测区域的图像;
    所述处理器,用于根据所述图像采集装置拍摄的图像识别被监测区域内的目标物体,并计算所述目标物体相对于本检测装置的第一状态信息;根据所述第一状态信息,估算所述目标物体在经历了预置的延迟时长值后的第二状态信息;根据估算的第二状态信息执行对所述目标物体的处理操作。
PCT/CN2014/090907 2014-11-12 2014-11-12 一种对目标物体的检测方法、检测装置以及机器人 WO2016074169A1 (zh)

Priority Applications (5)

Application Number Priority Date Filing Date Title
CN201480021249.9A CN105518702B (zh) 2014-11-12 2014-11-12 一种对目标物体的检测方法、检测装置以及机器人
PCT/CN2014/090907 WO2016074169A1 (zh) 2014-11-12 2014-11-12 一种对目标物体的检测方法、检测装置以及机器人
JP2016558217A JP6310093B2 (ja) 2014-11-12 2014-11-12 目標物体の検出方法、検出装置及びロボット
US15/593,559 US10551854B2 (en) 2014-11-12 2017-05-12 Method for detecting target object, detection apparatus and robot
US16/773,011 US11392146B2 (en) 2014-11-12 2020-01-27 Method for detecting target object, detection apparatus and robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2014/090907 WO2016074169A1 (zh) 2014-11-12 2014-11-12 一种对目标物体的检测方法、检测装置以及机器人

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/593,559 Continuation US10551854B2 (en) 2014-11-12 2017-05-12 Method for detecting target object, detection apparatus and robot

Publications (1)

Publication Number Publication Date
WO2016074169A1 true WO2016074169A1 (zh) 2016-05-19

Family

ID=55725021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/090907 WO2016074169A1 (zh) 2014-11-12 2014-11-12 一种对目标物体的检测方法、检测装置以及机器人

Country Status (4)

Country Link
US (2) US10551854B2 (zh)
JP (1) JP6310093B2 (zh)
CN (1) CN105518702B (zh)
WO (1) WO2016074169A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018027339A1 (en) * 2016-08-06 2018-02-15 SZ DJI Technology Co., Ltd. Copyright notice
CN113612967A (zh) * 2021-07-19 2021-11-05 深圳华跃云鹏科技有限公司 一种监控区域摄像头自组网系统
WO2023051774A1 (zh) * 2021-10-01 2023-04-06 南宁市安普康商贸有限公司 监测方法、系统、装置及计算机程序产品
WO2023123325A1 (zh) * 2021-12-31 2023-07-06 华为技术有限公司 一种状态估计方法和装置

Families Citing this family (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6310093B2 (ja) * 2014-11-12 2018-04-11 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd 目標物体の検出方法、検出装置及びロボット
CN106067021B (zh) * 2016-05-26 2019-05-24 北京新长征天高智机科技有限公司 一种人工辅助的生活垃圾目标识别系统
CN107491714B (zh) * 2016-06-13 2022-04-05 中科晶锐(苏州)科技有限公司 智能机器人及其目标物体识别方法和装置
JP6820066B2 (ja) * 2016-07-29 2021-01-27 Necソリューションイノベータ株式会社 移動体操縦システム、操縦シグナル送信システム、移動体操縦方法、プログラム、および記録媒体
JP6602743B2 (ja) * 2016-12-08 2019-11-06 株式会社ソニー・インタラクティブエンタテインメント 情報処理装置および情報処理方法
CN106897665B (zh) * 2017-01-17 2020-08-18 北京光年无限科技有限公司 应用于智能机器人的物体识别方法及系统
CN106934833B (zh) * 2017-02-06 2019-09-10 华中科技大学无锡研究院 一种散乱堆放物料拾取装置和方法
CN106970618A (zh) * 2017-04-06 2017-07-21 北京臻迪科技股份有限公司 一种无人船控制方法及系统
CN107054412B (zh) * 2017-04-30 2018-03-30 中南大学 一种铁路车辆转向架冰雪的无人机智能测量与预警方法及系统
CN107097812B (zh) * 2017-04-30 2018-03-02 中南大学 一种铁路强降雨量无人机实时智能测量方法及系统
CN106970581B (zh) * 2017-04-30 2018-03-30 中南大学 一种基于无人机群三维全视角的列车受电弓实时智能监测方法及系统
CN107097810B (zh) * 2017-04-30 2018-04-20 中南大学 一种铁路沿线异物侵限无人机智能辨识和预警方法及系统
CN107054411B (zh) * 2017-04-30 2018-03-02 中南大学 一种铁路沿线雪灾无人机雪深智能测量和预测方法与系统
WO2019084796A1 (zh) * 2017-10-31 2019-05-09 深圳市大疆创新科技有限公司 无人飞行器、无人飞行器底座及无人飞行器系统
CN108154521B (zh) * 2017-12-07 2021-05-04 中国航空工业集团公司洛阳电光设备研究所 一种基于目标块融合的运动目标检测方法
CN109932926A (zh) * 2017-12-19 2019-06-25 帝斯贝思数字信号处理和控制工程有限公司 低延时的用于图像处理系统的试验台
CN109992008A (zh) * 2017-12-29 2019-07-09 深圳市优必选科技有限公司 一种机器人的目标跟随方法及装置
CN108897777B (zh) * 2018-06-01 2022-06-17 深圳市商汤科技有限公司 目标对象追踪方法及装置、电子设备和存储介质
CN109492521B (zh) * 2018-09-13 2022-05-13 北京米文动力科技有限公司 一种人脸定位方法及机器人
JP6696094B2 (ja) * 2018-09-27 2020-05-20 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd 移動体、制御方法、及びプログラム
CN109633676A (zh) * 2018-11-22 2019-04-16 浙江中车电车有限公司 一种基于激光雷达侦测障碍物运动方向的方法及系统
CN109615858A (zh) * 2018-12-21 2019-04-12 深圳信路通智能技术有限公司 一种基于深度学习的智能停车行为判断方法
CN109664321A (zh) * 2018-12-27 2019-04-23 四川文理学院 机械臂、排爆小车及搜寻方法
CN109556484A (zh) * 2018-12-30 2019-04-02 深圳华侨城文化旅游科技股份有限公司 一种检测物体移动到位的方法及系统
CN109753945B (zh) * 2019-01-16 2021-07-13 高翔 目标主体识别方法、装置、存储介质和电子设备
CN109933096B (zh) * 2019-03-15 2021-11-30 国网智能科技股份有限公司 一种云台伺服控制方法及系统
DE112020001434T5 (de) * 2019-03-27 2021-12-23 Sony Group Corporation Datenverarbeitungsvorrichtung, datenverarbeitungsverfahren und programm
WO2020220284A1 (zh) * 2019-04-30 2020-11-05 深圳市大疆创新科技有限公司 一种瞄准控制方法、移动机器人及计算机可读存储介质
WO2020258187A1 (zh) * 2019-06-27 2020-12-30 深圳市大疆创新科技有限公司 一种状态检测方法、装置及可移动平台
US11055543B2 (en) * 2019-07-26 2021-07-06 Volkswagen Ag Road curvature generation in real-world images as a method of data augmentation
WO2021026804A1 (zh) * 2019-08-14 2021-02-18 深圳市大疆创新科技有限公司 基于云台的目标跟随方法、装置、云台和计算机存储介质
CN110640733B (zh) * 2019-10-10 2021-10-26 科大讯飞(苏州)科技有限公司 一种流程执行的控制方法及装置、饮品售卖系统
CN111061265A (zh) * 2019-12-06 2020-04-24 常州节卡智能装备有限公司 一种物体搬运方法、装置及系统
CN111263116B (zh) * 2020-02-17 2021-07-02 深圳龙安电力科技有限公司 一种基于视觉距离的智能监控系统
CN112200828B (zh) * 2020-09-03 2024-08-20 浙江大华技术股份有限公司 一种逃票行为的检测方法、装置及可读存储介质
CN112535434B (zh) * 2020-10-23 2022-01-11 湖南新视电子技术有限公司 一种无尘室智能扫地机器人
CN112643719A (zh) * 2020-12-11 2021-04-13 国网智能科技股份有限公司 一种基于巡检机器人的隧道安防检测方法及系统
CN113744305B (zh) * 2021-01-20 2023-12-05 北京京东乾石科技有限公司 目标物检测方法、装置、电子设备和计算机存储介质
CN112907624B (zh) * 2021-01-27 2022-07-15 湖北航天技术研究院总体设计所 一种基于多波段信息融合的目标定位、跟踪方法及系统
CN113510697B (zh) * 2021-04-23 2023-02-14 知守科技(杭州)有限公司 机械手定位方法、装置、系统、电子装置和存储介质
CN113505779B (zh) * 2021-07-30 2024-07-02 中国农业科学院都市农业研究所 采茶机器人采茶面超声波和视觉融合探测方法及装置
CN113808200B (zh) * 2021-08-03 2023-04-07 嘉洋智慧安全科技(北京)股份有限公司 一种检测目标对象移动速度的方法、装置及电子设备
CN114441499B (zh) * 2022-04-11 2022-07-12 天津美腾科技股份有限公司 品位检测方法及装置、识别设备、矿浆品位仪及存储介质
CN115157257B (zh) * 2022-07-22 2024-08-27 山东大学 基于uwb导航和视觉识别的智能植物管理机器人及系统
CN115401689B (zh) * 2022-08-01 2024-03-29 北京市商汤科技开发有限公司 基于单目相机的距离测量方法、装置以及计算机存储介质
CN115570574B (zh) * 2022-08-31 2024-04-30 华南理工大学 用于远程超声机器人的辅助遥控方法、系统、装置及介质
CN116600194B (zh) * 2023-05-05 2024-07-23 长沙妙趣新媒体技术有限公司 一种用于多镜头的切换控制方法及系统
CN116580828B (zh) * 2023-05-16 2024-04-02 深圳弗瑞奇科技有限公司 一种猫健康的全自动感应识别的可视化监测方法
CN116872217B (zh) * 2023-09-04 2023-11-17 深圳市普渡科技有限公司 机器人控制方法、装置、机器人和存储介质
CN116883398B (zh) * 2023-09-06 2023-11-28 湖南隆深氢能科技有限公司 基于电堆装配生产线的检测方法、系统、终端设备及介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5509847A (en) * 1990-01-09 1996-04-23 Kabushiki Kaisha Toshiba Control robot
EP0875341A1 (en) * 1997-04-28 1998-11-04 Seiko Seiki Kabushiki Kaisha Position and/or force controlling apparatus using sliding mode decoupling control
CN1994689A (zh) * 2005-12-28 2007-07-11 松下电器产业株式会社 机器人及机器人检测自动化方法
CN101195221A (zh) * 2006-12-07 2008-06-11 发那科株式会社 进行力控制的机器人控制装置
CN102566432A (zh) * 2012-01-17 2012-07-11 上海交通大学 基于Bang-bang控制策略的最优时间追踪捕获系统及其方法
CN103203755A (zh) * 2012-01-17 2013-07-17 精工爱普生株式会社 机器人控制装置、机器人系统以及机器人控制方法
CN103599631A (zh) * 2013-11-13 2014-02-26 中北大学 基于机器视觉的飞碟模拟训练系统及方法

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62197777A (ja) * 1986-02-25 1987-09-01 Nec Corp 衝突回避装置
JP2000101902A (ja) * 1998-09-18 2000-04-07 Toshiba Corp 監視カメラ制御装置
US6691947B2 (en) * 2002-03-12 2004-02-17 The Boeing Company Repetitive image targeting system
JP4875541B2 (ja) * 2006-08-28 2012-02-15 株式会社日本自動車部品総合研究所 方位検出方法、物体検出装置、プログラム
CN101939980B (zh) * 2008-02-06 2012-08-08 松下电器产业株式会社 电子摄像机和图像处理方法
JP5515671B2 (ja) * 2009-11-20 2014-06-11 ソニー株式会社 画像処理装置、その制御方法およびプログラム
US10187617B2 (en) * 2010-06-30 2019-01-22 Tata Consultancy Services Limited Automatic detection of moving object by using stereo vision technique
WO2012014430A1 (ja) * 2010-07-27 2012-02-02 パナソニック株式会社 移動体検出装置および移動体検出方法
CN102457712A (zh) * 2010-10-28 2012-05-16 鸿富锦精密工业(深圳)有限公司 可疑目标识别及追踪系统及方法
US9930298B2 (en) * 2011-04-19 2018-03-27 JoeBen Bevirt Tracking of dynamic object of interest and active stabilization of an autonomous airborne platform mounted camera
US8844896B2 (en) * 2011-06-07 2014-09-30 Flir Systems, Inc. Gimbal system with linear mount
KR101245057B1 (ko) * 2012-10-16 2013-03-18 (주)아이아이에스티 화재 감지 방법 및 장치
US10274287B2 (en) * 2013-05-09 2019-04-30 Shooting Simulator, Llc System and method for marksmanship training
US9340207B2 (en) * 2014-01-16 2016-05-17 Toyota Motor Engineering & Manufacturing North America, Inc. Lateral maneuver planner for automated driving system
WO2015149079A1 (en) * 2014-03-28 2015-10-01 Flir Systems, Inc. Gimbal system having preloaded isolation
US9922427B2 (en) * 2014-06-06 2018-03-20 Infineon Technologies Ag Time-of-flight camera with location sensor system
US9531928B2 (en) * 2014-07-08 2016-12-27 Flir Systems, Inc. Gimbal system with imbalance compensation
CN107168352B (zh) * 2014-07-30 2020-07-14 深圳市大疆创新科技有限公司 目标追踪系统及方法
JP6310093B2 (ja) * 2014-11-12 2018-04-11 エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd 目標物体の検出方法、検出装置及びロボット
JP2017072986A (ja) * 2015-10-07 2017-04-13 パナソニックIpマネジメント株式会社 自律飛行装置、自律飛行装置の制御方法及びプログラム
US10755419B2 (en) * 2017-01-30 2020-08-25 Nec Corporation Moving object detection apparatus, moving object detection method and program
WO2019087581A1 (ja) * 2017-11-06 2019-05-09 ソニー株式会社 情報処理装置と情報処理方法およびプログラム
JP2020076589A (ja) * 2018-11-06 2020-05-21 日本電産モビリティ株式会社 対象物検出装置

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5509847A (en) * 1990-01-09 1996-04-23 Kabushiki Kaisha Toshiba Control robot
EP0875341A1 (en) * 1997-04-28 1998-11-04 Seiko Seiki Kabushiki Kaisha Position and/or force controlling apparatus using sliding mode decoupling control
CN1994689A (zh) * 2005-12-28 2007-07-11 松下电器产业株式会社 机器人及机器人检测自动化方法
CN101195221A (zh) * 2006-12-07 2008-06-11 发那科株式会社 进行力控制的机器人控制装置
CN102566432A (zh) * 2012-01-17 2012-07-11 上海交通大学 基于Bang-bang控制策略的最优时间追踪捕获系统及其方法
CN103203755A (zh) * 2012-01-17 2013-07-17 精工爱普生株式会社 机器人控制装置、机器人系统以及机器人控制方法
CN103599631A (zh) * 2013-11-13 2014-02-26 中北大学 基于机器视觉的飞碟模拟训练系统及方法

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018027339A1 (en) * 2016-08-06 2018-02-15 SZ DJI Technology Co., Ltd. Copyright notice
US11148804B2 (en) 2016-08-06 2021-10-19 SZ DJI Technology Co., Ltd. System and method for tracking targets
US11906983B2 (en) 2016-08-06 2024-02-20 SZ DJI Technology Co., Ltd. System and method for tracking targets
CN113612967A (zh) * 2021-07-19 2021-11-05 深圳华跃云鹏科技有限公司 一种监控区域摄像头自组网系统
CN113612967B (zh) * 2021-07-19 2024-04-09 深圳华跃云鹏科技有限公司 一种监控区域摄像头自组网系统
WO2023051774A1 (zh) * 2021-10-01 2023-04-06 南宁市安普康商贸有限公司 监测方法、系统、装置及计算机程序产品
WO2023123325A1 (zh) * 2021-12-31 2023-07-06 华为技术有限公司 一种状态估计方法和装置

Also Published As

Publication number Publication date
US20170248971A1 (en) 2017-08-31
US10551854B2 (en) 2020-02-04
US20200159256A1 (en) 2020-05-21
CN105518702B (zh) 2018-06-26
CN105518702A (zh) 2016-04-20
JP2017512991A (ja) 2017-05-25
US11392146B2 (en) 2022-07-19
JP6310093B2 (ja) 2018-04-11

Similar Documents

Publication Publication Date Title
WO2016074169A1 (zh) 一种对目标物体的检测方法、检测装置以及机器人
WO2017008224A1 (zh) 一种移动物体的距离检测方法、装置及飞行器
WO2015194867A1 (ko) 다이렉트 트래킹을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
WO2018074903A1 (ko) 이동 로봇의 제어방법
WO2015194865A1 (ko) 검색 기반 상관 매칭을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
WO2015194866A1 (ko) 에지 기반 재조정을 이용하여 이동 로봇의 위치를 인식하기 위한 장치 및 그 방법
WO2019172725A1 (en) Method and apparatus for performing depth estimation of object
WO2015194864A1 (ko) 이동 로봇의 맵을 업데이트하기 위한 장치 및 그 방법
WO2011013862A1 (ko) 이동 로봇의 위치 인식 및 주행 제어 방법과 이를 이용한 이동 로봇
WO2015194868A1 (ko) 광각 카메라가 탑재된 이동 로봇의 주행을 제어하기 위한 장치 및 그 방법
WO2016200197A1 (ko) 사용자 기준 공간좌표계 상에서의 제스처 검출 방법 및 장치
WO2020230931A1 (ko) 다중 센서 및 인공지능에 기반하여 맵을 생성하고 노드들의 상관 관계를 설정하며 맵을 이용하여 주행하는 로봇 및 맵을 생성하는 방법
EP3763119A1 (en) Method for generating depth information and electronic device supporting the same
WO2018070844A1 (ko) 에지 모델링을 위한 에지 블러 설정 방법
WO2017188708A2 (ko) 이동 로봇, 복수의 이동 로봇 시스템 및 이동 로봇의 맵 학습방법
WO2022220414A1 (ko) 비동기 자연 표적 영상 계측데이터와 가속도 데이터의 융합에 기초한 구조물 변위 측정 방법 및 이를 위한 시스템
WO2020171605A1 (ko) 주행 정보 제공 방법, 차량맵 제공 서버 및 방법
WO2020080734A1 (ko) 얼굴 인식 방법 및 얼굴 인식 장치
WO2019245320A1 (ko) 이미지 센서와 복수의 지자기 센서를 융합하여 위치 보정하는 이동 로봇 장치 및 제어 방법
WO2018151504A1 (ko) 레이더를 이용하여 포인팅 위치를 인식하는 방법 및 장치
WO2015080498A1 (en) Method for detecting human body through depth information analysis and apparatus for analyzing depth information for user body detection
EP3562369A2 (en) Robot cleaner and method of controlling the same
WO2023224326A1 (ko) 깊이 정보를 획득하는 증강 현실 디바이스 및 그 동작 방법
WO2023063661A1 (ko) 학습 세트 생성 방법, 학습 세트 생성 장치, 및 학습 세트 생성 시스템
WO2021221333A1 (ko) 맵 정보과 영상 매칭을 통한 실시간 로봇 위치 예측 방법 및 로봇

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14905745

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2016558217

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 14905745

Country of ref document: EP

Kind code of ref document: A1