CN111975776A - Robot movement tracking system and method based on deep learning and Kalman filtering - Google Patents

Robot movement tracking system and method based on deep learning and Kalman filtering

Info

Publication number
CN111975776A
Authority
CN
China
Prior art keywords
tracking
robot
module
deep learning
kalman filtering
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010830399.1A
Other languages
Chinese (zh)
Inventor
袁进波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Unipower Technology Co ltd
Original Assignee
Guangzhou Unipower Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Unipower Technology Co ltd filed Critical Guangzhou Unipower Technology Co ltd
Priority to CN202010830399.1A
Publication of CN111975776A
Legal status: Pending

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 - Sensing devices
    • B25J19/021 - Optical sensing devices
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1602 - Programme controls characterised by the control system, structure, architecture
    • B25J9/161 - Hardware, e.g. neural networks, fuzzy logic, interfaces, processor
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661 - Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1656 - Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664 - Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/12 - Target-seeking control
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/277 - Analysis of motion involving stochastic approaches, e.g. using Kalman filters
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20076 - Probabilistic image processing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Artificial Intelligence (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention discloses a robot movement tracking system and method based on deep learning and Kalman filtering. The system comprises: a control client module for selecting and starting the corresponding sensing equipment; a robot sensing module for starting the corresponding sensing equipment and transmitting the information it acquires to the positioning and tracking module; a positioning and tracking module for loading a deep learning model according to the tracking scene and the tracked object, rapidly positioning and tracking the object in combination with a Kalman filtering algorithm, and synchronizing the processed data to the robot moving module and the control client module in real time; and a robot moving module for selecting a corresponding movement control method and tracking the object according to the real-time coordinates returned by the positioning and tracking module. The invention effectively improves the accuracy of identifying and tracking a target, and the robot movement tracking system and method based on deep learning and Kalman filtering can be widely applied in the field of robot tracking.

Description

Robot movement tracking system and method based on deep learning and Kalman filtering
Technical Field
The invention relates to the field of robot tracking, in particular to a robot movement tracking system and method based on deep learning and Kalman filtering.
Background
Currently, mobile robots are one of the research hotspots in the industry. Mobile robots first moved on command and later developed automatic tracking movement, providing services for specific personnel and specific service industries. The prior art has the following defects. First, universality is low: existing techniques apply only to tracking a moving human body or a specific object, so the range of application is small. Second, the recognition rate of the algorithms is low and their anti-interference capability is weak, making them difficult to apply in actual production environments. Third, accuracy is low: compared with deep learning, traditional image processing methods are less accurate at identifying and tracking objects and place higher demands on the environment. Fourth, the object recognition speed is low, offering no advantage in scenes with high real-time requirements, whereas objects in real scenes usually move quickly.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide a robot movement tracking system and method based on deep learning and Kalman filtering, which can effectively improve the accuracy of identifying the tracked target and thereby increase the tracking speed.
The first technical scheme adopted by the invention is as follows: a robot movement tracking system based on deep learning and Kalman filtering comprises the following modules:
the control client module is used for receiving a user instruction, confirming a tracking scene and a tracking object and selecting to start corresponding sensing equipment according to the tracking scene and the tracking object;
the robot sensing module is used for receiving a control signal from the control client module, starting corresponding sensing equipment and transmitting information acquired by the sensing equipment to the positioning and tracking module;
the positioning and tracking module is used for receiving the information of the robot sensing module, loading different deep learning models according to a tracking scene and a tracked object, rapidly positioning and tracking the object by matching with a Kalman filtering algorithm, and synchronizing the processed data to the robot moving module and the control client module in real time;
and the robot moving module is used for matching the type of the robot, selecting a corresponding movement control method, and tracking the object according to the real-time coordinates returned by the positioning and tracking module.
Furthermore, the control client module, the robot sensing module, the positioning and tracking module and the robot moving module are wirelessly connected in sequence, and the robot moving module and the positioning and tracking module are also wirelessly connected with the control client module.
Further, the control client module is constructed on Tornado, a Python web development framework.
Further, the robot sensing module comprises a laser radar sensing component, an infrared sensing component, an inertia measuring device and a binocular camera system.
Further, the types of robots include a wheel type mobile robot, a bipedal robot, and a quadruped robot.
The second technical scheme adopted by the invention is as follows: a robot movement tracking method based on deep learning and Kalman filtering comprises the following steps:
the robot sensing module starts sensing equipment according to an instruction of the control client and acquires a current input image;
extracting a feature vector of a current input image and obtaining tracking object information according to the feature vector;
the method comprises the steps of taking information of a tracked object as input data of a Kalman filtering algorithm to obtain the real-time position of the tracked object;
and feeding back the information of the tracked object and the real-time position of the tracked object to a robot movement control module and controlling the robot to move and track.
Further, when a plurality of similar tracking objects appear in the scene, their positions are judged: based on the deep learning method, the feature vector of each object is obtained through the vision system and the similarity between the feature vectors is calculated, so as to obtain the feature vector with the best similarity and its corresponding coordinate position.
Further, the step of extracting a feature vector of the current input image and obtaining the tracked object information according to the feature vector specifically includes:
processing the input image with a DarkNet-53 network structure to obtain the feature vector of the input data;
passing the feature vector through a YOLO network, applying 1×1 convolution and a loss function, and outputting the tracked object information.
Further, the tracked object information includes the tracked object category and the center coordinates, height, width and scale of the tracked object in the current field of view.
Further, the step of obtaining the real-time position of the tracked object by using the information of the tracked object as the input data of the kalman filter algorithm specifically includes:
obtaining coordinate data of the current view of the tracked object according to the information of the tracked object and inputting the coordinate data into a Kalman filtering algorithm to obtain a state vector of the tracked object;
and obtaining the real-time position of the tracked object according to the state vector and the observation equation of the tracked object.
The system and the method have the following beneficial effects: by jointly coordinating the control client module, the robot sensing module, the positioning and tracking module and the robot moving module, the problem of robot body tracking is solved; the accuracy of identifying and tracking targets is effectively improved and the tracking speed is increased, overcoming the defects of existing mobile robots, namely small application range, low tracking accuracy and low speed.
Drawings
FIG. 1 is a block diagram of a robot movement tracking system based on deep learning and Kalman filtering;
fig. 2 is a flow chart of steps of a robot movement tracking method based on deep learning and kalman filtering.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration; the order between the steps is not limited, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
The invention provides a robot movement tracking system and method based on deep learning and Kalman filtering. A robot operator selects and confirms the tracking scene and object through the control client module, starts the corresponding robot sensing module according to the tracking scene, and selects the object to be tracked from the current picture returned by the robot sensing module; the robot control client also provides preset universal tracking objects, such as pedestrians, vehicles and animals. After the tracking target is determined, the robot starts the tracking and positioning system, and the tracking distance can be set within the range of 1-10 m.
As shown in FIG. 1, the invention provides a robot movement tracking system based on deep learning and Kalman filtering, which comprises the following modules:
the control client module is used for receiving a user instruction, confirming a tracking scene and a tracking object and selecting to start corresponding sensing equipment according to the tracking scene and the tracking object;
specifically, the control client module adopts a web development framework tornado technology of python, is wirelessly connected to the robot perception module through the robot, and is used for controlling the opening of a binocular camera vision system of the robot, displaying the current vision scene of the robot and confirming the target to be tracked in the scene.
The robot sensing module is used for receiving a control signal from the control client module, starting corresponding sensing equipment and transmitting information acquired by the sensing equipment to the positioning and tracking module;
specifically, the robot sensing module comprises different sensing sensors, different sensors can be applied according to different scenes, and a user can independently load and start the sensors in the control client module and can also be matched for use. The method comprises the following steps: laser radar, IMU, binocular camera system, etc. The multiple sensing modules are matched for use to improve the positioning and tracking precision and speed, and the data acquired through the sensing system is sent to the positioning and tracking module.
The positioning and tracking module is used for receiving the information of the robot sensing module, loading different deep learning models according to a tracking scene and a tracked object, rapidly positioning and tracking the object by matching with a Kalman filtering algorithm, and synchronizing the processed data to the robot moving module and the control client module in real time;
specifically, the positioning and tracking module integrates different deep learning models such as MTCNN and Yolo v3, different deep learning models can be automatically loaded according to different tracking targets and scenes confirmed by the control client module, and the marked object can be quickly positioned and tracked with high precision by matching with a Kalman filtering algorithm. And synchronizing the processed data to the robot movement module and the robot control system interface in real time.
And the robot moving module is used for matching the type of the robot, selecting a corresponding moving control method and tracking the object according to the real-time coordinates returned by the tracking and positioning module.
Specifically, the robot moving module performs matching control according to the robot type, for example wheeled mobile robots, bipedal robots, quadruped robots, and the like. According to the real-time object coordinates returned by the positioning and tracking module, the robot moves so as to follow the tracked object's coordinates.
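A minimal sketch of such a movement control method, assuming a differential-drive wheeled robot and a simple proportional law (the patent does not disclose its control law), is:

```python
# Proportional follow-controller sketch (an assumption): convert the tracked
# object's position into velocity commands for a wheeled robot.
def follow_command(offset_x, distance, target_distance=2.0,
                   k_lin=0.5, k_ang=1.2):
    """offset_x: horizontal offset of the target from the image centre,
    normalised to [-1, 1]; distance: estimated range to the target in metres
    (e.g. from the binocular camera). Returns (linear, angular) velocities."""
    linear = k_lin * (distance - target_distance)  # close the range gap
    angular = -k_ang * offset_x                    # steer to re-centre the target
    # clamp to safe limits
    linear = max(-0.8, min(0.8, linear))
    angular = max(-1.5, min(1.5, angular))
    return linear, angular


# e.g. target 3.5 m away and right of centre -> drive forward while turning
print(follow_command(offset_x=0.3, distance=3.5))
```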
Further as a preferred embodiment of the system, the control client module, the robot sensing module, the positioning and tracking module and the robot moving module are sequentially and wirelessly connected, and the robot moving module and the positioning and tracking module are also wirelessly connected with the control client module.
Specifically, the user opens the client module and starts the robot; the client connects to the robot system wirelessly and the user interface displays the connection state. If no connection to the robot can be established, the user should confirm whether the robot is switched on. The user can select a tracking scene on the user interface as required, select a specific tracking target, and confirm starting the corresponding sensing equipment. The robot sensing module (e.g. binocular camera, laser radar, IMU) sends the detected data to the positioning and tracking module in real time; based on an algorithm combining deep learning and Kalman filtering, data such as the type, features and coordinates of the tracked target are calculated and fed back to the robot moving module and the real-time user interface.
Further as a preferred embodiment of the present system, the control client module is constructed on Tornado, a Python web development framework.
Further as a preferred embodiment of the system, the robot sensing module comprises a laser radar sensing component, an infrared sensing component, an inertial measurement device and a binocular camera system.
Further as a preferred embodiment of the present system, the types of robots include wheeled mobile robots, bipedal robots, and quadruped robots.
As shown in fig. 2, a robot movement tracking method based on deep learning and kalman filtering includes the following steps:
the robot sensing module starts sensing equipment according to an instruction of the control client and acquires a current input image;
specifically, the robot perception system acquires an in-plane input image through a camera, acquires the image at a speed of 8 frames/second, and the image preprocessing input size is 416 × 416.
Extracting a feature vector of a current input image and obtaining tracking object information according to the feature vector;
the method comprises the steps of taking information of a tracked object as input data of a Kalman filtering algorithm to obtain the real-time position of the tracked object;
and feeding back the information of the tracked object and the real-time position of the tracked object to a robot movement control module and controlling the robot to move and track.
Further, as a preferred embodiment of the method, when a plurality of similar tracking objects appear in the scene, their positions are judged: based on the deep learning method, the feature vector of each object is obtained through the vision system and the similarity between the feature vectors is calculated, so as to obtain the feature vector with the best similarity and its corresponding coordinate position.
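As a sketch of this disambiguation step, cosine similarity is one plausible measure; this is an assumption, since the patent only speaks of the "similarity of the feature vectors":

```python
# Pick, among several similar candidates, the one whose feature vector best
# matches the stored embedding of the tracked target.
import numpy as np


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))


def best_match(target_vec, candidates):
    """candidates: list of (feature_vector, (x, y)) pairs. Returns the
    coordinates and score of the candidate most similar to the target."""
    scores = [cosine_similarity(target_vec, vec) for vec, _ in candidates]
    best = int(np.argmax(scores))
    return candidates[best][1], scores[best]
```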
As a preferred embodiment of the method, the step of extracting a feature vector of the current input image and obtaining the tracked object information according to the feature vector specifically includes:
processing the input image with a DarkNet-53 network structure to obtain the feature vector of the input data;
passing the feature vector through a YOLO network, applying 1×1 convolution and a loss function, and outputting the tracked object information.
Further, the tracked object information includes the tracked object category and the center coordinates, height, width and scale of the tracked object in the current field of view.
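A hedged sketch of this detection step follows, using OpenCV's DNN module as a stand-in runtime for the DarkNet-53/YOLOv3 pipeline; the weight and config file paths are placeholders, and the patent does not specify its inference framework:

```python
# YOLOv3 detection via OpenCV DNN: returns class, confidence, and the box
# centre/size for each detection, matching the tracked-object fields above.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # placeholders


def detect(frame, conf_threshold=0.5):
    """Returns a list of (class_id, confidence, (cx, cy, w, h)) detections,
    with the box centre, width and height in pixels."""
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())
    h, w = frame.shape[:2]
    detections = []
    for out in outputs:
        for row in out:                 # row = [cx, cy, bw, bh, obj, classes...]
            scores = row[5:]
            cls = int(np.argmax(scores))
            conf = float(scores[cls] * row[4])
            if conf > conf_threshold:
                cx, cy = row[0] * w, row[1] * h
                bw, bh = row[2] * w, row[3] * h
                detections.append((cls, conf, (cx, cy, bw, bh)))
    return detections
```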
Further, as a preferred embodiment of the method, the step of obtaining the real-time position of the tracked object by using the information of the tracked object as the input data of the kalman filter algorithm specifically includes:
obtaining coordinate data of the current view of the tracked object according to the information of the tracked object and inputting the coordinate data into a Kalman filtering algorithm to obtain a state vector of the tracked object;
and obtaining the real-time position of the tracked object according to the state vector and the observation equation of the tracked object.
Specifically, the position vector of the obtained coordinate data and the noise covariance matrix of the observation vector are calculated in real time. The position coordinate data serves as the input of the Kalman filtering algorithm, and a predicted value is computed from the optimal estimate of the previous frame; the observed value and the predicted value are then combined by the Kalman filtering algorithm to obtain the state vector of the tracked object, which is substituted into the observation equation to obtain the position of the tracked object.
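The predict/update cycle just described can be sketched with a constant-velocity Kalman filter. The state vector (x, y, vx, vy) and the noise covariances below are assumptions; the patent does not disclose its exact model:

```python
# Constant-velocity Kalman filter for the tracked object's image-plane centre.
import numpy as np

dt = 1.0 / 8.0                         # frame period at 8 frames/second
F = np.array([[1, 0, dt, 0],           # state transition (constant velocity)
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],            # observation equation: we measure x, y
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-2                   # process noise covariance
R = np.eye(2) * 1e-1                   # observation noise covariance

x = np.zeros((4, 1))                   # state estimate
P = np.eye(4)                          # estimate covariance


def kalman_step(z):
    """z: observed (x, y) centre of the tracked object in this frame.
    Returns the filtered position estimate."""
    global x, P
    # predict from the previous frame's optimal estimate
    x = F @ x
    P = F @ P @ F.T + Q
    # update with the current observation
    y = np.reshape(z, (2, 1)) - H @ x          # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return float(x[0, 0]), float(x[1, 0])
```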
The contents of the system embodiments are all applicable to the method embodiments; the functions realized by the method embodiments are the same as those of the system embodiments, and the beneficial effects achieved are likewise the same.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A robot movement tracking system based on deep learning and Kalman filtering is characterized by comprising the following modules:
the control client module is used for receiving a user instruction, confirming a tracking scene and a tracking object and selecting to start corresponding sensing equipment according to the tracking scene and the tracking object;
the robot sensing module is used for receiving a control signal from the control client module, starting corresponding sensing equipment and transmitting information acquired by the sensing equipment to the positioning and tracking module;
the positioning and tracking module is used for receiving the information of the robot sensing module, loading different deep learning models according to a tracking scene and a tracked object, rapidly positioning and tracking the object by matching with a Kalman filtering algorithm, and synchronizing the processed data to the robot moving module and the control client module in real time;
and the robot moving module is used for matching the type of the robot, selecting a corresponding movement control method, and tracking the object according to the real-time coordinates returned by the positioning and tracking module.
2. The robot movement tracking system based on deep learning and Kalman filtering according to claim 1, characterized in that the control client module, the robot sensing module, the positioning and tracking module and the robot moving module are wirelessly connected in sequence, and the robot moving module and the positioning and tracking module are further wirelessly connected with the control client module.
3. The deep learning and Kalman filtering based robot movement tracking system of claim 2, characterized in that the control client module is constructed on Tornado, a Python web development framework.
4. The robot movement tracking system based on deep learning and Kalman filtering as claimed in claim 3, characterized in that the robot sensing module comprises a laser radar sensing component, an infrared sensing component, an inertial measurement unit and a binocular camera system.
5. The deep learning and Kalman filtering based robot movement tracking system of claim 4, characterized in that the types of robots comprise wheeled mobile robots, bipedal robots and quadruped robots.
6. A robot movement tracking method based on deep learning and Kalman filtering is characterized by comprising the following steps:
the robot sensing module starts sensing equipment according to an instruction of the control client and acquires a current input image;
extracting a feature vector of a current input image and obtaining tracking object information according to the feature vector;
the method comprises the steps of taking information of a tracked object as input data of a Kalman filtering algorithm to obtain the real-time position of the tracked object;
and feeding back the information of the tracked object and the real-time position of the tracked object to a robot movement control module and controlling the robot to move and track.
7. The robot movement tracking method based on deep learning and Kalman filtering according to claim 6, characterized in that, when a plurality of similar tracking objects appear in the scene, their positions are judged; based on the deep learning method, the feature vector of each object is obtained through the vision system and the similarity between the feature vectors is calculated, to obtain the feature vector with the best similarity and the corresponding coordinate position.
8. The robot movement tracking method based on deep learning and Kalman filtering according to claim 7, wherein the step of extracting the feature vector of the current input image and obtaining the tracked object information according to the feature vector further comprises:
processing the input image with a DarkNet-53 network structure to obtain the feature vector of the input data;
passing the feature vector through a YOLO network, applying 1×1 convolution and a loss function, and outputting the tracked object information.
9. The robot movement tracking method based on deep learning and Kalman filtering according to claim 8, wherein the tracked object information comprises the tracked object category and the center coordinates, height, width and scale of the tracked object in the current field of view.
10. The robot movement tracking method based on deep learning and Kalman filtering according to claim 9, wherein the step of obtaining the real-time position of the tracked object by using the tracked object information as the input data of the Kalman filtering algorithm specifically comprises:
obtaining coordinate data of the current view of the tracked object according to the information of the tracked object and inputting the coordinate data into a Kalman filtering algorithm to obtain a state vector of the tracked object;
and obtaining the real-time position of the tracked object according to the state vector and the observation equation of the tracked object.
CN202010830399.1A 2020-08-18 2020-08-18 Robot movement tracking system and method based on deep learning and Kalman filtering Pending CN111975776A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010830399.1A CN111975776A (en) 2020-08-18 2020-08-18 Robot movement tracking system and method based on deep learning and Kalman filtering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010830399.1A CN111975776A (en) 2020-08-18 2020-08-18 Robot movement tracking system and method based on deep learning and Kalman filtering

Publications (1)

Publication Number Publication Date
CN111975776A true CN111975776A (en) 2020-11-24

Family

ID=73435749

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010830399.1A Pending CN111975776A (en) 2020-08-18 2020-08-18 Robot movement tracking system and method based on deep learning and Kalman filtering

Country Status (1)

Country Link
CN (1) CN111975776A (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070086621A1 (en) * 2004-10-13 2007-04-19 Manoj Aggarwal Flexible layer tracking with weak online appearance model
DE102016008414A1 (en) * 2016-07-13 2018-01-18 Mbda Deutschland Gmbh Method and device for controlling the positional position of a platform provided with a target tracking device, pivotable about three spatial axes
CN107909600A (en) * 2017-11-04 2018-04-13 南京奇蛙智能科技有限公司 The unmanned plane real time kinematics target classification and detection method of a kind of view-based access control model
CN108154110A (en) * 2017-12-22 2018-06-12 任俊芬 A kind of intensive people flow amount statistical method based on the detection of the deep learning number of people
CN108536156A (en) * 2018-03-21 2018-09-14 深圳臻迪信息技术有限公司 Target Tracking System and method for tracking target
CN110991397A (en) * 2019-12-17 2020-04-10 深圳市捷顺科技实业股份有限公司 Traveling direction determining method and related equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
CAI Chengtao, SU Li, LIANG Yanhua: "Vision-Based Target Detection Technology for Marine Buoys" (《基于视觉的海洋浮标目标探测技术》), 30 November 2019 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112731973A (en) * 2020-12-02 2021-04-30 南京理工大学北方研究院 Robot tracking system
CN117097918A (en) * 2023-10-19 2023-11-21 奥视(天津)科技有限公司 Live broadcast display device and control method thereof
CN117097918B (en) * 2023-10-19 2024-01-09 奥视(天津)科技有限公司 Live broadcast display device and control method thereof

Similar Documents

Publication Publication Date Title
CN111496770B (en) Intelligent carrying mechanical arm system based on 3D vision and deep learning and use method
CN110948492B (en) Three-dimensional grabbing platform and grabbing method based on deep learning
CN111055281B (en) ROS-based autonomous mobile grabbing system and method
US20230042756A1 (en) Autonomous mobile grabbing method for mechanical arm based on visual-haptic fusion under complex illumination condition
KR20210020945A (en) Vehicle tracking in warehouse environments
CN112518748B (en) Automatic grabbing method and system for visual mechanical arm for moving object
CN103895042A (en) Industrial robot workpiece positioning grabbing method and system based on visual guidance
CN110560373B (en) Multi-robot cooperation sorting and transporting method and system
EP1477934A2 (en) Image processing apparatus
US20200316780A1 (en) Systems, devices, articles, and methods for calibration of rangefinders and robots
CN111624994A (en) Robot inspection method based on 5G communication
CN110744544B (en) Service robot vision grabbing method and service robot
CN111975776A (en) Robot movement tracking system and method based on deep learning and Kalman filtering
CN114089735A (en) Method and device for adjusting shelf pose of movable robot
CN115552348A (en) Moving object following method, robot, and computer-readable storage medium
Zhou et al. 3d pose estimation of robot arm with rgb images based on deep learning
JP2004338889A (en) Image recognition device
CN109079777A (en) A kind of mechanical arm hand eye coordination operating system
KR20180066668A (en) Apparatus and method constructing driving environment of unmanned vehicle
CN114187312A (en) Target object grabbing method, device, system, storage medium and equipment
CN116175582A (en) Intelligent mechanical arm control system and control method based on machine vision
Schnaubelt et al. Autonomous assistance for versatile grasping with rescue robots
Kebir et al. Smart robot navigation using rgb-d camera
Zhou et al. Visual servo control system of 2-DOF parallel robot
Hajjawi et al. Cooperative visual team working and target tracking of mobile robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201124