CN106982340B - Method and system for storing target video under machine vision tracking - Google Patents


Info

Publication number
CN106982340B
CN106982340B (application CN201611056445.7A)
Authority
CN
China
Prior art keywords
target object
video
delta
target
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201611056445.7A
Other languages
Chinese (zh)
Other versions
CN106982340A (en)
Inventor
李丽丽
王亚梅
徐利华
曹永军
余学文
余华明
王斯炎
李红霞
Current Assignee
Shunde Polytechnic
South China Robotics Innovation Research Institute
Original Assignee
Shunde Polytechnic
South China Robotics Innovation Research Institute
Priority date
Filing date
Publication date
Application filed by Shunde Polytechnic, South China Robotics Innovation Research Institute filed Critical Shunde Polytechnic
Priority to CN201611056445.7A priority Critical patent/CN106982340B/en
Publication of CN106982340A publication Critical patent/CN106982340A/en
Application granted granted Critical
Publication of CN106982340B publication Critical patent/CN106982340B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0007Image acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20224Image subtraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Manipulator (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method and a system for storing target videos under machine vision tracking. A robot control system comprises a motion control system, a vision system, an encoding controller and a video memory; the motion control system is connected with the vision system, the encoding controller and the video memory, at least two DELTA robots are connected with the motion control system, and the motion control system and the at least two DELTA robots are connected in series through an EtherCAT bus. By the embodiment of the invention, the next state value of the target object is followed in time and the video encoding process for the target object is completed, so that disorder among the tracked video objects is avoided, ordered storage of the videos is ensured, and the whole operation process can be completely reproduced in time-sequence order.

Description

Method and system for storing target video under machine vision tracking
Technical Field
The invention relates to the technical field of intelligent manufacturing, in particular to a method and a system for storing target videos under machine vision tracking.
Background
With the continuous development of robotics, more and more robots are beginning to perform various tasks in place of humans. "Robot" is a colloquial term for automatically controlled machines, covering all machines that simulate human behavior or thought as well as other creatures (e.g., robot dogs, robot cats). Narrow definitions of the robot are numerous and contested, and some computer programs are even called robots. In modern industry, robots are artificial machines that automatically perform tasks to replace or assist human work. The ideal highly humanoid robot is a product of advanced integrated control theory, mechatronics, computer science and artificial intelligence, materials science, and bionics, and the scientific community is actively researching and developing in this direction. However, remote control of robots is still imperfect, big-data applications have not yet been popularized, robot data acquisition is still largely off-line, and robot deep learning still starts from locally stored data.
The DELTA robot is a parallel robot that drives three parallelogram branch chains through outer revolute pairs and adds a central rotary drive shaft, realizing four-degree-of-freedom motion of the end effector in space. Configured with a vision system, it is widely applicable to sorting and packaging in fields such as electronics, light industry, food, and medicine.
At present, each manufacturer builds its DELTA robot control system around a dedicated DELTA robot controller, and control systems based on dedicated controllers have poor universality. A general DELTA robot control system is composed of a motion control system, a servo system, a visual tracking system and a conveyor-belt tracking system, as shown in fig. 1. Because several DELTA robots work cooperatively, a target object completes different operations at different times under different DELTA robots. Realizing video tracking across these robots, and storing or coordinating the resulting data, is a substantial amount of work, and video storage for a target under visual tracking is a problem that still needs to be overcome.
Disclosure of Invention
The invention provides a method and a system for storing a target video under machine vision tracking, which complete the video data acquisition and storage process by establishing visual tracking of a target object in a coordinated working mode, and solve the machine-vision coordination problem of existing systems.
The invention provides a method for storing a target video under machine vision tracking. The robot control system comprises a motion control system, a vision system, an encoding controller and a video memory; the motion control system is connected with the vision system, the encoding controller and the video memory, at least two DELTA robots are connected with the motion control system, and the motion control system and the at least two DELTA robots are connected in series through an EtherCAT bus. The method comprises the following steps:
in the process that the motion control system controls the operation of the DELTA robots, the vision system constructs a target tracking model under each DELTA robot and collects and processes target object images;
detecting and identifying the operating state of each DELTA robot by using an image processing method to obtain a target object in the operating process of each DELTA robot;
accurately positioning the distance between the DELTA robot and the target object by using a distance measuring method, and predicting the state of the target object;
the vision system tracks the next state value of the target object in time according to the prediction;
the encoding controller performs encoding processing on the videos of the target object according to the numbers of the different DELTA robots;
and storing the video of the target object after the encoding processing in a video memory according to the time of the target object operation.
The vision system constructs a target tracking model under each DELTA robot, and the acquisition and processing of the target object image comprise:
and establishing a target tracking model based on a tracking algorithm of the working background of the DELTA robot in combination with color and edges.
The method for detecting and identifying the operation state of each DELTA robot by using image processing comprises the following steps:
and detecting and identifying the target object in the operating state of each DELTA robot by using an inter-frame difference method.
The method for accurately positioning the distance between the DELTA robot and the target object by using the ranging method and predicting the state of the target object comprises the following steps:
calculating the distance between the DELTA robot and the target object by adopting a monocular vision distance measuring method;
and predicting the motion state of the target object state based on a Kalman filtering algorithm.
The vision system tracking the next state value of the target object in time according to the prediction comprises:
and tracking the target object in real time based on the cooperative control instruction between the at least two DELTA robots, and capturing state values under different DELTA robots based on a visual system.
The encoding controller performing encoding processing on the videos of the target object according to the numbers of the different DELTA robots comprises:
acquiring video image frames of a target object under different DELTA robots;
a corresponding DELTA robot identifier is written on each video image frame.
The storing the video of the target object after the encoding processing in the video memory according to the time of the target object operation comprises:
acquiring a video of a target object under each DELTA robot;
analyzing the video occurrence time of each DELTA robot;
generating a complete video file corresponding to the target object based on the video generation time;
and storing the complete video file in a video memory.
Correspondingly, the invention also provides a robot control system, which comprises: a motion control system, a vision system, an encoding controller and a video memory, wherein the motion control system is connected with the vision system, the encoding controller and the video memory, at least two DELTA robots are connected with the motion control system, and the motion control system and the at least two DELTA robots are connected in series through an EtherCAT bus, wherein:
the motion control system is used for controlling the target object to perform coordination operation under at least two DELTA robots;
the visual system is used for constructing a target tracking model under each DELTA robot and acquiring and processing a target object image; detecting and identifying the operating state of each DELTA robot by using an image processing method to obtain a target object in the operating process of each DELTA robot; accurately positioning the distance between the DELTA robot and the target object by using a ranging method, and predicting the state of the target object; the vision system tracks the next state value of the target object in time according to the prediction;
the encoding controller is used for performing encoding processing on the videos of the target object according to the numbers of the different DELTA robots;
and the video memory is used for storing the video of the target object after the encoding processing in the video memory according to the time of the target object operation.
The visual system establishes a target tracking model based on a tracking algorithm of a working background of the DELTA robot in combination with color and edges; detecting and identifying the target object in the operating state of each DELTA robot by using an inter-frame difference method; calculating the distance between the DELTA robot and the target object by adopting a monocular vision distance measuring method; predicting the motion state of the target object state based on a Kalman filtering algorithm; and tracking the target object in real time based on the cooperative control instruction between the at least two DELTA robots, and capturing state values under different DELTA robots based on a visual system.
The encoding controller acquires video image frames of a target object under different DELTA robots; writing a corresponding DELTA robot identifier on each video image frame;
the video memory acquires a video of a target object under each DELTA robot; analyzing the video occurrence time of each DELTA robot; generating a complete video file corresponding to the target object based on the video generation time; and storing the complete video file in a video memory.
In the invention, the video data acquisition and storage process is completed by establishing visual tracking of the target object in the coordinated working mode. Video tracking images of the target object working in coordination under a plurality of DELTA robots are obtained based on the vision system; with the aid of the ranging method, the state of the target object is predicted in time, so that its next state value can be followed in time and the video encoding process for the target object completed. This avoids disorder among the tracked video objects, ensures ordered storage of the videos, and allows the whole operation process to be completely reproduced in time-sequence order.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic configuration diagram of a robot control system in the prior art;
FIG. 2 is a schematic diagram of a robot control system according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for storing a target video under machine vision tracking in an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Specifically, fig. 2 shows a schematic structural diagram of a robot control system in an embodiment of the present invention. The robot control system includes a motion control system, a vision system, an encoding controller and a video memory; the motion control system is connected to the vision system, the encoding controller and the video memory, at least two DELTA robots are connected to the motion control system, and the motion control system and the at least two DELTA robots are connected in series through an EtherCAT bus. Three DELTA robots are shown in the embodiment of the present invention, where:
the motion control system is used for controlling the target object to perform coordination operation under at least two DELTA robots;
the vision system is used for constructing a target tracking model under each DELTA robot and collecting and processing a target object image; detecting and identifying the operating state of each DELTA robot by using an image processing method to obtain a target object in the operating process of each DELTA robot; accurately positioning the distance between the DELTA robot and the target object by using a ranging method, and predicting the state of the target object; the vision system tracks the next state value of the target object in time according to the prediction;
the encoding controller is used for performing encoding processing on the videos of the target object according to the numbers of the different DELTA robots;
and the video memory is used for storing the video of the target object after the encoding processing in the video memory according to the time of the target object operation.
In the specific implementation process, the visual system establishes a target tracking model based on a tracking algorithm of a working background of a DELTA robot in combination with color and edges; detecting and identifying the target object in the operating state of each DELTA robot by using an inter-frame difference method; calculating the distance between the DELTA robot and the target object by adopting a monocular vision distance measuring method; predicting the motion state of the target object state based on a Kalman filtering algorithm; and tracking the target object in real time based on the cooperative control instruction between the at least two DELTA robots, and capturing state values under different DELTA robots based on a visual system.
It should be noted that, in the embodiment of the present invention, the target tracking model is established by a tracking algorithm combining color and edges, and existing color edge detection algorithms may be adopted. These fully consider the edge characteristics of a color image and track using color differences, which overcomes the edge loss of conventional edge detection methods, extracts more color edge information, and gives satisfactory detection accuracy and effect, with practical value and good processing performance.
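The patent does not give a concrete color-edge algorithm; as an illustration only, one way to realize edge detection from color differences can be sketched in Python/NumPy (the function name and threshold below are assumptions, not part of the patent):

```python
import numpy as np

def color_edge_map(img, thresh=30.0):
    """Binary edge map computed from per-channel color differences.

    Working on each color channel separately preserves edges between
    regions of equal luminance but different color, which a grey-level
    edge detector would miss -- the "edge loss" the text refers to.
    """
    f = img.astype(np.float64)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    gx[:, :-1, :] = f[:, 1:, :] - f[:, :-1, :]    # horizontal color differences
    gy[:-1, :, :] = f[1:, :, :] - f[:-1, :, :]    # vertical color differences
    mag = np.sqrt(gx ** 2 + gy ** 2).max(axis=2)  # strongest channel response
    return (mag > thresh).astype(np.uint8)
```

A red/blue boundary of similar brightness, invisible to a grey-level detector, is still marked by this channel-wise formulation.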
It should be noted that the inter-frame difference method obtains the contour of a moving object by differencing two adjacent frames of a video image sequence, and works well even when there are several moving objects or the camera itself moves. When abnormal object motion occurs in the monitored scene, one frame differs obviously from the next; the two frames are subtracted and the absolute value of their brightness difference is compared with a threshold to analyze the motion characteristics of the video or image sequence and determine whether object motion is present. Frame-by-frame differencing of the image sequence is equivalent to high-pass filtering the sequence in the time domain.
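The steps just described, subtract adjacent frames, take the absolute value, and threshold, can be sketched directly (function names and the pixel-count criterion are illustrative assumptions):

```python
import numpy as np

def motion_mask(prev, curr, thresh=25):
    """Inter-frame difference: |curr - prev| thresholded to a binary mask.

    Subtracting adjacent frames acts as a temporal high-pass filter:
    static background cancels out and only moving objects remain.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    return (diff > thresh).astype(np.uint8)

def motion_detected(prev, curr, thresh=25, min_pixels=4):
    """True when enough pixels changed to count as object motion."""
    return int(motion_mask(prev, curr, thresh).sum()) >= min_pixels
```

The cast to `int16` avoids the wrap-around that unsigned subtraction would produce on 8-bit frames.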
It should be noted that the monocular vision ranging method uses the pinhole imaging principle to obtain the mapping between an imaging point and the target point and to establish a pinhole imaging model. The target image is then analyzed to obtain the area mapping between the target object and its image, and a linear ranging model for visual measurement is established; through image processing, feature points of the target image are extracted, the distance between the optical center and the target object is converted into the distance between the optical center and the feature points, and a feature-point-based monocular vision ranging principle is obtained.
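The core of the pinhole model is a similar-triangles relation; a minimal sketch, assuming a calibrated focal length in pixels and a known real-world target width (both assumptions for illustration, not values from the patent):

```python
def monocular_distance(focal_px, real_width, apparent_width_px):
    """Pinhole-model ranging by similar triangles.

    real_width / distance = apparent_width_px / focal_px
      =>  distance = focal_px * real_width / apparent_width_px
    The result is in the same unit as real_width.
    """
    if apparent_width_px <= 0:
        raise ValueError("target not visible in the image")
    return focal_px * real_width / apparent_width_px
```

A nearer target projects wider, so the computed distance shrinks as the apparent width grows.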
It should be noted that the Kalman filtering algorithm uses a linear system state equation and the system's input and output observation data to optimally estimate the system state, thereby realizing prediction of data values.
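As a hedged illustration of such a predictor (the patent does not specify the state model; a 1-D constant-velocity model with assumed noise parameters is used here), one predict/update cycle looks like:

```python
import numpy as np

def kalman_step(x, P, z, dt=1.0, q=1e-3, r=1.0):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x = [position, velocity] estimate, P = 2x2 covariance,
    z = measured position; q, r = process / measurement noise levels.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity transition
    H = np.array([[1.0, 0.0]])              # only position is observed
    Q = q * np.eye(2)
    R = np.array([[r]])
    # Predict: propagate the state and its uncertainty forward in time.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: blend prediction and measurement via the Kalman gain.
    y = np.array([z]) - H @ x               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Fed consistent position measurements, the filter's velocity estimate converges, which is exactly the "next state value" the vision system follows.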
In a specific implementation process, the encoding controller acquires video image frames of the target object under different DELTA robots; a corresponding DELTA robot identifier is written on each video image frame.
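This tagging step can be sketched as follows; the record layout (`TaggedFrame` with a timestamp and a byte payload) is an assumption for illustration, the patent only requires that each frame carry its robot's identifier:

```python
from dataclasses import dataclass

@dataclass
class TaggedFrame:
    robot_id: int       # identifier of the DELTA robot that produced the frame
    timestamp: float    # capture time, needed later for time-ordered storage
    payload: bytes      # stand-in for the actual image data

def tag_frames(robot_id, frames):
    """Write the producing robot's identifier onto every video frame.

    `frames` is an iterable of (timestamp, payload) pairs.
    """
    return [TaggedFrame(robot_id, t, p) for t, p in frames]
```

Keeping the timestamp alongside the identifier is what later allows the per-robot segments to be recombined in order.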
In a specific implementation process, the video memory acquires a video of a target object under each DELTA robot; analyzing the video occurrence time of each DELTA robot; generating a complete video file corresponding to the target object based on the video generation time; and storing the complete video file in a video memory.
Therefore, in the whole cooperative operation process, the video tracking of the target object can be realized, the automatic coding process can be realized, and the video file storage process can be realized.
Accordingly, fig. 3 shows a flowchart of a method for storing a target video under machine vision tracking in an embodiment of the present invention, where: the robot control system comprises a motion control system, a vision system, an encoding controller and a video memory, the motion control system is connected with the vision system, the encoding controller and the video memory, the motion control system is connected with at least two DELTA robots, and the motion control system and the at least two DELTA robots are connected in series through an EtherCAT bus, and the method comprises the following steps:
s301, in the process that the motion control system controls the operation of the DELTA robots, a target tracking model under each DELTA robot is constructed by a vision system, and target object images are collected and processed;
In the specific implementation process, the target tracking model is established by a tracking algorithm combining color and edges against the working background of the DELTA robot. In the embodiment of the invention, existing color edge detection algorithms may be adopted; these fully consider the edge characteristics of a color image and track using color differences, which overcomes the edge loss of conventional edge detection methods, extracts more color edge information, and gives satisfactory detection accuracy and effect, with practical value and good processing performance.
S302, detecting and identifying the operation state of each DELTA robot by using an image processing method to obtain a target object in the operation process of each DELTA robot;
In the specific implementation process, the target object in the operating state of each DELTA robot is detected and identified by the inter-frame difference method. The inter-frame difference method obtains the contour of a moving object by differencing two adjacent frames of a video image sequence, and works well even when there are several moving objects or the camera itself moves. When abnormal object motion occurs in the monitored scene, one frame differs obviously from the next; the two frames are subtracted and the absolute value of their brightness difference is compared with a threshold to analyze the motion characteristics of the video or image sequence and determine whether object motion is present. Frame-by-frame differencing of the image sequence is equivalent to high-pass filtering the sequence in the time domain.
S303, accurately positioning the distance between the DELTA robot and the target object by using a ranging method, and predicting the state of the target object;
in a specific implementation process, a monocular vision distance measuring method is adopted to calculate the distance between the DELTA robot and a target object; and predicting the motion state of the target object state based on a Kalman filtering algorithm.
The monocular vision ranging method uses the pinhole imaging principle to obtain the mapping between an imaging point and the target point and to establish a pinhole imaging model. The target image is then analyzed to obtain the area mapping between the target object and its image, and a linear ranging model for visual measurement is established; through image processing, feature points of the target image are extracted, the distance between the optical center and the target object is converted into the distance between the optical center and the feature points, and a feature-point-based monocular vision ranging principle is obtained.
S304, the vision system tracks the next state value of the target object in time according to the prediction;
in the specific implementation process, the target object is tracked in real time based on the cooperative control instruction between the at least two DELTA robots, and the state values of the different DELTA robots are captured based on the visual system.
S305, the encoding controller performs encoding processing on the videos of the target object according to the numbers of the different DELTA robots;
Because the different DELTA robots work cooperatively, the tracking video is generally captured cooperatively by multiple cameras. During this process, the video objects under each DELTA robot must be encoded so that they can be identified.
In a specific implementation process, the encoding controller acquires video image frames of the target object under different DELTA robots; a corresponding DELTA robot identifier is written on each video image frame.
S306, storing the video of the target object after the encoding processing in a video memory according to the time of the target object operation.
In the specific implementation process, a video of a target object under each DELTA robot is obtained; analyzing the video occurrence time of each DELTA robot; generating a complete video file corresponding to the target object based on the video generation time; and storing the complete video file in a video memory.
Under the work of different DELTA robots, the target object appears in several different videos. To form a time-sequenced workflow of the operated object, the videos must be re-identified and recombined, finally completing the video storage process.
In conclusion, the video data acquisition and storage process is completed by establishing visual tracking of the target object in the coordinated working mode. Video tracking images of the target object working in coordination under a plurality of DELTA robots are obtained based on the vision system; with the aid of the ranging method, the state of the target object is predicted in time, so that its next state value can be followed in time and the video encoding process for the target object completed. This avoids disorder among the tracked video objects, ensures ordered storage of the videos, and allows the whole operation process to be completely reproduced in time-sequence order.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by hardware under the instruction of a program, and the program may be stored in a computer-readable storage medium, which may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The method and system for storing target video under machine vision tracking provided by the embodiment of the invention are described in detail above, and specific examples are applied herein to explain the principle and the embodiment of the invention, and the description of the above embodiments is only used to help understanding the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method for storing target video under machine vision tracking, characterized in that: the robot control system comprises a motion control system, a vision system, an encoding controller and a video memory, the motion control system is connected with the vision system, the encoding controller and the video memory, at least two DELTA robots are connected with the motion control system, and the motion control system and the at least two DELTA robots are connected in series through an EtherCAT bus, and the method comprises the following steps:
in the process that the motion control system controls the operation of the DELTA robots, the vision system constructs a target tracking model under each DELTA robot and collects and processes target object images;
detecting and identifying the operating state of each DELTA robot by using an image processing method to obtain a target object in the operating process of each DELTA robot;
accurately positioning the distance between the DELTA robot and the target object by using a distance measuring method, and predicting the state of the target object;
the vision system tracks the next state value of the target object in time according to the prediction;
the encoding controller performs encoding processing on the videos of the target object according to the numbers of the different DELTA robots;
and storing the video of the target object after the encoding processing in a video memory according to the time of the target object operation.
2. The method for storing the target video under the machine vision tracking as claimed in claim 1, wherein the vision system constructs a target tracking model under each DELTA robot, and the acquiring and processing the target object image comprises:
and establishing a target tracking model based on a tracking algorithm of the working background of the DELTA robot in combination with color and edges.
3. The method for video storage of a target under machine vision tracking as claimed in claim 1, wherein said detecting and identifying the operation status of each DELTA robot using image processing method comprises:
and detecting and identifying the target object in the operating state of each DELTA robot by using an inter-frame difference method.
4. The method for video storage of a target under machine vision tracking as claimed in claim 1, wherein the accurately locating the distance between the DELTA robot and the target object by using the ranging method and predicting the state of the target object comprises:
calculating the distance between the DELTA robot and the target object by adopting a monocular vision distance measuring method;
and predicting the motion state of the target object state based on a Kalman filtering algorithm.
5. The method for storing a target video under machine vision tracking according to any one of claims 1 to 4, wherein the vision system tracking the next state value of the target object in real time according to the prediction comprises:
tracking the target object in real time based on cooperative control instructions between the at least two DELTA robots, and capturing the state values of the target object under the different DELTA robots by the vision system.
6. The method for storing a target video under machine vision tracking according to claim 5, wherein the encoding controller encoding the videos of the target object according to the numbers of the different DELTA robots comprises:
acquiring the video image frames of the target object under the different DELTA robots;
and writing the identifier of the corresponding DELTA robot on each video image frame.
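The encoding step of claim 6 amounts to tagging every captured frame with the identifier of the DELTA robot it came from. A minimal sketch, modeling frames as dicts (a real encoder could burn the ID into the image or carry it as container metadata):

```python
# Sketch of claim 6's encoding step: tag each frame with the identifier
# of the DELTA robot it was captured under (frames modeled as dicts).

def tag_frames(frames, robot_id):
    """Return copies of the frames with the robot identifier written in."""
    return [dict(frame, robot_id=robot_id) for frame in frames]

def encode_streams(streams):
    """streams: {robot_id: [frame, ...]} -> one list of tagged frames."""
    encoded = []
    for robot_id, frames in streams.items():
        encoded.extend(tag_frames(frames, robot_id))
    return encoded
```

After encoding, every frame carries its robot of origin, which is what makes the per-robot reassembly in claim 7 possible.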
7. The method for storing a target video under machine vision tracking according to claim 6, wherein storing the encoded video of the target object in the video memory according to the time of the target object operation comprises:
acquiring the video of the target object under each DELTA robot;
analyzing the video generation time under each DELTA robot;
generating a complete video file corresponding to the target object based on the video generation times;
and storing the complete video file in the video memory.
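The storage steps of claim 7 can be sketched as ordering the per-robot video segments by their generation time and concatenating them into one complete record for the target object. Segments are modeled here as `(start_time, robot_id, frames)` tuples; a real system would write an actual video container:

```python
# Sketch of claim 7: order per-robot segments by generation time and
# concatenate them into one complete video for the target object.

def merge_by_time(segments):
    """segments: [(start_time, robot_id, frames)] -> ordered frame list."""
    ordered = sorted(segments, key=lambda seg: seg[0])
    complete = []
    for _start, robot_id, frames in ordered:
        complete.extend((robot_id, f) for f in frames)
    return complete
```

Segments arrive in arbitrary order (each robot finishes at a different moment), so the sort by generation time is what reconstructs the object's full journey across the line.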
8. A robot control system, comprising: a motion control system connected with a vision system, an encoding controller and a video memory, and at least two DELTA robots connected with the motion control system, the motion control system and the at least two DELTA robots being connected in series through an EtherCAT bus, wherein:
the motion control system is used for controlling coordinated operation on the target object under the at least two DELTA robots;
the vision system is used for constructing a target tracking model under each DELTA robot, and acquiring and processing the target object image; detecting and identifying the operating state of each DELTA robot by using an image processing method to obtain the target object in the operating process of each DELTA robot; accurately measuring the distance between the DELTA robot and the target object by using a ranging method, and predicting the state of the target object; and tracking the next state value of the target object in real time according to the prediction;
the encoding controller is used for encoding the videos of the target object according to the serial numbers of the different DELTA robots;
and the video memory is used for storing the encoded videos of the target object according to the time of the target object operation.
9. The robot control system of claim 8, wherein the vision system establishes the target tracking model by using a tracking algorithm that combines color and edge features against the working background of the DELTA robot; detects and identifies the target object in the operating state of each DELTA robot by using an inter-frame difference method; calculates the distance between the DELTA robot and the target object by using a monocular vision ranging method; predicts the motion state of the target object based on a Kalman filtering algorithm; and tracks the target object in real time based on cooperative control instructions between the at least two DELTA robots, capturing the state values of the target object under the different DELTA robots.
10. The robot control system according to claim 8 or 9, wherein the encoding controller acquires the video image frames of the target object under the different DELTA robots, and writes the identifier of the corresponding DELTA robot on each video image frame; and the video memory acquires the video of the target object under each DELTA robot, analyzes the video generation time under each DELTA robot, generates a complete video file corresponding to the target object based on the video generation times, and stores the complete video file.
CN201611056445.7A 2016-11-26 2016-11-26 Method and system for storing target video under machine vision tracking Active CN106982340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611056445.7A CN106982340B (en) 2016-11-26 2016-11-26 Method and system for storing target video under machine vision tracking

Publications (2)

Publication Number Publication Date
CN106982340A CN106982340A (en) 2017-07-25
CN106982340B true CN106982340B (en) 2023-02-28

Family

ID=59340842

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611056445.7A Active CN106982340B (en) 2016-11-26 2016-11-26 Method and system for storing target video under machine vision tracking

Country Status (1)

Country Link
CN (1) CN106982340B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7024603B2 (en) * 2018-05-23 2022-02-24 トヨタ自動車株式会社 Data recording device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101872423A (en) * 2010-05-27 2010-10-27 天津大学 Method for tracking moving object on production line
CN103826105A (en) * 2014-03-14 2014-05-28 贵州大学 Video tracking system and realizing method based on machine vision technology
CN104589357A (en) * 2014-12-01 2015-05-06 佛山市万世德机器人技术有限公司 Control system and method of DELTA robots based on visual tracking

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of a Tracking Robot System Based on Binocular Vision; Wang Qianqian; Microcomputer Information; 2009-08-22 (Issue 22); full text *

Also Published As

Publication number Publication date
CN106982340A (en) 2017-07-25

Similar Documents

Publication Publication Date Title
CN107194559B (en) Workflow identification method based on three-dimensional convolutional neural network
US11694432B2 (en) System and method for augmenting a visual output from a robotic device
CN112288815B (en) Target die position measurement method, system, storage medium and device
Izagirre et al. Towards manufacturing robotics accuracy degradation assessment: A vision-based data-driven implementation
CN106527239A (en) Method and system of multi-robot cooperative operation mode
CN113396423A (en) Method of processing information from event-based sensors
Xiao et al. A novel visual guidance framework for robotic welding based on binocular cooperation
WO2018235219A1 (en) Self-location estimation method, self-location estimation device, and self-location estimation program
CN106982340B (en) Method and system for storing target video under machine vision tracking
CN106393144B (en) The method and system that vision tracks under a kind of multirobot operation mode
CN116665312B (en) Man-machine cooperation method based on multi-scale graph convolution neural network
Hosseini et al. Improving the successful robotic grasp detection using convolutional neural networks
CN117359636A (en) Python-based machine vision system of inspection robot
Kumar et al. Object segmentation using independent motion detection
Dewasakti et al. Introduction to modest object detection method of Barelang-FC soccer robot
Akhloufi Pan and tilt real-time target tracking
Haris et al. Depth estimation from monocular vision using image edge complexity
Korta et al. OpenCV based vision system for industrial robot-based assembly station: calibration and testing
Taleghani et al. Robust moving object detection from a moving video camera using neural network and kalman filter
Wang et al. HFR-video-based machinery surveillance for high-speed periodic operations
CN106791604B (en) Machine vision tracks the method and system of lower target object coding
Berscheid et al. Learning a generative transition model for uncertainty-aware robotic manipulation
Stengel et al. Efficient 3d voxel reconstruction of human shape within robotic work cells
Zhang et al. Dynamic Semantics SLAM Based on Improved Mask R-CNN
Fehr et al. Issues and solutions in surveillance camera placement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant