CN109493369B - Intelligent robot vision dynamic positioning and tracking method and system - Google Patents
- Publication number
- CN109493369B (application CN201811058413.XA)
- Authority
- CN
- China
- Prior art keywords
- target
- tracking
- positioning
- image
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06T5/70—
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention discloses an intelligent robot vision dynamic positioning and tracking method comprising the following steps: S10, feature information extraction; S20, video acquisition; S30, video processing; S40, object recognition; S50, fine identification; and S60, target positioning and tracking. The invention enables the classification and identification of various products and the sorting out of unqualified products in an intelligent production process, effectively improves production efficiency and reduces production cost, and is of substantial value for the intelligent, flexible production of industries such as food and pharmaceuticals.
Description
Technical Field
The invention relates to a positioning and tracking method, in particular to a visual dynamic positioning and tracking method and system for an intelligent robot.
Background
Computer vision is the science of making machines "see": cameras and computers take the place of human eyes to identify, track, and measure targets, and further image processing produces images better suited either to human observation or to transmission to instruments for detection. Existing vision-based positioning methods fall into two main categories: two-dimensional visual positioning and three-dimensional modeling positioning. Two-dimensional visual positioning calibrates the controlled object and its surroundings with a monocular camera and then performs precise operations using the calibrated coordinates. Three-dimensional modeling positioning uses two or more cameras to photograph the target and fuses the captured images to establish the target's three-dimensional coordinates, simulating the three-dimensional environment and enabling precise operation on the target.
Industrial robots are widely used in industrial production and can execute many instructions under an operator's guidance, but they cannot sense external information or adapt to a changing working environment, which seriously affects the quality and precision of manufactured objects. Introducing computer vision into industrial robot production therefore improves the robot's operating precision, provides real-time tracking and deviation correction, meets the real-time requirements placed on the robot during production, and lets the robot adapt better to complex field environments.
At present, many domestic enterprises have achieved intelligent production, but product classification and identification and the sorting out of unqualified products remain weak points, which limits the intelligent production of their products.
Disclosure of Invention
To overcome the defects and problems of the prior art, the invention provides an intelligent robot vision dynamic positioning and tracking method and system.
The invention is realized by the following technical scheme: an intelligent robot vision dynamic positioning and tracking method, comprising the following steps:
S10, a feature information extraction step: classifying the products, acquiring images of different products from multiple orientations and in multiple scenes, framing (cropping) and denoising the target object, extracting the feature information of the target product from it, and establishing a training set;
S20, a video acquisition step: using an image sensor to acquire production video streams and/or pictures of the products during the production process;
S30, a video processing step: preprocessing the production video stream and/or pictures to generate serialized images that are convenient to process, and at the same time labeling the training set;
S40, an object recognition step: processing the serialized images according to a pre-established product model and marking the products in the images;
S50, a fine identification step: classifying the marked products, finely identifying similar targets, and establishing the inter-frame relationship of the video by measuring the similarity between targets in the current image and the previous frame image, thereby realizing target tracking;
and S60, a target positioning and tracking step: positioning and tracking, within the video, the images processed by the fine identification step.
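The six steps form a sequential pipeline. A minimal Python skeleton of the flow is sketched below; the function names and the `Detection` record are illustrative stand-ins, not part of the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    box: tuple            # (x, y, w, h) bounding box in pixels
    label: str            # coarse product class from the object-recognition step (S40)
    feature: list = field(default_factory=list)  # appearance feature for fine identification (S50)

def run_pipeline(frames, recognize, refine, track):
    """S20-S60: consume serialized frames, recognize and refine products,
    then hand the detections to the tracker frame by frame."""
    tracks = []
    for frame in frames:
        detections = recognize(frame)           # S40: locate products with the product model
        detections = refine(frame, detections)  # S50: finely identify similar targets
        tracks = track(tracks, detections)      # S60: associate with previous-frame targets
    return tracks
```

The `recognize`, `refine`, and `track` callables are placeholders for whatever product model, fine-identification classifier, and tracker an implementation supplies.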
Preferably, in the target positioning and tracking step, the target position is predicted from the previous frame image and detected in the current image; a fused metric is then adopted, and the Hungarian algorithm performs cascade matching using the Mahalanobis distance between the predicted and detected target positions together with the cosine distance between appearance features of the bounding regions, finally realizing positioning and tracking.
Preferably, in the target positioning and tracking step, the target position is predicted with Kalman filtering and detected with a target detection algorithm.
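A constant-velocity Kalman filter over image coordinates is a common minimal realization of this prediction step. The sketch below assumes a four-dimensional state (x, y, vx, vy); the noise covariances are illustrative values, not taken from the patent:

```python
import numpy as np

dt = 1.0  # one frame between predictions
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # the detector observes position only
Q = np.eye(4) * 1e-2                         # process-noise covariance (assumed)
R = np.eye(2) * 1e-1                         # measurement-noise covariance (assumed)

def predict(x, P):
    """Predict the next-frame state and covariance from the previous frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a detected position z = (x, y)."""
    y = z - H @ x                    # innovation
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

A target moving with velocity (1, 0) is predicted one pixel to the right each frame, and a detection pulls the estimate toward the measured position in proportion to the gain.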
Following the inventive concept of the above method, the invention also provides an intelligent robot vision dynamic positioning and tracking system, comprising:
a feature information extraction module, used for classifying the products, acquiring images of different products from multiple orientations and in multiple scenes, framing (cropping) and denoising the target object, extracting the feature information of the target product from it, and establishing a training set;
a video acquisition module, used for acquiring, with an image sensor, production video streams and/or pictures of the products during the production process;
a video processing module, used for preprocessing the production video stream and/or pictures to generate serialized images that are convenient to process, and also for labeling the training set;
an object recognition module, used for processing the serialized images according to a pre-established product model and marking the products in the images;
a fine identification module, used for classifying the marked products, finely identifying similar targets, and establishing the inter-frame relationship of the video by measuring the similarity between targets in the current image and the previous frame image, thereby realizing target tracking;
and a target positioning and tracking module, used for positioning and tracking, within the video, the images processed by the fine identification module.
Preferably, in the target positioning and tracking module, the target position is predicted from the previous frame image and detected in the current image; a fused metric is then adopted, and the Hungarian algorithm performs cascade matching using the Mahalanobis distance between the predicted and detected target positions together with the cosine distance between appearance features of the bounding regions, finally realizing positioning and tracking.
Preferably, in the target positioning and tracking module, the target position is predicted with Kalman filtering and detected with a target detection algorithm.
The invention enables the classification and identification of various products and the sorting out of unqualified products in an intelligent production process, effectively improves production efficiency and reduces production cost, and is of substantial value for the intelligent, flexible production of industries such as food and pharmaceuticals.
Drawings
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention.
Fig. 2 is a schematic block diagram of a system according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of the process by which the target positioning and tracking module implements target positioning and tracking, according to an embodiment of the present invention.
Detailed Description
To facilitate understanding by those skilled in the art, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1, an intelligent robot vision dynamic positioning and tracking method includes the following steps:
S10, a feature information extraction step: classifying the products (manual classification may be used), acquiring images of different products from multiple orientations and in multiple scenes, framing (cropping) and denoising the target object, extracting the feature information of the target product from it, and establishing a training set; in this embodiment, the feature information includes the shape information and appearance contour information of the target product;
S20, a video acquisition step: using an image sensor to acquire production video streams and/or pictures of the products during the production process;
S30, a video processing step: preprocessing the production video stream and/or pictures to generate serialized images that are convenient to process, and at the same time labeling the training set;
S40, an object recognition step: processing the serialized images according to a pre-established product model and marking the products in the images;
S50, a fine identification step: classifying the marked products, finely identifying similar targets, and establishing the inter-frame relationship of the video by measuring the similarity between targets in the current image and the previous frame image, thereby realizing target tracking;
and S60, a target positioning and tracking step: positioning and tracking, within the video, the images processed by the fine identification step.
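The shape and appearance-contour features of S10 are left open by the patent. As an illustration only, a few elementary shape descriptors can be computed from a binary product mask; this descriptor set is an assumption, not the patent's:

```python
import numpy as np

def shape_features(mask):
    """Elementary shape descriptors for a binary object mask:
    pixel area, bounding-box aspect ratio, and extent (fill ratio)."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:                       # empty mask: no object framed
        return {"area": 0, "aspect": 0.0, "extent": 0.0}
    h = ys.max() - ys.min() + 1            # bounding-box height
    w = xs.max() - xs.min() + 1            # bounding-box width
    area = int(ys.size)                    # number of object pixels
    return {"area": area,
            "aspect": w / h,               # width-to-height ratio
            "extent": area / (w * h)}      # how much of the box the object fills
```

Descriptors like these could feed the training set of S10, alongside contour-based features from a full vision library.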
In the target positioning and tracking step, the target position is predicted from the previous frame image and detected in the current frame image; a fused metric is then adopted, and the Hungarian algorithm performs cascade matching using the Mahalanobis distance between the predicted and detected target positions together with the cosine distance between appearance features of the bounding regions, finally realizing positioning and tracking. In this embodiment, the target position is predicted with Kalman filtering and detected with a target detection algorithm.
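This fusion of a motion term (Mahalanobis distance) and an appearance term (cosine distance) resembles a DeepSORT-style tracker cost. The sketch below is an illustrative simplification: the weight `lam` is an assumed parameter, and a brute-force minimum-cost assignment stands in for the Hungarian algorithm (viable only for a handful of targets):

```python
import itertools
import numpy as np

def fused_cost(pred_pos, pred_cov, det_pos, track_feat, det_feat, lam=0.5):
    """Combine squared Mahalanobis distance (motion) with cosine distance
    (appearance) into one matching cost. lam is an assumed weighting."""
    d = det_pos - pred_pos
    maha = float(d @ np.linalg.inv(pred_cov) @ d)
    cos = 1.0 - float(track_feat @ det_feat /
                      (np.linalg.norm(track_feat) * np.linalg.norm(det_feat)))
    return lam * maha + (1 - lam) * cos

def assign(cost):
    """Minimum-cost one-to-one assignment by exhaustive search over
    permutations; a stand-in for the Hungarian algorithm on a square
    cost matrix with few targets."""
    n = cost.shape[0]
    best, best_perm = float("inf"), None
    for perm in itertools.permutations(range(n)):
        total = sum(cost[i, j] for i, j in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return list(enumerate(best_perm))   # (track index, detection index) pairs
```

A production implementation would replace `assign` with a polynomial-time Hungarian solver and add the gating thresholds used in cascade matching.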
Following the inventive concept of the above method, an embodiment of the present invention further provides an intelligent robot vision dynamic positioning and tracking system, as shown in fig. 2, which comprises:
a feature information extraction module, used for classifying the products, acquiring images of different products from multiple orientations and in multiple scenes, framing (cropping) and denoising the target object, extracting the feature information of the target product from it, and establishing a training set; the feature information includes the shape information and appearance contour information of the target product;
a video acquisition module, used for acquiring, with an image sensor, production video streams and/or pictures of the products during the production process;
a video processing module, used for preprocessing the production video stream and/or pictures to generate serialized images that are convenient to process, and also for labeling the training set;
an object recognition module, used for processing the serialized images according to a pre-established product model and marking the products in the images;
a fine identification module, used for classifying the marked products, finely identifying similar targets, and establishing the inter-frame relationship of the video by measuring the similarity between targets in the current image and the previous frame image, thereby realizing target tracking;
and a target positioning and tracking module, used for positioning and tracking, within the video, the images processed by the fine identification module.
In one preferred embodiment, the target positioning and tracking module predicts the target position from the previous frame image and detects the target position in the current image; a fused metric is then adopted, and the Hungarian algorithm performs cascade matching using the Mahalanobis distance between the predicted and detected target positions together with the cosine distance between appearance features of the bounding regions, finally realizing positioning and tracking. In this embodiment, the target positioning and tracking module predicts the target position with Kalman filtering and detects it with a target detection algorithm. The brief process by which the module realizes target positioning and tracking in this embodiment is shown in fig. 3.
In a preferred embodiment, the video acquisition module comprises a CCD camera with video capture capability or an image sensor with similar functionality, and the video acquisition module and the video processing module have a data transmission interface through which they exchange data. The video acquisition module can, in a wired or wireless manner, dynamically film or photograph the various objects on the intelligent production line through a CCD camera on the intelligent robot's peripheral equipment, and transmits the material to the video processing module over the TCP/IP protocol for storage and processing. The video processing module preprocesses the real-time video, intercepts the effective video or image information after initialization, decodes it into clear and effective serialized images, and at the same time labels the training set.
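The preprocessing that turns the raw stream into "clear and effective serialized images" is not pinned down by the patent. The sketch below assumes a simple luminance conversion followed by a 3x3 mean filter as the denoising step; both choices are illustrative:

```python
import numpy as np

def preprocess(frame):
    """Turn one raw RGB frame (H x W x 3 array) into a denoised grayscale
    image: ITU-R BT.601 luminance, then a 3x3 mean filter."""
    gray = frame[..., :3] @ np.array([0.299, 0.587, 0.114])
    padded = np.pad(gray, 1, mode="edge")          # replicate borders
    out = np.zeros_like(gray)
    for dy in (-1, 0, 1):                          # accumulate the 3x3 window
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + gray.shape[0],
                          1 + dx : 1 + dx + gray.shape[1]]
    return out / 9.0

def serialize(frames):
    """S30: map a decoded video stream to the serialized images the
    object-recognition step consumes."""
    return [preprocess(f) for f in frames]
```

In practice the decoding and denoising would come from a vision library; this version only shows the shape of the module's contract with the recognizer.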
With the technical scheme provided by the invention, the classification and identification of various products and the sorting out of unqualified products can be realized in an intelligent production process, production efficiency can be effectively improved and production cost reduced, and the scheme is of substantial value for the intelligent, flexible production of industries such as food and pharmaceuticals.
The above embodiments are preferred implementations of the invention and are not intended to limit it; any obvious alternative that does not depart from the inventive concept falls within the scope of the invention.
Claims (8)
1. An intelligent robot vision dynamic positioning and tracking method is characterized by comprising the following steps:
S10, a characteristic information extraction step: classifying the products, acquiring images of different products from multiple orientations and in multiple scenes, framing and denoising the target object, extracting the characteristic information of the target product from it, and establishing a training set;
S20, a video acquisition step: using an image sensor to acquire production video streams and/or pictures of the products during the production process;
S30, a video processing step: preprocessing the production video stream and/or pictures to generate serialized images that are convenient to process, and at the same time labeling the training set;
S40, an object recognition step: processing the serialized images according to a pre-established product model and marking the products in the images;
S50, a fine identification step: classifying the marked products, finely identifying similar targets, and establishing the inter-frame relationship of the video by measuring the similarity between targets in the current image and the previous frame image, thereby realizing target tracking;
and S60, a target positioning and tracking step: positioning and tracking, within the video, the images processed by the fine identification step.
2. The method of claim 1, wherein: in the target positioning and tracking step, the target position is predicted from the previous frame image and detected in the current image; a fused metric is then adopted, and the Hungarian algorithm performs cascade matching using the Mahalanobis distance between the predicted and detected target positions together with the cosine distance between appearance features of the bounding regions, finally realizing positioning and tracking.
3. The method of claim 2, wherein: in the target positioning and tracking step, the target position is predicted with Kalman filtering and detected with a target detection algorithm.
4. The method according to any one of claims 1 to 3, wherein: the characteristic information includes shape information and appearance profile information of the target product.
5. An intelligent robot vision dynamic positioning and tracking system is characterized in that the system comprises:
a characteristic information extraction module, used for classifying the products, acquiring images of different products from multiple orientations and in multiple scenes, framing and denoising the target object, extracting the characteristic information of the target product from it, and establishing a training set;
a video acquisition module, used for acquiring, with an image sensor, production video streams and/or pictures of the products during the production process;
a video processing module, used for preprocessing the production video stream and/or pictures to generate serialized images that are convenient to process, and also for labeling the training set;
an object recognition module, used for processing the serialized images according to a pre-established product model and marking the products in the images;
a fine identification module, used for classifying the marked products, finely identifying similar targets, and establishing the inter-frame relationship of the video by measuring the similarity between targets in the current image and the previous frame image, thereby realizing target tracking;
and a target positioning and tracking module, used for positioning and tracking, within the video, the images processed by the fine identification module.
6. The system of claim 5, wherein: in the target positioning and tracking module, the target position is predicted from the previous frame image and detected in the current image; a fused metric is then adopted, and the Hungarian algorithm performs cascade matching using the Mahalanobis distance between the predicted and detected target positions together with the cosine distance between appearance features of the bounding regions, finally realizing positioning and tracking.
7. The system of claim 6, wherein: in the target positioning and tracking module, the target position is predicted with Kalman filtering and detected with a target detection algorithm.
8. The system according to any one of claims 5 to 7, wherein: the characteristic information includes shape information and appearance profile information of the target product.
Priority Applications (1)
- CN201811058413.XA | priority date 2018-09-11 | filing date 2018-09-11 | Intelligent robot vision dynamic positioning and tracking method and system (granted as CN109493369B)
Publications (2)
- CN109493369A | published 2019-03-19
- CN109493369B | granted 2020-12-29
Family
ID=65689594
Family Applications (1)
- CN201811058413.XA (Active) | priority date 2018-09-11 | filing date 2018-09-11 | Intelligent robot vision dynamic positioning and tracking method and system
Country Status (1)
- CN | CN109493369B
Families Citing this family (1)
- CN110154036B | priority 2019-06-24 | published 2020-10-13 | Shandong University | Design method and system of an indoor service robot controller under a visual dynamic system
Citations (2)
- CN108171748A | priority 2018-01-23 | published 2018-06-15 | HIT Robot (Hefei) International Innovation Research Institute | Visual recognition and localization method for intelligent grasping by an object-handling manipulator
- CN108363997A | priority 2018-03-20 | published 2018-08-03 | Nanjing Yunsi Chuangzhi Information Technology Co., Ltd. | Real-time tracking method for a specific person in video
Family Cites Families (5)
- US7564455B2 | priority 2002-09-26 | granted 2009-07-21 | The United States of America as represented by the Secretary of the Navy | Global visualization process for personal computer platforms (GVP+)
- CN100568262C | priority 2007-12-29 | granted 2009-12-09 | Zhejiang University of Technology | Human face recognition and detection device based on multi-camera information fusion
- CN101576956B | priority 2009-05-11 | granted 2011-08-31 | Tianjin Puda Software Technology Co., Ltd. | On-line character detection method and system based on machine vision
- CN107169519B | priority 2017-05-18 | granted 2018-05-01 | Chongqing Zhuolai Technology Co., Ltd. | Industrial robot vision system and teaching method thereof
- CN107516127B | priority 2017-08-21 | granted 2020-06-30 | Shandong University | Method and system for a service robot to autonomously acquire attribution semantics of articles worn or carried by a person
- 2018-09-11 | application CN201811058413.XA filed; granted as patent CN109493369B (Active)
Similar Documents
- CN109255813B | Real-time detection method for the pose of a hand-held object, oriented to human-machine cooperation
- CN109308693B | Single-binocular vision system for target detection and pose measurement built from one PTZ camera
- CN111791239B | Method for accurate grasping combined with three-dimensional visual recognition
- CN109887040B | Moving target active sensing method and system for video surveillance
- JP5612916B2 | Position/orientation measuring apparatus, processing method, program, and robot system
- CN110555889A | Depth-camera hand-eye calibration method based on CALTag and point cloud information
- CN107992881A | Robotic dynamic grasping method and system
- CN111476841B | Recognition and positioning method and system based on point clouds and images
- JP2011198349A | Method and apparatus for processing information
- CN112419429B | Calibration method for surface defect detection of large workpieces based on multiple viewing angles
- Momeni-K et al. | Height estimation from a single camera view
- Hsu et al. | Development of a faster classification system for metal parts using machine vision under different lighting environments
- CN109035214A | Industrial robot material shape recognition method
- CN108582075A | Intelligent robot vision automated grasping system
- CN114029946A | Method, device, and equipment for guiding a robot to position and grasp based on a 3D grating
- CN113822810A | Method for positioning a workpiece in three-dimensional space based on machine vision
- CN109493369B | Intelligent robot vision dynamic positioning and tracking method and system
- Ali et al. | Camera-based precision measurement for improving measurement accuracy
- Hadi et al. | Fusion of thermal and depth images for occlusion handling for human detection from mobile robot
- Luo et al. | Vision-based 3-D object pick-and-place tasks of industrial manipulator
- Kheng et al. | Stereo vision with 3D coordinates for robot arm application guide
- CN107020545A | Apparatus and method for recognizing the pose of mechanical workpieces
- Peng et al. | Real time and robust 6D pose estimation of RGBD data for robotic bin picking
- CN206912816U | Device for recognizing the pose of mechanical workpieces
- CN113822946A | Robotic arm grasping method based on computer vision
Legal Events
- PB01 | Publication
- SE01 | Entry into force of request for substantive examination
- GR01 | Patent grant
- TR01 | Transfer of patent right | Effective date of registration: 2021-05-13
  - Patentee after: Tianjin Tieshe Intelligent Technology Co., Ltd., Room 215-6, No. 10 Siwei Road, Dongli Economic and Technological Development Zone, Dongli District, Tianjin
  - Patentee before: SHENZHEN KONGSHI INTELLIGENCE SYSTEM Co., Ltd., 8J, Block B, Konka R&D Building, 28 Keji South 12th Road, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000