CN111476816A - Intelligent efficient simultaneous recognition method for multiple objects - Google Patents
- Publication number
- CN111476816A (application CN201910930816.7A)
- Authority
- CN (China)
- Prior art keywords
- image, point set, feature point, coordinate, feature
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/194—Segmentation; edge detection involving foreground-background segmentation
- G06T7/215—Motion-based segmentation
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06V10/26—Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region; detection of occlusion
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
- G06V10/56—Extraction of image or video features relating to colour
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06T2207/10024—Color image
- G06T2207/30241—Trajectory
- G06V2201/07—Target detection
Abstract
Compared with existing identification techniques, the method marks the target image and the environment image independently after performing the steps described above. No whole-frame comparison over the entire picture is required; only the position change between the target image and the separated reference points is computed, which saves memory, reduces CPU load, and yields higher recognition efficiency.
Description
Technical Field
The invention relates to the field of object identification, and in particular to an intelligent, efficient method for the simultaneous identification of multiple objects.
Background
At present, with the increasingly severe global security situation, the lives of urban residents are seriously threatened by wanted suspects, escaped prisoners, stolen vehicles, and the like. These special categories of targets need to be monitored, and their movement tracks need to be followed and predicted. This requires identifying such objects and locating them on a city map so that the police can apprehend them.
Existing identification techniques first learn a model and then compare several frames of data against that model to decide whether an object has moved. Their drawbacks are that the data volume of even a single model is large, the data volume of a multi-object model is several times that of a single model, and CPU resources are consumed heavily. Those skilled in the art therefore need an intelligent, efficient method for the simultaneous recognition of multiple objects to solve the above problems in the background art.
Disclosure of Invention
The invention aims to provide an intelligent, efficient method for the simultaneous identification of multiple objects, so as to solve the problems identified in the background art.
To achieve this aim, the invention provides the following technical scheme:
an intelligent efficient simultaneous recognition method for multiple objects comprises the following steps:
1) obtaining a fixed-viewpoint picture from monitoring equipment, obtaining an environment image, and forming an index list of all coordinate information;
2) inputting several groups of comparison images, marking the approximate outline of the target object, and separating the outline from the whole picture;
3) extracting the target contour information, extracting feature points from the obtained image, and matching those feature points against the index list to obtain the feature point set C contained within the target contour;
4) forming a background reference coordinate set B for the moving path;
5) taking a screenshot after movement, normalizing the pixel coordinates of the image to be recognized to obtain its pixel coordinate matrix, and comparing the feature point set C with the coordinate information in the image to obtain the position of C in the picture;
6) obtaining the moving path of the feature point set C.
As a further scheme of the invention: in step 1), the gray image generated from the color image and the surface normal vectors generated from the depth image are used together as multi-modal information, and coordinate information features are extracted from the color image, the gray image, and the surface normal vectors respectively by a convolutional-recurrent neural network, yielding an aggregate feature of all coordinate information in the image as the reference feature.
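As an illustration of the multi-modal preprocessing in step 1), the sketch below derives a grey image from the colour frame and approximate surface normals from the depth image. It is a minimal sketch only: the convolutional-recurrent network that consumes these channels is not specified by the patent and is omitted here, and the function names are illustrative.

```python
import numpy as np

def to_gray(rgb):
    # Grey image derived from the colour frame (standard luma weights).
    return rgb.astype(float) @ np.array([0.299, 0.587, 0.114])

def surface_normals(depth):
    # Approximate per-pixel surface normals from depth-image gradients:
    # n is proportional to (-dz/dx, -dz/dy, 1), normalised to unit length.
    dzdy, dzdx = np.gradient(depth.astype(float))
    n = np.dstack([-dzdx, -dzdy, np.ones_like(dzdx)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)
```

The three resulting channels (colour, grey, normals) would then be fed to whatever feature extractor the implementation chooses.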
As a still further scheme of the invention: in step 2), image extraction is performed using an improved background difference method based on a Gaussian mixture model together with an OHTA-based color segmentation and extraction technique.
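The colour side of step 2) can be sketched as follows. The I1/I2/I3 values are the standard Ohta colour features; the Gaussian-mixture background model itself is not reproduced here, and for self-containment the sketch substitutes a single static reference frame (a production system would maintain a per-pixel mixture model, e.g. OpenCV's MOG2). All names are illustrative.

```python
import numpy as np

def ohta_features(rgb):
    # Ohta's I1/I2/I3 colour features, commonly used for colour segmentation:
    # I1 = (R+G+B)/3, I2 = (R-B)/2, I3 = (2G-R-B)/4.
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    return np.dstack([(r + g + b) / 3.0, (r - b) / 2.0, (2.0 * g - r - b) / 4.0])

def foreground_mask(frame, background, thresh=20.0):
    # Background difference in OHTA space against one reference frame;
    # a pixel is foreground if any of the three channels changed enough.
    diff = np.abs(ohta_features(frame) - ohta_features(background))
    return diff.max(axis=2) > thresh
```

The resulting mask approximates the target outline, which is then separated from the whole picture.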
As a still further scheme of the invention: in step 6), for the feature point set C, the N positioning feature points closest to C are found in the coordinate set B, and the position change between C and these positioning points is compared.
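A minimal sketch of this nearest-neighbour step, assuming B is stored as an N x 2 array of pixel coordinates: the offset of a target point from the centroid of its nearest positioning points, tracked over frames, gives the position change the scheme compares. Function names are illustrative.

```python
import numpy as np

def nearest_anchors(point, b_set, n=3):
    # Indices of the n background reference points in B closest to `point`.
    dists = np.linalg.norm(b_set - point, axis=1)
    return np.argsort(dists)[:n]

def relative_position(point, b_set, n=3):
    # Offset of the target point from the centroid of its n nearest
    # positioning points; the change of this offset between frames is
    # the position change the scheme evaluates.
    idx = nearest_anchors(point, b_set, n)
    return point - b_set[idx].mean(axis=0)
```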
As a still further scheme of the invention: in step 5), 1-2 frames of screenshot data are retained.
As a still further scheme of the invention: in the step 4), the coordinate calibration is performed on the whole picture from which the target object is separated, so as to form a background reference coordinate set B of the moving path.
Compared with the prior art, the beneficial effects of the invention are as follows:
compared with the original identification techniques, the target image and the environment image are marked independently; no whole-frame comparison of the entire picture is needed, and only the position change between the target image and the separated reference points is computed, saving memory, reducing CPU load, and achieving higher recognition efficiency.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to specific embodiments.
An intelligent efficient simultaneous recognition method for multiple objects comprises the following steps:
1) obtaining a fixed-viewpoint picture from monitoring equipment and obtaining an environment image; using the gray image generated from the color image and the surface normal vectors generated from the depth image together as multi-modal information; extracting coordinate information features from the color image, the gray image, and the surface normal vectors respectively by a convolutional-recurrent neural network, yielding an aggregate feature of all coordinate information in the image as the reference feature; and forming an index list of all coordinate information;
2) inputting several groups of comparison images, marking the approximate outline of the target object using an improved background difference method based on a Gaussian mixture model together with an OHTA-based color segmentation and extraction technique, and separating the outline from the whole picture;
3) extracting the target contour information, extracting feature points from the obtained image, and matching those feature points against the index list to obtain the feature point set C contained within the target contour;
4) performing coordinate calibration on the whole picture from which the target object has been separated, to form a background reference coordinate set B for the moving path;
5) retaining a screenshot of the picture after movement (1-2 frames of screenshot data are kept), normalizing the pixel coordinates of the image to be recognized to obtain its pixel coordinate matrix, and comparing the feature point set C with the coordinate information in the image to obtain the position of C in the picture;
6) for the feature point set C, finding the N positioning feature points closest to C in the coordinate set B, and obtaining the moving path of C from the position change between C and these positioning points.
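The coordinate-matching part of step 5) can be sketched as follows, assuming the index list is held as an array of normalized coordinates; the tolerance parameter and function names are illustrative, not from the patent.

```python
import numpy as np

def normalize_coords(points, width, height):
    # Normalise pixel coordinates to [0, 1] so positions are comparable
    # across screenshots taken from the same fixed-view camera.
    return np.asarray(points, dtype=float) / np.array([width, height], dtype=float)

def locate(c_set, image_points, tol=0.01):
    # For each normalised feature point in C, the index of the nearest
    # coordinate in the image's coordinate list, or -1 if none is close.
    out = []
    for p in c_set:
        d = np.linalg.norm(image_points - p, axis=1)
        i = int(np.argmin(d))
        out.append(i if d[i] <= tol else -1)
    return out
```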
The working principle of the invention is as follows:
the picture has a fixed viewing angle and a learned environment; 1-2 frames of data are retained, and each newly arriving frame is compared with the previous data. The continuously moving objects marked in the picture may be one or several. The advantage is that, compared with the prior art, memory is saved, CPU resources are saved, and recognition efficiency is higher.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.
Claims (6)
1. An intelligent efficient simultaneous recognition method for multiple objects is characterized by comprising the following steps:
1) obtaining a picture with a fixed visual angle through monitoring equipment, obtaining an environment image, and forming an index list of all coordinate information;
2) inputting a plurality of groups of comparison images, marking the approximate outline of the target object, and separating the approximate outline from the whole picture;
3) extracting the target contour information, extracting feature points from the obtained image, and matching those feature points against the index list to obtain the feature point set C contained within the target contour;
4) forming a background reference coordinate set B of the moving path;
5) taking a screenshot after movement, normalizing the pixel coordinates of the image to be recognized to obtain its pixel coordinate matrix, and comparing the feature point set C with the coordinate information in the image to obtain the position of C in the picture;
6) and obtaining the moving path of the feature point set C.
2. The method for efficiently and simultaneously identifying multiple intelligent objects according to claim 1, wherein in step 1), the gray image generated from the color image and the surface normal vector generated from the depth image are used together as multi-data mode information, and coordinate information features in the color image, the gray image and the surface normal vector are respectively extracted through a convolution-recursive neural network to obtain a total feature of all coordinate information in the image as a reference feature.
3. The method as claimed in claim 1, wherein in step 2), the image extraction is performed using an improved background difference method based on a Gaussian mixture model and an OHTA-based color segmentation extraction technique.
4. The method as claimed in claim 1, wherein in step 6), for the feature point set C, the N positioning feature points closest to C are found in the coordinate set B, and the position change between the feature point set C and these positioning feature points is compared.
5. The method for efficiently and simultaneously recognizing multiple intelligent objects according to claim 1, wherein in the step 5), 1-2 frames of data are retained in the screenshot.
6. The method for efficiently and simultaneously identifying multiple intelligent objects according to claim 1, wherein in the step 4), the coordinates of the whole picture from which the target object is separated are calibrated to form a background reference coordinate set B of the moving path.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910930816.7A CN111476816A (en) | 2019-09-29 | 2019-09-29 | Intelligent efficient simultaneous recognition method for multiple objects |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111476816A | 2020-07-31 |
Family
ID=71744970
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910930816.7A Pending CN111476816A (en) | 2019-09-29 | 2019-09-29 | Intelligent efficient simultaneous recognition method for multiple objects |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111476816A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060083421A1 (en) * | 2004-10-14 | 2006-04-20 | Wu Weiguo | Image processing apparatus and method |
CN102831401A (en) * | 2012-08-03 | 2012-12-19 | 樊晓东 | Method and system for tracking, three-dimensionally superposing and interacting target object without special mark |
CN104463108A (en) * | 2014-11-21 | 2015-03-25 | 山东大学 | Monocular real-time target recognition and pose measurement method |
CN106826815A (en) * | 2016-12-21 | 2017-06-13 | 江苏物联网研究发展中心 | Target object method of the identification with positioning based on coloured image and depth image |
CN107944459A (en) * | 2017-12-09 | 2018-04-20 | 天津大学 | A kind of RGB D object identification methods |
CN108074234A (en) * | 2017-12-22 | 2018-05-25 | 湖南源信光电科技股份有限公司 | A kind of large space flame detecting method based on target following and multiple features fusion |
CN108229458A (en) * | 2017-12-22 | 2018-06-29 | 湖南源信光电科技股份有限公司 | A kind of intelligent flame recognition methods based on motion detection and multi-feature extraction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 2020-07-31 |