CN111476816A - Intelligent efficient simultaneous recognition method for multiple objects - Google Patents

Intelligent efficient simultaneous recognition method for multiple objects

Info

Publication number
CN111476816A
Authority
CN
China
Prior art keywords
image
point set
feature point
coordinate
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910930816.7A
Other languages
Chinese (zh)
Inventor
罗顺发
赵肖彬
崔伟勋
罗嗣达
张辉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jabsco Electronic Technology Co ltd
Original Assignee
Jabsco Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jabsco Electronic Technology Co ltd filed Critical Jabsco Electronic Technology Co ltd
Priority to CN201910930816.7A
Publication of CN111476816A
Legal status: Pending

Classifications

    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/194: Segmentation; Edge detection involving foreground-background segmentation
    • G06T 7/215: Motion-based segmentation
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06T 2207/10024: Color image
    • G06T 2207/30241: Trajectory
    • G06V 2201/07: Target detection

Abstract

Compared with existing identification technology, after the steps of the method are carried out the target image and the environment image are marked independently; no whole-picture comparison of the entire frame is required, and only the position change between the target image and the separated target points is calculated. This saves memory, saves CPU resources, and makes identification more efficient.

Description

Intelligent efficient simultaneous recognition method for multiple objects
Technical Field
The invention relates to the field of object identification, in particular to an intelligent efficient simultaneous identification method for multiple objects.
Background
At present, with the increasingly severe global security situation, wanted criminals, escaped prisoners, stolen vehicles and the like seriously threaten the lives of urban residents. These special types of targets need to be monitored, and their moving tracks need to be tracked and predicted. This requires identifying such objects and locating them on a city map so that the police can deal with them.
Existing identification technology learns a model and then compares several frames of data with the model to judge whether an object has moved and to identify it. Its drawbacks are that the data volume of a single model is large, the data volume of a multi-object model is several times that of a single model, and CPU resources are consumed heavily. Therefore, those skilled in the art provide an intelligent, efficient method for simultaneously recognizing multiple objects to solve the above problems in the background art.
Disclosure of Invention
The invention aims to provide an intelligent, efficient method for simultaneously identifying multiple objects, so as to solve the problems raised in the background art.
To achieve this purpose, the invention provides the following technical solution:
an intelligent efficient simultaneous recognition method for multiple objects comprises the following steps:
1) obtaining a picture with a fixed viewing angle through monitoring equipment, obtaining an environment image, and forming an index list of all coordinate information;
2) inputting several groups of comparison images, marking the approximate outline of the target object, and separating the approximate outline from the whole picture;
3) extracting target contour information, extracting feature points from the obtained image, and collecting from the index list the feature points that fall within the target contour, so as to obtain a feature point set C;
4) forming a background reference coordinate set B for the moving path;
5) according to the screenshot of the moved picture, normalizing the pixel coordinates of the image to be recognized to obtain its pixel coordinate matrix, and comparing the feature point set C with the coordinate information in the image to obtain the position of the feature point set C within the picture;
6) obtaining the moving path of the feature point set C.
As a further scheme of the invention: in step 1), the gray image generated from the color image and the surface normal vectors generated from the depth image are used together as multi-modal data; coordinate information features are extracted from the color image, the gray image and the surface normal vectors respectively by a convolutional-recurrent neural network, and the overall feature of all coordinate information in the image is obtained as the reference feature.
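The patent does not detail the convolution-recurrent network itself, so the following is only a minimal sketch of preparing the three modalities it consumes, assuming OpenCV and NumPy, a BGR colour frame, and a dense depth map; the gradient-based normal estimate and the function name prepare_modalities are illustrative choices, not part of the invention.

```python
import cv2
import numpy as np

def prepare_modalities(color_bgr, depth):
    """Build the three inputs of step 1): the colour image, a grey image
    derived from it, and a surface-normal map derived from the depth image."""
    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)

    # Normals estimated from depth gradients: n ~ normalize([-dz/dx, -dz/dy, 1])
    depth_f = depth.astype(np.float32)
    dzdx = cv2.Sobel(depth_f, cv2.CV_32F, 1, 0, ksize=3)
    dzdy = cv2.Sobel(depth_f, cv2.CV_32F, 0, 1, ksize=3)
    normals = np.dstack((-dzdx, -dzdy, np.ones_like(dzdx)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True) + 1e-8

    return color_bgr, gray, normals
```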
As a still further scheme of the invention: in step 2), image extraction is performed using an improved background-difference method based on a Gaussian mixture model together with an OHTA-color-space segmentation and extraction technique.
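A minimal sketch of this step, assuming OpenCV 4 and NumPy: the standard Gaussian-mixture subtractor stands in for the "improved background difference method" (the specific improvement is not given in the patent), the OHTA I1/I2/I3 transform follows its usual definition, and the threshold value and morphology kernel are illustrative only.

```python
import cv2
import numpy as np

# Gaussian-mixture background subtractor (OpenCV's MOG2 variant).
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                   detectShadows=False)

def ohta_channels(bgr):
    """OHTA (I1, I2, I3) colour features: I1=(R+G+B)/3, I2=(R-B)/2, I3=(2G-R-B)/4."""
    b, g, r = cv2.split(bgr.astype(np.float32))
    return (r + g + b) / 3.0, (r - b) / 2.0, (2 * g - r - b) / 4.0

def rough_contours(frame_bgr):
    """Foreground mask from the mixture model, refined by a threshold on the
    OHTA I2 channel, then approximate object outlines from the mask."""
    fg = bg_subtractor.apply(frame_bgr)
    _, i2, _ = ohta_channels(frame_bgr)
    colour_mask = (np.abs(i2) > 10).astype(np.uint8) * 255   # illustrative threshold
    mask = cv2.bitwise_and(fg, colour_mask)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours
```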
As a still further scheme of the invention: in step 6), for the feature point set C, the N other positioning feature points closest to it are found in the coordinate set B, and the position change between the feature point set C and these positioning feature points is compared.
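A possible reading of this step, sketched with SciPy's cKDTree (an assumption, since the patent names no data structure); the value of N and the use of the anchors' mean position are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def offsets_to_anchors(point_set_c, coord_set_b, n_neighbours=4):
    """For each point of the target feature set C (a (K, 2) array of pixel
    coordinates), find the N positioning points of the background reference
    set B closest to it and return each point's offset relative to the mean
    of those anchors."""
    tree = cKDTree(coord_set_b)                  # coord_set_b: (M, 2) pixel coords
    _, idx = tree.query(point_set_c, k=n_neighbours)
    anchors = coord_set_b[idx]                   # shape (K, N, 2)
    return point_set_c - anchors.mean(axis=1)    # per-point displacement
```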
As a still further scheme of the invention: in step 5), 1-2 frames of screenshot data are retained.
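One way to keep only 1-2 frames of screenshot data is a bounded deque; this is an implementation assumption rather than something the patent prescribes.

```python
from collections import deque

# Retain only the last two screenshots (step 5), so memory stays bounded
# while frame-to-frame comparison is still possible.
recent_frames = deque(maxlen=2)

def on_new_frame(frame):
    recent_frames.append(frame)
    if len(recent_frames) == 2:
        previous, current = recent_frames
        # ... compare feature point set C against coordinates in `current` ...
```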
As a still further scheme of the invention: in step 4), coordinate calibration is performed on the whole picture from which the target object has been separated, so as to form the background reference coordinate set B of the moving path.
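A sketch of building the background reference coordinate set B, assuming OpenCV; ORB is used here only as a stand-in detector, since the patent does not say how the background positioning points are chosen, and target_mask is a hypothetical foreground mask produced by step 2).

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)

def background_reference_coords(picture_bgr, target_mask):
    """Calibrate coordinates on the picture with the target separated out
    (step 4): detect keypoints outside the target mask and keep their pixel
    coordinates as the background reference set B."""
    gray = cv2.cvtColor(picture_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = orb.detect(gray, mask=cv2.bitwise_not(target_mask))
    return np.array([kp.pt for kp in keypoints], dtype=np.float32)  # (M, 2)
```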
Compared with the prior art, the invention has the beneficial effects that:
compared with the original identification technology, the target image and the environment image are marked independently; the whole picture does not need to be compared and calculated as a whole, and only the position change between the target image and the separated target points needs to be calculated, which saves memory, saves CPU resources and makes identification more efficient.
Detailed Description
The technical solution of the present invention will be described in further detail with reference to specific embodiments.
An intelligent efficient simultaneous recognition method for multiple objects comprises the following steps:
1) obtaining a picture with a fixed viewing angle through monitoring equipment and obtaining an environment image; using the gray image generated from the color image and the surface normal vectors generated from the depth image together as multi-modal data, extracting coordinate information features from the color image, the gray image and the surface normal vectors respectively by a convolutional-recurrent neural network, obtaining the overall feature of all coordinate information in the image as the reference feature, and forming an index list of all coordinate information;
2) inputting several groups of comparison images, marking the approximate outline of the target object using an improved background-difference method based on a Gaussian mixture model together with an OHTA-color-space segmentation and extraction technique, and separating the approximate outline from the whole picture;
3) extracting target contour information, extracting feature points from the obtained image, and collecting from the index list the feature points that fall within the target contour, so as to obtain a feature point set C;
4) performing coordinate calibration on the whole picture from which the target object has been separated, to form a background reference coordinate set B of the moving path;
5) retaining the screenshot of the moved picture, keeping 1-2 frames of screenshot data, normalizing the pixel coordinates of the image to be recognized to obtain its pixel coordinate matrix, and comparing the feature point set C with the coordinate information in the image to obtain the position of the feature point set C within the picture;
6) for the feature point set C, finding in the coordinate set B the N other positioning feature points closest to it, and obtaining the moving path of the feature point set C from the position change between the feature point set C and these positioning feature points, as sketched below.
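A minimal sketch of the coordinate normalisation in step 5) and the path accumulation in step 6), assuming NumPy; summarising the feature point set C by its centroid per frame is an illustrative simplification, not a requirement of the method.

```python
import numpy as np

def normalised_coords(points_xy, width, height):
    """Step 5): scale pixel coordinates into [0, 1] so the feature point set C
    can be compared against the index list regardless of picture size."""
    return np.asarray(points_xy, dtype=np.float32) / np.array(
        [width, height], dtype=np.float32)

def moving_path(positions_of_c_per_frame):
    """Step 6): read the moving path off as the per-frame position of the
    feature point set C, summarised here by its centroid."""
    return [np.asarray(p).mean(axis=0) for p in positions_of_c_per_frame]
```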
The working principle of the invention is as follows:
the picture has a fixed viewing angle and the environment is learned; 1-2 frames of data are retained, newly arriving data are compared with the previous data, and the continuously moving objects marked in the picture may be one or several. The advantages of the method are that, compared with the prior art, it saves memory, saves CPU resources and has higher identification efficiency.
While the preferred embodiments of the present invention have been described in detail, the present invention is not limited to the above embodiments, and various changes can be made without departing from the spirit of the present invention within the knowledge of those skilled in the art.

Claims (6)

1. An intelligent efficient simultaneous recognition method for multiple objects is characterized by comprising the following steps:
1) obtaining a picture with a fixed viewing angle through monitoring equipment, obtaining an environment image, and forming an index list of all coordinate information;
2) inputting several groups of comparison images, marking the approximate outline of the target object, and separating the approximate outline from the whole picture;
3) extracting target contour information, extracting feature points from the obtained image, and collecting from the index list the feature points that fall within the target contour, so as to obtain a feature point set C;
4) forming a background reference coordinate set B for the moving path;
5) according to the screenshot of the moved picture, normalizing the pixel coordinates of the image to be recognized to obtain its pixel coordinate matrix, and comparing the feature point set C with the coordinate information in the image to obtain the position of the feature point set C within the picture;
6) obtaining the moving path of the feature point set C.
2. The intelligent efficient simultaneous recognition method for multiple objects according to claim 1, wherein in step 1), the gray image generated from the color image and the surface normal vectors generated from the depth image are used together as multi-modal data, coordinate information features are extracted from the color image, the gray image and the surface normal vectors respectively by a convolutional-recurrent neural network, and the overall feature of all coordinate information in the image is obtained as the reference feature.
3. The intelligent efficient simultaneous recognition method for multiple objects according to claim 1, wherein in step 2), image extraction is performed using an improved background-difference method based on a Gaussian mixture model together with an OHTA-color-space segmentation and extraction technique.
4. The intelligent efficient simultaneous recognition method for multiple objects according to claim 1, wherein in step 6), for the feature point set C, the N other positioning feature points closest to it are found in the coordinate set B, and the position change between the feature point set C and these positioning feature points is compared.
5. The intelligent efficient simultaneous recognition method for multiple objects according to claim 1, wherein in step 5), 1-2 frames of screenshot data are retained.
6. The intelligent efficient simultaneous recognition method for multiple objects according to claim 1, wherein in step 4), coordinate calibration is performed on the whole picture from which the target object has been separated, to form the background reference coordinate set B of the moving path.
CN201910930816.7A 2019-09-29 2019-09-29 Intelligent efficient simultaneous recognition method for multiple objects Pending CN111476816A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910930816.7A CN111476816A (en) 2019-09-29 2019-09-29 Intelligent efficient simultaneous recognition method for multiple objects

Publications (1)

Publication Number Publication Date
CN111476816A true CN111476816A (en) 2020-07-31

Family

ID=71744970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910930816.7A Pending CN111476816A (en) 2019-09-29 2019-09-29 Intelligent efficient simultaneous recognition method for multiple objects

Country Status (1)

Country Link
CN (1) CN111476816A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060083421A1 (en) * 2004-10-14 2006-04-20 Wu Weiguo Image processing apparatus and method
CN102831401A (en) * 2012-08-03 2012-12-19 樊晓东 Method and system for tracking, three-dimensionally superposing and interacting target object without special mark
CN104463108A (en) * 2014-11-21 2015-03-25 山东大学 Monocular real-time target recognition and pose measurement method
CN106826815A (en) * 2016-12-21 2017-06-13 江苏物联网研究发展中心 Target object method of the identification with positioning based on coloured image and depth image
CN107944459A (en) * 2017-12-09 2018-04-20 天津大学 A kind of RGB D object identification methods
CN108074234A (en) * 2017-12-22 2018-05-25 湖南源信光电科技股份有限公司 A kind of large space flame detecting method based on target following and multiple features fusion
CN108229458A (en) * 2017-12-22 2018-06-29 湖南源信光电科技股份有限公司 A kind of intelligent flame recognition methods based on motion detection and multi-feature extraction

Similar Documents

Publication Publication Date Title
CN107729818B (en) Multi-feature fusion vehicle re-identification method based on deep learning
Zhang et al. Ripple-GAN: Lane line detection with ripple lane line detection network and Wasserstein GAN
WO2018072233A1 (en) Method and system for vehicle tag detection and recognition based on selective search algorithm
CN112766291B (en) Matching method for specific target object in scene image
CN105335702B (en) A kind of bayonet model recognizing method based on statistical learning
Gomez et al. Traffic lights detection and state estimation using hidden markov models
CN106127807A (en) A kind of real-time video multiclass multi-object tracking method
CN104268538A (en) Online visual inspection method for dot matrix sprayed code characters of beverage cans
CN107729843B (en) Low-floor tramcar pedestrian identification method based on radar and visual information fusion
CN109101932B (en) Multi-task and proximity information fusion deep learning method based on target detection
CN107808133A (en) Oil-gas pipeline safety monitoring method, system and software memory based on unmanned plane line walking
CN103020632A (en) Fast recognition method for positioning mark point of mobile robot in indoor environment
CN105160340A (en) Vehicle brand identification system and method
CN110619279A (en) Road traffic sign instance segmentation method based on tracking
Alvarez et al. Hierarchical camera auto-calibration for traffic surveillance systems
CN103903282A (en) Target tracking method based on LabVIEW
CN104978567A (en) Vehicle detection method based on scenario classification
CN111008574A (en) Key person track analysis method based on body shape recognition technology
CN105354533A (en) Bag-of-word model based vehicle type identification method for unlicensed vehicle at gate
CN111915583A (en) Vehicle and pedestrian detection method based on vehicle-mounted thermal infrared imager in complex scene
CN113077494A (en) Road surface obstacle intelligent recognition equipment based on vehicle orbit
CN104200226B (en) Particle filter method for tracking target based on machine learning
Liu et al. Real-time traffic light recognition based on smartphone platforms
Moizumi et al. Traffic light detection considering color saturation using in-vehicle stereo camera
CN111476816A (en) Intelligent efficient simultaneous recognition method for multiple objects

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 20200731)