CN111460977A - Cross-vision person re-identification method, device, terminal and storage medium - Google Patents
- Publication number
- CN111460977A (application CN202010237294.5A)
- Authority
- CN
- China
- Prior art keywords
- monitored
- target
- identification
- image
- comparison
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Abstract
The application provides a cross-view person re-identification method, apparatus, terminal, and storage medium.
Description
Technical Field
The present application relates to the field of video surveillance technologies, and in particular to a cross-view person re-identification method, apparatus, terminal, and storage medium.
Background
Cross-view Person Re-Identification (ReID) is the process of re-identifying persons and establishing correspondences between pedestrian images captured by different cameras whose fields of view do not overlap. When camera coverage does not overlap, the lack of continuous tracking information greatly increases the difficulty of the search.
As image recognition technology has matured, cross-view person re-identification has been applied ever more widely in video surveillance; however, its recognition error rate remains high in real, complex monitoring environments.
Disclosure of Invention
The application provides a cross-view person re-identification method, device, terminal, and storage medium to solve the technical problem that existing cross-view person re-identification techniques have a high recognition error rate.
A first aspect of the application provides a cross-view person re-identification method, comprising the following steps:
acquiring longitude and latitude coordinates of a target to be monitored, and determining a moving path of the target to be monitored according to the longitude and latitude coordinates;
determining a monitoring area where the target to be monitored is located currently and a previous monitoring area on the way according to the longitude and latitude coordinates and the moving path, and acquiring an identification image and a comparison image of the target to be monitored, wherein the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
respectively extracting the features of the target to be monitored in the identification image and the comparison image through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector;
and comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain a cross-vision re-identification result of the target to be monitored.
Optionally, the method further comprises:
and acquiring a worker characteristic sample data set, and inputting the worker characteristic sample data set into a preset initial deep learning model for training to obtain a characteristic identification model.
Optionally, the method further comprises:
and determining the positions of the target to be monitored in the identification image and the comparison image according to a preset mapping relation between each monitoring lens coordinate and the longitude and latitude coordinate, wherein the monitoring lens coordinate is a coordinate value of a monitoring camera in the monitoring area under a lens coordinate system.
Optionally, the comparing the similarity between the feature vector to be identified and the comparison feature vector to obtain the cross-visual-area re-identification result of the target to be monitored specifically includes:
and comparing the similarity of the feature vector to be identified and the comparison feature vector in a cosine similarity comparison mode to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain a cross-vision re-identification result of the target to be monitored.
A second aspect of the present application provides a cross-vision person re-identification device comprising:
the longitude and latitude coordinate processing unit is used for acquiring longitude and latitude coordinates of the target to be monitored and determining a moving path of the target to be monitored according to the longitude and latitude coordinates;
a monitoring image obtaining unit, configured to determine, according to the longitude and latitude coordinates and the moving path, a current monitoring area where the target to be monitored is located and a previous monitoring area on which the target passes, and obtain an identification image and a comparison image of the target to be monitored, where the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
the feature extraction unit is used for respectively extracting features of the target to be monitored in the identification image and the comparison image through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector;
and the feature comparison unit is used for comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain a cross-vision re-identification result of the target to be monitored.
Optionally, the method further comprises:
and the characteristic identification model construction unit is used for acquiring a worker characteristic sample data set, inputting the worker characteristic sample data set into a preset initial deep learning model for training to obtain a characteristic identification model.
Optionally, the method further comprises:
and the coordinate conversion unit is used for determining the positions of the target to be monitored in the identification image and the comparison image according to the preset mapping relation between each monitoring lens coordinate and the longitude and latitude coordinate, wherein the monitoring lens coordinate is a coordinate value under a lens coordinate system of a monitoring camera in the monitoring area.
Optionally, the feature comparison unit is specifically configured to:
and comparing the similarity of the feature vector to be identified and the comparison feature vector in a cosine similarity comparison mode to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain a cross-vision re-identification result of the target to be monitored.
A third aspect of the present application provides a terminal, comprising: a memory and a processor;
the memory is configured to store program code corresponding to the cross-field person re-identification method of the first aspect of the present application;
the processor is configured to execute the program code.
A fourth aspect of the present application provides a storage medium having stored therein program code corresponding to the cross-visual-area person re-identification method according to the first aspect of the present application.
According to the technical scheme, the embodiment of the application has the following advantages:
the application provides a cross-vision field personnel re-identification method, which comprises the following steps: acquiring longitude and latitude coordinates of a target to be monitored, and determining a moving path of the target to be monitored according to the longitude and latitude coordinates; determining a monitoring area where the target to be monitored is located currently and a previous monitoring area on the way according to the longitude and latitude coordinates and the moving path, and acquiring an identification image and a comparison image of the target to be monitored, wherein the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area; respectively extracting the features of the target to be monitored in the identification image and the comparison image through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector; and comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain a cross-vision re-identification result of the target to be monitored.
Based on the real-time longitude and latitude coordinates of the target to be monitored, its historical moving path, and the correspondence between those coordinates and the monitoring areas, the method and device identify the target to be monitored in different monitoring images. This solves the technical problem that prior-art re-identification based purely on image recognition is easily affected by environmental factors and therefore has a high recognition error rate.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic flow chart diagram illustrating a first embodiment of a cross-vision person re-identification method provided herein;
FIG. 2 is a schematic flow chart diagram illustrating a second embodiment of a cross-vision person re-identification method provided herein;
fig. 3 is a schematic structural diagram of a first embodiment of a cross-vision person re-identification device provided by the present application.
Detailed Description
In recent years, video image recognition based on deep learning has advanced greatly, promoting the application of video surveillance technology across many industries. In practical applications, however, drastic appearance-changing factors, such as illumination changes, viewpoint differences between cameras, occlusion and blur, similar clothing, and varying walking postures, greatly reduce the image similarity of the same person across fields of view. As a result, existing cross-view person re-identification still has a high error rate when facing real, complex monitoring environments.
The embodiments of the application provide a cross-view person re-identification method, device, terminal, and storage medium to solve the technical problem that existing cross-view person re-identification technology has a high recognition error rate.
To make the objects, features, and advantages of the present application more apparent and understandable, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. The embodiments described below are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
Referring to fig. 1, a first embodiment of the present application provides a cross-view person re-identification method, including:
Step 101, acquiring the longitude and latitude coordinates of a target to be monitored, and determining the moving path of the target to be monitored according to the longitude and latitude coordinates.
It should be noted that the longitude and latitude coordinates and the moving path of the target to be monitored are obtained by the satellite positioning device mounted on the target to be monitored.
Step 102, determining, according to the longitude and latitude coordinates and the moving path, the monitoring area where the target to be monitored is currently located and the previous monitoring area it passed through, and acquiring an identification image and a comparison image of the target to be monitored.
The identification image is the monitoring image of the target to be monitored in the current monitoring area, and the comparison image is the monitoring image of the target to be monitored in the previous monitoring area.
It should be noted that, according to the longitude and latitude coordinates, the monitoring area where the target to be monitored is currently located is determined, the identification image of the target to be monitored is acquired, and according to the moving path, the previous monitoring area before the target to be monitored enters the monitoring area where the target to be monitored is currently located is determined, and the comparison image of the target to be monitored is acquired.
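The patent does not detail how a latitude/longitude fix is matched to a monitoring area; a minimal sketch, assuming each monitoring area is registered as a latitude/longitude bounding box (the area names and coordinates below are purely illustrative), could look like:

```python
# Hypothetical sketch: locating the current and previous monitoring areas
# from the target's latitude/longitude track. Area names and bounding
# boxes are illustrative assumptions, not part of the patent.

# Each monitoring area registered as (min_lat, min_lon, max_lat, max_lon).
AREAS = {
    "area_A": (23.10, 113.20, 23.12, 113.22),
    "area_B": (23.12, 113.20, 23.14, 113.22),
}

def locate_area(lat, lon):
    """Return the name of the monitoring area containing (lat, lon), or None."""
    for name, (lat0, lon0, lat1, lon1) in AREAS.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return name
    return None

def current_and_previous(path):
    """Given a chronological list of (lat, lon) fixes, return the current
    monitoring area and the distinct area traversed immediately before it."""
    visited = []
    for lat, lon in path:
        area = locate_area(lat, lon)
        if area is not None and (not visited or visited[-1] != area):
            visited.append(area)
    current = visited[-1] if visited else None
    previous = visited[-2] if len(visited) > 1 else None
    return current, previous
```

The identification image would then be fetched from the camera covering the current area and the comparison image from the camera covering the previous one.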
And 103, respectively extracting the features of the target to be monitored in the identification image and the comparison image through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector.
It should be noted that, through a pre-trained feature recognition model, image feature extraction is respectively performed on the target to be monitored in the recognition image and the comparison image, so as to obtain a feature vector to be recognized and a comparison feature vector.
Step 104, comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain the cross-view re-identification result of the target to be monitored.
It should be noted that, next, the feature similarity comparison is performed on the feature vector to be identified and the comparison feature vector obtained in step 103, so as to obtain a cross-view re-identification result of the target to be monitored through the comparison result.
In this embodiment, the target to be monitored is identified in different monitoring images based on its real-time longitude and latitude coordinates, its historical moving path, and the correspondence between those coordinates and the monitoring areas. This solves the technical problem that prior-art re-identification based purely on image recognition is easily affected by environmental factors and therefore has a high recognition error rate.
The above is a detailed description of the first embodiment of the cross-view person re-identification method provided by the present application; a second embodiment of the method is described in detail below.
Referring to fig. 2, a second embodiment of the present application provides a cross-vision person re-identification method, including:
Step 200, acquiring a worker feature sample data set, and inputting the worker feature sample data set into a preset initial deep learning model for training to obtain a feature recognition model.
It should be noted that, to construct the worker data set, dome cameras in multiple non-overlapping regions capture images of substation workers from different angles and in multiple postures. Image enhancement operations such as horizontal flipping and the addition of random noise are then applied, the images are uniformly scaled to the same size, for example 128 × 256 pixels, and finally each image is manually labeled, with every worker assigned a unique category number.
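As a rough illustration of the augmentation steps just described (horizontal flip, random noise, uniform scaling to 128 × 256 pixels), the following NumPy sketch uses a simplified nearest-neighbour resize; the function names and parameters are assumptions, and a production pipeline would use a proper image library:

```python
import numpy as np

# Illustrative sketch of the data-set augmentation steps described above.
# The nearest-neighbour resize is a simplification of the uniform scaling.

def horizontal_flip(img):
    """Mirror an H x W x C image left-to-right."""
    return img[:, ::-1, :]

def add_random_noise(img, sigma=5.0, seed=0):
    """Add Gaussian noise and clip back to the valid 8-bit range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def resize_nearest(img, out_h=256, out_w=128):
    """Nearest-neighbour resize to the uniform training scale (H=256, W=128)."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[rows][:, cols]

def preprocess(img, label):
    """Return (augmented image, category label) pairs for one raw capture."""
    base = resize_nearest(img)
    return [(base, label),
            (horizontal_flip(base), label),
            (add_random_noise(base), label)]
```

Each worker's unique category number is carried along as the label for every augmented variant of the original capture.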
And then, building an SE-ResNet50 deep learning network, and performing model training by using the built worker data set to obtain a feature recognition model.
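The patent names SE-ResNet50 without further detail. As background, the Squeeze-and-Excitation (SE) block that distinguishes SE-ResNet-50 from plain ResNet-50 can be sketched in NumPy as follows; the weights here are random placeholders rather than trained parameters:

```python
import numpy as np

# Minimal NumPy sketch of a Squeeze-and-Excitation (SE) block, the
# component that turns ResNet-50 into SE-ResNet-50. Weight matrices are
# random placeholders; a real model would learn them during training.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feature_map, w1, w2):
    """Recalibrate a C x H x W feature map channel-wise.

    Squeeze: global average pooling to a C-vector.
    Excite:  two fully connected layers (reduce then restore C) + sigmoid.
    Scale:   multiply each channel of the input by its excitation weight.
    """
    squeeze = feature_map.mean(axis=(1, 2))               # (C,)
    excite = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,), values in (0, 1)
    return feature_map * excite[:, None, None]

rng = np.random.default_rng(0)
C, r = 64, 16                      # channels and reduction ratio
w1 = rng.normal(size=(C // r, C))  # C -> C/r
w2 = rng.normal(size=(C, C // r))  # C/r -> C
x = rng.normal(size=(C, 8, 4))
y = se_block(x, w1, w2)
```

Because the excitation weights lie in (0, 1), the block can only attenuate channels, letting the network emphasise the most discriminative ones for re-identification.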
Step 201, acquiring the longitude and latitude coordinates of the target to be monitored, and determining the moving path of the target to be monitored according to the longitude and latitude coordinates.
It should be noted that the longitude and latitude coordinates and the moving path of the target to be monitored are obtained by the satellite positioning device carried by the target to be monitored.
Step 202, determining, according to the longitude and latitude coordinates and the moving path, the monitoring area where the target to be monitored is currently located and the previous monitoring area it passed through, and acquiring an identification image and a comparison image of the target to be monitored.
The identification image is the monitoring image of the target to be monitored in the current monitoring area, and the comparison image is the monitoring image of the target to be monitored in the previous monitoring area.
It should be noted that, according to the longitude and latitude coordinates, the monitoring area where the target to be monitored is currently located is determined, the identification image of the target to be monitored is acquired, and according to the moving path, the previous monitoring area before the target to be monitored enters the monitoring area where the target to be monitored is currently located is determined, and the comparison image of the target to be monitored is acquired.
Step 203, determining the positions of the target to be monitored in the identification image and the comparison image according to the preset mapping relation between the coordinates of each monitoring lens and the longitude and latitude coordinates.
And the monitoring lens coordinate is a coordinate value under a lens coordinate system of the monitoring camera in the monitoring area.
It should be noted that, according to the preset mapping relationship between each monitoring lens coordinate and the longitude and latitude coordinate, the longitude and latitude coordinate is converted into a coordinate value in the monitoring image, and the position of the target to be monitored in the identification image and the comparison image is determined.
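The patent does not specify the form of the lens-coordinate mapping; one common choice for a fixed camera viewing a roughly planar scene is a pre-calibrated planar homography, sketched below with an entirely illustrative calibration matrix:

```python
import numpy as np

# Hypothetical sketch: projecting a longitude/latitude fix into a camera's
# lens (pixel) coordinate system via a pre-calibrated planar homography H.
# H would come from offline calibration of each monitoring camera; the
# matrix below is an illustrative scale-and-offset stand-in.

def lonlat_to_pixel(H, lon, lat):
    """Apply the 3x3 homography H to (lon, lat) and dehomogenise."""
    x, y, w = H @ np.array([lon, lat, 1.0])
    return x / w, y / w

# Scale-and-offset homography as a stand-in for real calibration data.
H = np.array([[100.0,    0.0, -11300.0],
              [  0.0, -100.0,   2320.0],
              [  0.0,    0.0,      1.0]])

u, v = lonlat_to_pixel(H, 113.21, 23.11)
```

The resulting (u, v) pixel position localises the target inside the identification or comparison image before feature extraction.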
Step 204, extracting features of the target to be monitored from the identification image and the comparison image respectively, through the preset feature recognition model, to obtain a feature vector to be identified and a comparison feature vector.
It should be noted that, through a pre-trained feature recognition model, image feature extraction is respectively performed on the target to be monitored in the recognition image and the comparison image, so as to obtain a feature vector to be recognized and a comparison feature vector.
Step 205, comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain the cross-view re-identification result of the target to be monitored.
It should be noted that, next, the feature similarity comparison is performed on the feature vector to be identified and the comparison feature vector obtained in step 204, so as to obtain a cross-view re-identification result of the target to be monitored through the comparison result.
Cosine similarity is used to compute the similarity of the two feature vectors; when the maximum similarity score exceeds the threshold of 0.8, the two images are considered to show the same worker.
Let the feature vectors of the worker in two different monitoring areas be A and B respectively, each of dimensionality d. The cosine similarity of the two feature vectors is calculated as:
sim(A, B) = (A · B) / (‖A‖ ‖B‖) = Σ_{i=1}^{d} A_i B_i / ( √(Σ_{i=1}^{d} A_i²) · √(Σ_{i=1}^{d} B_i²) )
where d is the dimensionality of the feature vectors A and B.
The above is a detailed description of the second embodiment of the cross-view person re-identification method provided by the present application; a first embodiment of the cross-view person re-identification device is described in detail below.
Referring to fig. 3, a third embodiment of the present application provides a cross-vision person re-identification device, comprising:
the latitude and longitude coordinate processing unit 301 is configured to acquire latitude and longitude coordinates of the target to be monitored, and determine a moving path of the target to be monitored according to the latitude and longitude coordinates;
a monitoring image obtaining unit 302, configured to determine, according to the longitude and latitude coordinates and the moving path, a current monitoring area where the target to be monitored is located and a previous monitoring area on the way, and obtain an identification image and a comparison image of the target to be monitored, where the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
the feature extraction unit 303 is configured to perform feature extraction on the target to be monitored in the identification image and the comparison image respectively through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector;
the feature comparison unit 304 is configured to compare the similarity between the feature vector to be identified and the comparison feature vector to obtain a cross-view re-identification result of the target to be monitored.
Optionally, the method further comprises:
the feature recognition model construction unit 300 is configured to obtain a worker feature sample data set, input the worker feature sample data set to a preset initial deep learning model, and train the worker feature sample data set to obtain a feature recognition model.
Optionally, the method further comprises:
and a coordinate conversion unit 305, configured to determine positions of the target to be monitored in the identification image and the comparison image according to a mapping relationship between preset coordinates of each monitoring lens and longitude and latitude coordinates, where the coordinates of the monitoring lens are coordinate values in a lens coordinate system of a monitoring camera in the monitoring area.
Optionally, the feature comparing unit 304 is specifically configured to:
and comparing the similarity of the characteristic vector to be identified and the comparison characteristic vector in a cosine similarity comparison mode to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain a cross-vision re-identification result of the target to be monitored.
The above is a detailed description of a first embodiment of a cross-vision person re-identification apparatus provided by the present application, and the following is a detailed description of embodiments of a terminal and a storage medium provided by the present application.
A fourth embodiment of the present application provides a terminal, including: a memory and a processor;
the memory is used for storing program codes corresponding to the cross-vision person re-identification method in the first embodiment and the second embodiment of the application;
the processor is configured to execute the program code.
A fifth embodiment of the present application provides a storage medium having stored therein program codes corresponding to the cross-visual-area person re-identification method described in the first and second embodiments of the present application.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (10)
1. A cross-vision person re-identification method, comprising:
acquiring longitude and latitude coordinates of a target to be monitored, and determining a moving path of the target to be monitored according to the longitude and latitude coordinates;
determining a monitoring area where the target to be monitored is located currently and a previous monitoring area on the way according to the longitude and latitude coordinates and the moving path, and acquiring an identification image and a comparison image of the target to be monitored, wherein the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previous monitoring area;
respectively extracting the features of the target to be monitored in the identification image and the comparison image through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector;
and comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain a cross-vision re-identification result of the target to be monitored.
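The first two steps of claim 1 — collapsing longitude/latitude fixes into a moving path and picking the current and previously traversed monitoring areas — can be sketched as follows. This is a minimal illustration, not the patented implementation: the `GpsFix` type, the `area_id` labels, and the function names are all assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class GpsFix:
    area_id: str                  # monitoring area this fix falls in (illustrative label)
    lonlat: Tuple[float, float]   # (longitude, latitude) of the target

def movement_path(fixes: List[GpsFix]) -> List[str]:
    """Collapse consecutive GPS fixes of the target into an ordered
    list of monitoring areas, i.e. the moving path of claim 1."""
    path: List[str] = []
    for fix in fixes:
        if not path or path[-1] != fix.area_id:
            path.append(fix.area_id)
    return path

def current_and_previous_area(path: List[str]) -> Tuple[Optional[str], Optional[str]]:
    """Return the current monitoring area and the previously traversed
    one; the identification image and the comparison image are taken
    from these two areas respectively."""
    current = path[-1] if path else None
    previous = path[-2] if len(path) >= 2 else None
    return current, previous
```

Given fixes crossing areas A, B, C in order, `movement_path` yields `["A", "B", "C"]` and `current_and_previous_area` selects C as the identification area and B as the comparison area.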
2. The cross-vision person re-identification method of claim 1, further comprising:
and acquiring a personnel feature sample dataset, and inputting the personnel feature sample dataset into a preset initial deep learning model for training to obtain the feature identification model.
3. The cross-vision person re-identification method of claim 1, further comprising:
and determining the positions of the target to be monitored in the identification image and the comparison image according to a preset mapping relation between each monitoring lens coordinate and the longitude and latitude coordinates, wherein the monitoring lens coordinates are coordinate values in the lens coordinate system of the monitoring camera in the monitoring area.
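The mapping in claim 3 between longitude/latitude coordinates and each camera's lens coordinate system could, under a planar-ground assumption, be realized as a per-camera homography. The sketch below is an assumption, not the patent's preset mapping: the 3x3 matrix `h` would have to be calibrated for each monitoring camera, and its values here are illustrative only.

```python
def latlon_to_pixel(h, lon, lat):
    """Project a (longitude, latitude) ground point into a camera's
    image plane via a calibrated 3x3 homography `h` (list of rows).
    Stands in for the patent's preset lens-coordinate mapping."""
    x = h[0][0] * lon + h[0][1] * lat + h[0][2]
    y = h[1][0] * lon + h[1][1] * lat + h[1][2]
    w = h[2][0] * lon + h[2][1] * lat + h[2][2]
    # Perspective divide from homogeneous to pixel coordinates
    return (x / w, y / w)
```

With such a mapping per camera, the target's GPS fix locates it inside both the identification image and the comparison image.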
4. The method according to claim 1, wherein the comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain the cross-vision re-identification result of the target to be monitored specifically comprises:
and comparing the similarity of the feature vector to be identified and the comparison feature vector by means of cosine similarity to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain the cross-vision re-identification result of the target to be monitored.
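The cosine-similarity comparison of claim 4 can be sketched directly. The 0.8 threshold below is an illustrative value: the claim only requires a preset threshold and does not fix its magnitude.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def cross_view_match(query_vec, gallery_vec, threshold=0.8):
    """Score the feature vector to be identified against the comparison
    feature vector and decide the re-identification result by comparing
    the similarity score with a preset threshold (0.8 is illustrative)."""
    return cosine_similarity(query_vec, gallery_vec) >= threshold
```

Identical feature vectors score 1.0 and match; orthogonal vectors score 0.0 and fall below any positive threshold.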
5. A cross-vision person re-identification device, comprising:
the longitude and latitude coordinate processing unit is used for acquiring longitude and latitude coordinates of the target to be monitored and determining a moving path of the target to be monitored according to the longitude and latitude coordinates;
a monitoring image obtaining unit, configured to determine, according to the longitude and latitude coordinates and the moving path, the current monitoring area where the target to be monitored is located and a previously traversed monitoring area, and obtain an identification image and a comparison image of the target to be monitored, where the identification image is a monitoring image of the target to be monitored in the current monitoring area, and the comparison image is a monitoring image of the target to be monitored in the previously traversed monitoring area;
the feature extraction unit is used for respectively extracting features of the target to be monitored in the identification image and the comparison image through a preset feature identification model to obtain a feature vector to be identified and a comparison feature vector;
and the feature comparison unit is used for comparing the similarity of the feature vector to be identified and the comparison feature vector to obtain a cross-vision re-identification result of the target to be monitored.
6. The cross-vision person re-identification device of claim 5, further comprising:
and the feature identification model construction unit is used for acquiring a personnel feature sample dataset and inputting the personnel feature sample dataset into a preset initial deep learning model for training to obtain the feature identification model.
7. The cross-vision person re-identification device of claim 5, further comprising:
and the coordinate conversion unit is used for determining the positions of the target to be monitored in the identification image and the comparison image according to the preset mapping relation between each monitoring lens coordinate and the longitude and latitude coordinates, wherein the monitoring lens coordinates are coordinate values in the lens coordinate system of the monitoring camera in the monitoring area.
8. The device of claim 5, wherein the feature comparison unit is specifically configured to:
and comparing the similarity of the feature vector to be identified and the comparison feature vector by means of cosine similarity to obtain a similarity score, and comparing the similarity score with a preset similarity threshold to obtain the cross-vision re-identification result of the target to be monitored.
9. A terminal, comprising: a memory and a processor;
the memory is configured to store program code corresponding to the cross-vision person re-identification method of any one of claims 1 to 4;
the processor is configured to execute the program code.
10. A storage medium having stored therein program code corresponding to the cross-vision person re-identification method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010237294.5A CN111460977B (en) | 2020-03-30 | 2020-03-30 | Cross-view personnel re-identification method, device, terminal and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111460977A true CN111460977A (en) | 2020-07-28 |
CN111460977B CN111460977B (en) | 2024-02-20 |
Family
ID=71685067
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010237294.5A Active CN111460977B (en) | 2020-03-30 | 2020-03-30 | Cross-view personnel re-identification method, device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111460977B (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160034988A1 (en) * | 2014-07-31 | 2016-02-04 | Internet Connectivity Group, Inc. | Merchandising communication and inventorying system |
WO2018223955A1 (en) * | 2017-06-09 | 2018-12-13 | 北京深瞐科技有限公司 | Target monitoring method, target monitoring device, camera and computer readable medium |
CN109409250A (en) * | 2018-10-08 | 2019-03-01 | 高新兴科技集团股份有限公司 | A kind of across the video camera pedestrian of no overlap ken recognition methods again based on deep learning |
CN110147471A (en) * | 2019-04-04 | 2019-08-20 | 平安科技(深圳)有限公司 | Trace tracking method, device, computer equipment and storage medium based on video |
CN110674746A (en) * | 2019-09-24 | 2020-01-10 | 视云融聚(广州)科技有限公司 | Method and device for realizing high-precision cross-mirror tracking by using video spatial relationship assistance, computer equipment and storage medium |
2020-03-30: application CN202010237294.5A filed; granted as patent CN111460977B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN111460977B (en) | 2024-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108509859B (en) | Non-overlapping area pedestrian tracking method based on deep neural network | |
CN106709449B (en) | Pedestrian re-identification method and system based on deep learning and reinforcement learning | |
CN109598743B (en) | Pedestrian target tracking method, device and equipment | |
CN110427905A (en) | Pedestrian tracting method, device and terminal | |
CN108416258B (en) | Multi-human body tracking method based on human body part model | |
Bedagkar-Gala et al. | Multiple person re-identification using part based spatio-temporal color appearance model | |
CN111079600A (en) | Pedestrian identification method and system with multiple cameras | |
CN110458025B (en) | Target identification and positioning method based on binocular camera | |
CN112215155A (en) | Face tracking method and system based on multi-feature fusion | |
CN111160243A (en) | Passenger flow volume statistical method and related product | |
CN111626194A (en) | Pedestrian multi-target tracking method using depth correlation measurement | |
CN113160276B (en) | Target tracking method, target tracking device and computer readable storage medium | |
CN113111844A (en) | Operation posture evaluation method and device, local terminal and readable storage medium | |
CN112132157B (en) | Gait face fusion recognition method based on raspberry pie | |
CN111263955A (en) | Method and device for determining movement track of target object | |
CN111291612A (en) | Pedestrian re-identification method and device based on multi-person multi-camera tracking | |
CN110175553B (en) | Method and device for establishing feature library based on gait recognition and face recognition | |
CN109146913B (en) | Face tracking method and device | |
CN110825916A (en) | Person searching method based on body shape recognition technology | |
CN114581990A (en) | Intelligent running test method and device | |
CN111950507B (en) | Data processing and model training method, device, equipment and medium | |
CN106934339B (en) | Target tracking and tracking target identification feature extraction method and device | |
CN112766065A (en) | Mobile terminal examinee identity authentication method, device, terminal and storage medium | |
CN115761470A (en) | Method and system for tracking motion trail in swimming scene | |
CN114092956A (en) | Store passenger flow statistical method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||