CN110929711B - Method for automatically associating identity information and shape information applied to fixed scene


Info

Publication number
CN110929711B
Authority
CN
China
Prior art keywords
identity
information
scene
people
person
Prior art date
Legal status
Active
Application number
CN201911121202.0A
Other languages
Chinese (zh)
Other versions
CN110929711A (en)
Inventor
徐鑫
徐晓刚
丁超辉
张华新
Current Assignee
Hangzhou Yunqi Smart Vision Technology Co ltd
Original Assignee
Smart Vision Hangzhou Technology Development Co ltd
Priority date
Filing date
Publication date
Application filed by Smart Vision Hangzhou Technology Development Co ltd
Priority to CN201911121202.0A
Publication of CN110929711A
Application granted
Publication of CN110929711B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/10: Image acquisition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for automatically associating identity information and body-shape information in a fixed scene, comprising the following steps: 1) scene description: fix the scene, place a camera in the scene, and place a person-ID verification device (an all-in-one ID-card verification terminal) at the entrance; 2) identity verification: the person being collected holds an ID card and swipes it on the verification device for checking; 3) identity acquisition: once the check of step 2) passes, the verification device reads the identity information on the ID card, and the database-building module calls an interface to acquire it; 4) body information collection; 5) feature extraction and information association. The invention effectively ensures that the recorded body data are real and valid; at the same time, multi-angle body features can be obtained, forming more comprehensive body-feature information and providing reliable, comprehensive base data for subsequent retrieval of a person by body shape.

Description

Method for automatically associating identity information and shape information applied to fixed scene
Technical Field
The invention belongs to the field of artificial intelligence, and particularly relates to a method for automatically associating identity information and shape information in a fixed scene.
Background
Face recognition is highly accurate but has low availability. At night, in rain and other adverse weather, or when faces are masked or made up, face recognition often fails, and its accuracy cannot be brought to bear.
Searching for people in massive video archives is difficult. As the Sky Eye and Sharp Eyes surveillance projects advance, camera deployment and coverage grow ever wider and generate massive video files; how to process this massive video is a pressing problem. In particular, locating and tracking people in massive video is constrained by environmental factors, which compounds the difficulty.
With the development of deep learning in artificial intelligence, people can be searched for and locked onto through their body-shape information; body features are an important supplement to facial features.
At the present stage, however, the application of body-shape recognition is severely limited by the serious shortage of human body-shape data: existing body-shape data sets are few and generally small in scale, and the lack of identity information prevents effective application. The main cause is that there is no convenient way to collect a person's body-shape information and quickly associate it with that person's identity. A method is therefore needed that quickly and conveniently associates collected body shapes with personal identities.
Disclosure of Invention
To remedy the defects and shortcomings of the prior art, the invention provides a method that collects body-shape data with a complete acquisition setup built in a fixed scene; a person-ID verification device solves identity acquisition while guaranteeing the true correspondence between an identity and a person, and automatic association of identity with body shape is achieved by extending a time window around the verification event, solving the difficulty of establishing that correspondence. The method solves identity acquisition, guarantees the accuracy of the acquired identity information, and effectively ensures that the recorded body data are real and valid; at the same time, by capturing multi-angle video of the human body, the system obtains body features from multiple angles, forming more comprehensive body-feature information and providing reliable, comprehensive base data for subsequent retrieval of a person by body shape. The method is applied to the automatic association of identity information and body-shape information in a fixed scene.
The technical scheme of the invention is as follows: a method for automatically associating identity information and body-shape information in a fixed scene comprises the following steps:
1) scene description
Fix a scene, place a camera in the scene, and place a person-ID verification device at the entrance;
2) identity verification
The person being collected holds an ID card and swipes it on the verification device, the main purpose being to confirm that the card holder and the person shown on the card are the same; if they are, verification passes and the next step begins, otherwise the process ends here;
3) identity acquisition
When the identity check of step 2) passes, the verification device reads the identity information on the ID card: the database-building module calls an interface to acquire the identity information read by the verification device;
4) body information collection
After verification passes, the person walks into the collection area; the person being collected only needs to walk toward the exit at a normal pace while the camera collects body information. Taking the moment the identity information collected by the verification device is acquired as the start, the database-building module runs general moving-target detection on the video until no moving target is detected, or until the verification device returns identity data again, which is taken as the cut-off;
the database-building system takes the moment the moving target reaches the horizontal middle of the picture as the time origin and extends 10 seconds forward and backward as the effective body video; if less than 10 seconds of footage exists before or after the moving target reaches the horizontal middle of the picture, all of it is used as the effective video;
5) feature extraction and information association, wherein feature extraction means the acquisition system performs target detection on each video segment, anchoring the position of each person with anchor boxes on the current frame.
Preferably, a single camera A is used in the scene of step 1); the scene measures 6 m × 8 m × 3.5 m, and the camera is placed at the top of the central axis in the width direction.
Preferably, three cameras A, B and C are used in the scene of step 1); the scene measures 6 m × 8 m × 3.5 m, cameras A and B are located near the two corners of the exit, and camera C is placed at the top of the central axis in the width direction to collect multi-angle body video of the person.
Preferably, after verification passes in step 4), the person walks into the collection area; the person being collected only needs to walk toward the exit at a normal pace, and appears in the cameras' video pictures at this time;
taking the moment the identity information collected by the verification device is acquired as the start, the database-building module runs general moving-target detection on the video until no moving target is detected, or until the verification device returns identity data again, which is taken as the cut-off;
the database-building system takes the moment the moving target reaches the horizontal middle of camera C's picture as the time origin and extends 10 seconds forward and backward to obtain the video from cameras A, B and C as the effective body video; if less than 10 seconds of footage exists before or after that moment, all of it is used as the effective video.
Preferably, the target detection in step 5) has two basic steps:
a. for a single frame image, a one-stage neural-network algorithm is used: the image is input to the network, the output features are decoded into target positions and classes, and targets with large overlapping areas are filtered by an NMS process to obtain the final detection positions;
the method includes the steps that for a single picture, resize is 300x300 picture size, the picture is sent to a basebox which is VGG16 to perform convolution operation, after the convolution operation is performed on a plurality of layers, characteristics are extracted through ExtrafreatureLyaer to form 6 groups of tensors which are 1x512x38x38, 1x1024x19x19, 1x512x10x10, 1x256x5x5, 1x256x3x3 and 1x256x1x1 respectively, the 6 groups of tensors are spliced after the convolution operation is performed on the 6 groups of tensors respectively to obtain tensor and prediction tensor confidence coefficient of a prediction position, then softmax operation is performed on the prediction confidence coefficient to obtain 1x8732x4 tensor for final position prediction and 1x8732x21 for prediction of a final classification result, a process of decoding a target mainly depends on a preset Prior box with dimension 8732x4, and a priori position is represented as a box (a priori position is represented by a fixed box (a fixed box)cx,dcy,dw,dh) The position of the corresponding real bounding box is b ═ b (b)cx,bcy,bw,bh) Decoding to obtain the original position formula bcx=dwlcx+dcx,bcy=dylcy+dcy,bw=dwexp(lw),bh=dhexp(lh) Pre-prediction is performed on the previous 1x8732x21Sequencing the classification scores in a descending manner, carrying out NMS operation to filter redundant candidate frames, and determining which of 8732 boxes are to be used as a prediction result;
b. for inter-frame information, the last-layer features of the previous frame are propagated to the corresponding layer of the current frame's network features, followed by an average-pooling operation that fuses the information of the two frames; the average pooling can be written as:
F = (1/N) Σ_{n=1}^{N} o^(n)
where N is the number of feature maps, o^(n) is the n-th feature map, and F is the feature after average pooling;
then multi-dimensional human-body features are extracted through a convolutional neural network: training on a data set yields a network structure capable of extracting target features; a target image is input to the network, which outputs a 2048-dimensional feature vector, and this vector is the target's body feature.
Preferably, the feature extraction in step 5) adopts an improved person re-ID approach; the base network uses a ResNet50 structure, and the loss function is a triplet loss:
L=max(d(a,p)-d(a,n)+margin,0)
where L is the loss; a is the anchor sample, p a positive sample, and n a negative sample; d(a,p) is the distance between the anchor and the positive sample, d(a,n) the distance between the anchor and the negative sample, and margin is a boundary value; overall, the formula drives the distance between samples of the same target to a minimum and the distance between samples of different targets to a maximum.
Preferably, the information association in step 5) refers to associating the body-shape information acquired in step 4) with the identity information acquired in step 2) and the person features acquired in step 5), combining them to generate one person body-shape library record.
According to the invention, body-shape data are collected with a complete acquisition setup built in a fixed scene; the person-ID verification device solves identity acquisition and guarantees the true correspondence between an identity and a person, and automatic association of identity with body shape is achieved by extending a time window, solving the difficulty of establishing that correspondence. The method guarantees the accuracy of the acquired identity information and effectively ensures that the recorded body data are real and valid; at the same time, capturing multi-angle video of the human body lets the system obtain body features from multiple angles, forming more comprehensive body-feature information and providing reliable, comprehensive base data for subsequent retrieval of a person by body shape.
Drawings
FIG. 1 is a schematic view of the single-camera scene of the present invention;
FIG. 2 is a schematic view of the three-camera scene of the present invention;
FIG. 3 is a schematic diagram of target detection on a single frame image according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the attached drawings, but the present invention is not limited thereto.
A method for automatically associating identity information and body-shape information in a fixed scene comprises the following steps:
1-1 Scene description (single camera A)
Fix a scene, with a recommended size of 6 m × 8 m × 3.5 m, and place the person-ID verification device at the entrance.
The camera angle is as shown in FIG. 1: the camera is placed at the top of the central axis in the width direction of the scene.
1-2 Scene description (three cameras A, B, C)
Fix a scene, with a recommended size of 6 m × 8 m × 3.5 m, and place the person-ID verification device at the entrance.
The camera angles are as shown in FIG. 2: cameras A and B are located near the two corners of the scene exit, and camera C is placed at the top of the central axis in the width direction to collect multi-angle body video of the person.
2 Identity verification
The person being collected holds an ID card and swipes it on the person-ID verification device; the main purpose is to confirm that the card holder and the person shown on the card are the same. If they are, verification passes and the next step begins. Otherwise, the process ends here.
3 Identity acquisition
When the identity check of step 2 passes, the verification device reads the identity information on the ID card: name, sex, ethnicity, ID number, certificate photo and so on; the database-building module calls an interface to acquire the identity information read by the device.
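As an illustration only (the patent does not specify the device interface), the following Python sketch shows how a database-building module might poll a verification device over a hypothetical HTTP endpoint; the URL and field names are assumptions:

```python
import time

import requests  # assumed transport; the patent does not specify the interface

DEVICE_URL = "http://192.168.1.50/api/last_verified"  # hypothetical endpoint


def poll_identity(poll_interval=0.5):
    """Poll the person-ID verification device until it reports a passed check.

    Returns a dict such as {"name": ..., "id_number": ..., "photo": ...,
    "verified_at": ...}; the field names are illustrative assumptions.
    """
    while True:
        record = requests.get(DEVICE_URL, timeout=2).json()
        if record.get("verified"):   # card holder matched the card photo
            return record            # this event starts body-video capture
        time.sleep(poll_interval)
```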
4-1 Body information acquisition (single camera)
After verification passes, the person walks into the collection area; the person being collected only needs to walk toward the exit at a normal pace while the camera collects body information. Taking the moment the identity information collected by the verification device is acquired as the start, the database-building module runs general moving-target detection on the video until no moving target is detected, or until the verification device returns identity data again, which is taken as the ending time.
The database-building system takes the moment the moving target reaches the horizontal middle of the picture as the time origin and extends 10 seconds forward and backward as the effective body video. If less than 10 seconds of footage exists before or after, all of it is used as the effective video.
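A minimal sketch of the effective-video windowing just described, assuming timestamps in seconds; the function and parameter names are illustrative:

```python
def effective_clip(mid_cross_t, motion_start_t, motion_end_t, window=10.0):
    """Compute the effective body-video interval of step 4-1.

    mid_cross_t:   moment the person reaches the horizontal middle of the frame
    motion_start_t, motion_end_t: span over which a moving target was detected
    Returns (clip_start, clip_end): +/- `window` seconds around the middle
    crossing, clamped so that appearances shorter than 10 s are kept in full.
    """
    clip_start = max(mid_cross_t - window, motion_start_t)
    clip_end = min(mid_cross_t + window, motion_end_t)
    return clip_start, clip_end


# e.g. the person crosses the middle at t=42 s, motion detected over [38, 60]:
# effective_clip(42, 38, 60) -> (38, 52)
```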
4-2 Body information acquisition (three cameras)
After verification passes, the person walks into the collection area; the person being collected only needs to walk toward the exit at a normal pace, and appears in the cameras' video pictures at this time.
Taking the moment the identity information collected by the verification device is acquired as the start, the database-building module runs general moving-target detection on the video until no moving target is detected, or until the verification device returns identity data again, which is taken as the ending time.
The database-building system takes the moment the moving target reaches the horizontal middle of camera C's picture as the time origin and extends 10 seconds forward and backward to obtain the video from cameras A, B and C as the effective body video. If less than 10 seconds of footage exists before or after, all of it is used as the effective video.
5 Feature extraction and information association
5-1 The acquisition system performs target detection on each video segment, anchoring the position of each person with anchor boxes on the current frame. Target detection has two basic steps:
1) For a single frame image, a one-stage neural-network algorithm is used: the image is input to the network, the output features are decoded into target positions and classes, and targets with large overlapping areas are filtered by an NMS process to obtain the final detection positions;
Implementation details of the one-stage detection algorithm: a single picture is resized to 300x300 and sent to the backbone (VGG16 in FIG. 3) for convolution. After several convolutional layers, extra feature layers extract features to form 6 groups of tensors: 1x512x38x38, 1x1024x19x19, 1x512x10x10, 1x256x5x5, 1x256x3x3 and 1x256x1x1. Each group is convolved and the results are concatenated to obtain a location tensor and a confidence tensor; after a softmax on the confidences, the 1x8732x4 tensor is the basis for the final position prediction and the 1x8732x21 tensor (21 being the total number of detection classes) the basis for the final classification result. Decoding the targets relies on 8732 preset prior boxes of dimension 8732x4: a prior-box position is written d = (d_cx, d_cy, d_w, d_h) and the corresponding real bounding box b = (b_cx, b_cy, b_w, b_h); the predicted offsets l are decoded back to the original positions by b_cx = d_w·l_cx + d_cx, b_cy = d_h·l_cy + d_cy, b_w = d_w·exp(l_w), b_h = d_h·exp(l_h). The 1x8732x21 classification scores are then sorted in descending order and an NMS operation filters redundant candidate boxes, determining which of the 8732 boxes are used as the prediction result (a decoding sketch is given after step 2) below).
2) For inter-frame information, the last-layer features of the previous frame are propagated to the corresponding layer of the current frame's network features, followed by an average-pooling operation that fuses the information of the two frames; the average pooling can be written as:
F = (1/N) Σ_{n=1}^{N} o^(n)
where N is the number of feature maps, o^(n) is the n-th feature map, and F is the feature after average pooling.
The feature extraction adopts an improved person re-ID approach: the base network uses a ResNet50 structure, and the loss function is a triplet loss:
L=max(d(a,p)-d(a,n)+margin,0)
where L is the loss; a is the anchor sample, p a positive sample, and n a negative sample; d(a,p) is the distance between the anchor and the positive sample, d(a,n) the distance between the anchor and the negative sample, and margin is a boundary value; overall, the formula drives the distance between samples of the same target to a minimum and the distance between samples of different targets to a maximum.
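A sketch of this triplet loss in PyTorch; the margin value here is illustrative, as the patent does not fix one:

```python
import torch
import torch.nn.functional as F


def triplet_loss(anchor, positive, negative, margin=0.3):
    """L = max(d(a, p) - d(a, n) + margin, 0) with Euclidean distances.

    anchor, positive, negative: (batch, 2048) feature vectors, where positive
    shares the anchor's identity and negative does not.
    """
    d_ap = F.pairwise_distance(anchor, positive)  # same-identity distance
    d_an = F.pairwise_distance(anchor, negative)  # cross-identity distance
    return torch.clamp(d_ap - d_an + margin, min=0).mean()
```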
5-2 Multi-dimensional human-body features are extracted through a convolutional neural network. Training on a data set yields a network structure capable of extracting target features; after a target image is input to the network, the network outputs a 2048-dimensional feature vector, and this vector is the target's body feature.
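A sketch of extracting the 2048-dimensional body feature with a ResNet50 backbone; ImageNet-pretrained weights stand in for the patent's trained re-ID network, so the vectors here are only illustrative:

```python
import torch
import torchvision

backbone = torchvision.models.resnet50(weights="IMAGENET1K_V1")
backbone.fc = torch.nn.Identity()  # drop the classifier, keep the 2048-D pool
backbone.eval()


def body_feature(person_crop):
    """person_crop: (3, H, W) normalized tensor of one detected person."""
    with torch.no_grad():
        return backbone(person_crop.unsqueeze(0)).squeeze(0)  # shape (2048,)
```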
The system associates the effective videos acquired in step 4 with the identity information acquired in steps 2 and 3 and the body features extracted in step 5, combining them to generate a person body-shape library record.
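What one combined record might look like; the schema and field names are assumptions, since the patent only states that identity, videos and features are merged into a record:

```python
from dataclasses import dataclass, field


@dataclass
class BodyRecord:
    """One illustrative person body-shape library record."""
    id_number: str
    name: str
    clip_paths: list = field(default_factory=list)  # effective body videos
    features: list = field(default_factory=list)    # one 2048-D vector per clip


def build_record(identity, clips, feats):
    """identity: dict from the verification device; clips/feats from steps 4-5."""
    return BodyRecord(id_number=identity["id_number"], name=identity["name"],
                      clip_paths=clips, features=feats)
```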

Claims (7)

1. A method for automatically associating identity information and body-shape information in a fixed scene, characterized by comprising the following steps:
1) scene description
fixing a scene, placing a camera in the scene, and placing a person-ID verification device at the entrance;
2) identity verification
the person being collected holds an ID card and swipes it on the person-ID verification device, the main purpose being to confirm that the card holder and the person shown on the card are the same; if they are, verification passes and the next step begins, otherwise the process ends here;
3) identity acquisition
when the identity check of step 2) passes, the verification device reads the identity information on the ID card: the database-building module calls an interface to acquire the identity information read by the verification device;
4) body information collection
after verification passes, the person walks into the collection area; the person being collected only needs to walk toward the exit at a normal pace while the camera collects body information; taking the moment the identity information collected by the verification device is acquired as the start, the database-building module runs general moving-target detection on the video until no moving target is detected, or until the verification device returns identity data again, which is taken as the ending time;
the database-building system takes the moment the moving target reaches the horizontal middle of the picture as the time origin and extends 10 seconds forward and backward as the effective body video; if less than 10 seconds of footage exists before or after the moving target reaches the horizontal middle of the picture, all of it is used as the effective video;
5) feature extraction and information association, wherein feature extraction means the acquisition system performs target detection on each video segment, anchoring the position of each person with anchor boxes on the current frame.
2. The method for automatically associating identity information and body-shape information in a fixed scene according to claim 1, wherein: a single camera A is used in the scene of step 1); the scene measures 6 m × 8 m × 3.5 m, and the camera is placed at the top of the central axis in the width direction.
3. The method for automatically associating identity information and body-shape information in a fixed scene according to claim 1, wherein: three cameras A, B and C are used in the scene of step 1); the scene measures 6 m × 8 m × 3.5 m, cameras A and B are located near the two corners of the exit, and camera C is placed at the top of the central axis in the width direction to collect multi-angle body video of the person.
4. The method for automatically associating identity information and body-shape information in a fixed scene according to claim 3, wherein: after verification passes in step 4), the person walks into the collection area; the person being collected only needs to walk toward the exit at a normal pace, and appears in the cameras' video pictures at this time;
taking the moment the identity information collected by the verification device is acquired as the start, the database-building module runs general moving-target detection on the video until no moving target is detected, or until the verification device returns identity data again, which is taken as the cut-off;
the database-building system takes the moment the moving target reaches the horizontal middle of camera C's picture as the time origin and extends 10 seconds forward and backward to obtain the video from cameras A, B and C as the effective body video; if less than 10 seconds of footage exists before or after that moment, all of it is used as the effective video.
5. The method for automatically associating identity information and body-shape information in a fixed scene according to claim 1, wherein: the target detection in step 5) has two basic steps:
a. for a single frame image, a one-stage neural-network algorithm is used: the image is input to the network, the output features are decoded into target positions and classes, and targets with large overlapping areas are filtered by an NMS process to obtain the final detection positions;
implementation details of the one-stage detection algorithm: a single picture is resized to 300x300 and sent to the backbone, a VGG16, for convolution; after several convolutional layers, extra feature layers extract features to form 6 groups of tensors: 1x512x38x38, 1x1024x19x19, 1x512x10x10, 1x256x5x5, 1x256x3x3 and 1x256x1x1; each group is convolved and the results are concatenated to obtain a location tensor and a confidence tensor, and after a softmax on the confidences, a 1x8732x4 tensor is used for the final position prediction and a 1x8732x21 tensor for the final classification result; decoding the targets relies on 8732 preset prior boxes of dimension 8732x4, where a prior-box position is written d = (d_cx, d_cy, d_w, d_h) and the corresponding real bounding box b = (b_cx, b_cy, b_w, b_h), and the predicted offsets l are decoded back to the original positions by b_cx = d_w·l_cx + d_cx, b_cy = d_h·l_cy + d_cy, b_w = d_w·exp(l_w), b_h = d_h·exp(l_h); the 1x8732x21 classification scores are then sorted in descending order and an NMS operation filters redundant candidate boxes, determining which of the 8732 boxes are used as the prediction result;
b. for inter-frame information, the last-layer features of the previous frame are propagated to the corresponding layer of the current frame's network features, followed by an average-pooling operation that fuses the information of the two frames; the average pooling can be written as:
F = (1/N) Σ_{n=1}^{N} o^(n)
where N is the number of feature maps, o^(n) is the n-th feature map, and F is the feature after average pooling;
then multi-dimensional human-body features are extracted through a convolutional neural network: training on a data set yields a network structure capable of extracting target features; a target image is input to the network, which outputs a 2048-dimensional feature vector, and this vector is the target's body feature.
6. The method for automatically associating identity information and body-shape information in a fixed scene according to claim 5, wherein: in step 5), the feature extraction adopts an improved person re-ID approach; the base network uses a ResNet50 structure, and the loss function is a triplet loss:
L=max(d(a,p)-d(a,n)+margin,0)
where L is the loss; a is the anchor sample, p a positive sample, and n a negative sample; d(a,p) is the distance between the anchor and the positive sample, d(a,n) the distance between the anchor and the negative sample, and margin is a boundary value; overall, the formula drives the distance between samples of the same target to a minimum and the distance between samples of different targets to a maximum.
7. The method for automatically associating identity information and body-shape information in a fixed scene according to claim 6, wherein: the information association in step 5) refers to associating the body-shape information acquired in step 4) with the identity information acquired in step 2) and the person features acquired in step 5), combining them to generate one person body-shape library record.
CN201911121202.0A 2019-11-15 2019-11-15 Method for automatically associating identity information and shape information applied to fixed scene Active CN110929711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911121202.0A CN110929711B (en) 2019-11-15 2019-11-15 Method for automatically associating identity information and shape information applied to fixed scene


Publications (2)

Publication Number Publication Date
CN110929711A CN110929711A (en) 2020-03-27
CN110929711B 2022-05-31

Family

ID=69853122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911121202.0A Active CN110929711B (en) 2019-11-15 2019-11-15 Method for automatically associating identity information and shape information applied to fixed scene

Country Status (1)

Country Link
CN (1) CN110929711B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112101100B (en) * 2020-08-06 2024-03-15 鞍山极致创新科技有限公司 Biological specimen self-help collection system and collection method
CN114783037B (en) * 2022-06-17 2022-11-22 浙江大华技术股份有限公司 Object re-recognition method, object re-recognition apparatus, and computer-readable storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991390A (en) * 2017-03-30 2017-07-28 电子科技大学 A kind of hand-held testimony of a witness Compare System and method based on deep learning
CN107680218B (en) * 2017-09-26 2021-05-11 成都优易数据有限公司 Security inspection method and system based on multi-biometric feature recognition and instant license technology
CN110022552A (en) * 2018-01-08 2019-07-16 中国移动通信有限公司研究院 User identification module method for writing data, equipment, platform and storage medium
CN110033293B (en) * 2018-01-12 2023-05-26 阿里巴巴集团控股有限公司 Method, device and system for acquiring user information
CN108319930B (en) * 2018-03-09 2021-04-06 百度在线网络技术(北京)有限公司 Identity authentication method, system, terminal and computer readable storage medium
CN108710868B (en) * 2018-06-05 2020-09-04 中国石油大学(华东) Human body key point detection system and method based on complex scene
CN108960114A (en) * 2018-06-27 2018-12-07 腾讯科技(深圳)有限公司 Human body recognition method and device, computer readable storage medium and electronic equipment
CN108765678A (en) * 2018-08-24 2018-11-06 安徽时旭智能科技有限公司 A kind of testimony of a witness veritification device and method of face and identity card combination
CN110378092B (en) * 2019-07-26 2020-12-04 北京积加科技有限公司 Identity recognition system, client, server and method

Also Published As

Publication number Publication date
CN110929711A (en) 2020-03-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221129

Address after: 310000 Room 401, building 2, No.16, Zhuantang science and technology economic block, Xihu District, Hangzhou City, Zhejiang Province

Patentee after: Hangzhou yunqi smart Vision Technology Co.,Ltd.

Address before: 310000 room 279, building 6, No. 16, Zhuantang science and technology economic block, Zhuantang street, Xihu District, Hangzhou City, Zhejiang Province

Patentee before: Smart vision (Hangzhou) Technology Development Co.,Ltd.