CN113095289A - Massive image preprocessing network method based on urban complex scene - Google Patents

Massive image preprocessing network method based on urban complex scene

Info

Publication number
CN113095289A
CN113095289A (application CN202110487070.4A)
Authority
CN
China
Prior art keywords
network
image
personnel
face
preprocessing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110487070.4A
Other languages
Chinese (zh)
Inventor
董毅 (Dong Yi)
余缘超 (Yu Yuanchao)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Dianzheng Information Technology Co ltd
Original Assignee
Chongqing Dianzheng Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Dianzheng Information Technology Co ltd filed Critical Chongqing Dianzheng Information Technology Co ltd
Publication of CN113095289A
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a massive-image preprocessing network method for complex urban scenes. A mass-image filtering and preprocessing network system is built from three image-quality evaluation networks: an SSIM network that evaluates image sharpness, a 3DDFA face-reconstruction network that estimates the face rotation angle, and an MTCNN network that estimates the face occlusion rate. The three networks are combined into one preprocessing system and can be freely selected by different users according to the actual application scene, so that the extraction of personnel feature information during detection and recognition is improved in a targeted and effective way and the accuracy of the detection function is enhanced. By analyzing the data produced by the personnel behavior detection module, an alarm is raised promptly when a person's dwell time in a specific area exceeds a threshold, and the person's trajectory is recorded.

Description

Massive image preprocessing network method based on urban complex scene
Technical Field
The invention discloses a preprocessing network method for massive images captured in complex urban scenes, and belongs to the fields of face recognition and intelligent security.
Background
At present, in complex urban scenes, every camera continuously produces a mass of images. When this mass of image data is fed directly into the various detection and recognition networks without any specific screening, the accuracy of detection and recognition cannot be improved and many misjudgments result. In other words, the large amount of useless data must be filtered out in advance before the data is passed into the detection and recognition networks if the expected effect is to be achieved. A preprocessing network for filtering massive images in complex urban environments is currently lacking, which is why many networks produce misjudgments in practical applications. The present invention therefore uses a mass-image filtering and preprocessing network system that combines several image-quality evaluation networks with deep learning model frameworks to filter massive images, reducing misjudgments and improving the accuracy of detection and recognition.
Disclosure of Invention
To address the problems that the mass data generated in complex urban scenes contains a large amount of useless data and, when sent directly into a detection and recognition network, leads to low detection precision and inaccurate recognition, the invention provides a massive-image preprocessing network method for complex urban scenes. A mass-image filtering and preprocessing network system is built that uses an SSIM network to evaluate image sharpness, a 3DDFA face-reconstruction network to estimate the face rotation angle, and an MTCNN network to estimate the face occlusion rate. The three image evaluation networks are combined into one preprocessing system, and different users can freely select among them according to the actual application scene, so that the extraction of personnel feature information during detection and recognition is improved in a targeted and effective way and the accuracy of the detection function is enhanced. The technical scheme of the invention is as follows:
the device comprises a data acquisition module, a preprocessing module and a data output processing module.
The data acquisition module is used for acquiring mass data,
the preprocessing module is connected with the data acquisition module and is used for preprocessing the data acquired by the data acquisition module;
the data output processing module is connected with the preprocessing module and is used for identifying the face image processed by the preprocessing module.
The preprocessing module comprises at least one of an SSIM image definition evaluation network, a 3DDFA face rotation angle judgment network and a face shielding rate judgment network, the three judgment networks can be freely matched according to different users of an actual application scene, and if definition preprocessing is needed, the three judgment networks are added into the SSIM network; if the face needs to be rotated by an angle and processed, adding a 3DDFA network; and if the face occlusion rate is required to be preprocessed, adding the MTCNN network. And in the actual process, the three networks can be freely combined according to actual needs.
The SSIM image sharpness evaluation network judges the sharpness of the massive images against the sharpness requirement of the detection and recognition network and, based on the result, filters out useless image data that does not meet it. Each image is Gaussian-blurred and paired with its original, the pair is fed into the SSIM evaluation network, brightness and contrast related to object structure are defined as the structural information of the image, an SSIM sharpness score is computed, and images whose sharpness falls below a threshold are filtered out, so that the accuracy of the downstream face detection and recognition network is improved.
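The blur-and-compare sharpness test above can be sketched as follows. This is an illustrative reading of the scheme, not the patent's actual implementation: the single-window SSIM (instead of the usual 11x11 sliding window), the 3x3 mean filter standing in for the Gaussian blur, and the 0.7 threshold are all assumptions.

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    # Single-window SSIM over the whole image; a production system
    # would typically use the sliding-window variant.
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return num / den

def blur(img):
    # Cheap 3x3 mean filter standing in for the Gaussian processing
    # the patent describes (hypothetical simplification).
    p = np.pad(img, 1, mode="edge")
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(3) for dx in range(3)) / 9.0

def is_sharp(img, threshold=0.7):
    # A sharp image changes a lot under blurring, so its SSIM against
    # the blurred copy is low; a blurry or flat image barely changes,
    # so the SSIM stays near 1.  Keep images below the threshold.
    img = img.astype(float)
    return global_ssim(img, blur(img)) < threshold
```

Images for which `is_sharp` returns False would be dropped before reaching the face detection and recognition network.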
The 3DDFA face rotation angle evaluation network evaluates the face rotation angle in the massive images against the requirement of the recognition network and, based on the result, filters out useless image data that does not meet it. The face keypoints are reconstructed in 3D space, the rotation angle (Euler angles) of the face is computed using trigonometric calculation, and images whose rotation angle exceeds a threshold are filtered out, so that the accuracy of the downstream face detection and recognition network is improved.
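Once the keypoints have been lifted into 3D, the final pose step reduces to converting the fitted rotation into Euler angles and thresholding them. A minimal sketch of that last step, assuming a ZYX (yaw, pitch, roll) convention and a 30-degree cutoff, neither of which the patent specifies:

```python
import numpy as np

def euler_from_rotation(R):
    # Yaw / pitch / roll in degrees from a 3x3 rotation matrix,
    # using the ZYX convention (one of several possible choices).
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    pitch = np.degrees(np.arctan2(-R[2, 0], np.hypot(R[0, 0], R[1, 0])))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    return yaw, pitch, roll

def pose_ok(R, max_angle=30.0):
    # Filter out faces turned too far from frontal; the 30-degree
    # threshold is an assumed value, not taken from the patent.
    return all(abs(a) <= max_angle for a in euler_from_rotation(R))
```

A frontal face (identity rotation) passes, while a face yawed 45 degrees to the side would be filtered out before recognition.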
The MTCNN face occlusion rate evaluation network evaluates the occlusion rate of the massive images against the requirement of the recognition network and, based on the result, filters out useless data that does not meet it. The face keypoints are detected, the sum of the keypoint confidences is computed, and images whose confidence sum falls below a threshold are filtered out, so that the accuracy of the downstream face detection and recognition network is improved.
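The occlusion test reduces to summing per-keypoint confidences and comparing against a threshold. A sketch, where the 68-point layout and the threshold value (68 points at an assumed average confidence of 0.8) are illustrative assumptions:

```python
def occlusion_ok(keypoint_confidences, min_total=54.4):
    # Occluded landmarks come back from the keypoint detector with
    # low confidence, so a low confidence sum signals a covered face.
    # min_total assumes 68 keypoints at an average confidence of 0.8;
    # the patent does not state the actual threshold.
    return sum(keypoint_confidences) >= min_total
```

An unobscured face whose 68 keypoints all score around 0.9 passes; a half-covered face, with half of the keypoints near 0.1, falls below the sum threshold and is filtered out.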
The data output processing module comprises a personnel contour positioning module and a personnel abnormal behavior detection and identification module.
The personnel contour positioning module is connected with the image preprocessing module and separates the image background from the person's key information through a foreground/background separation network, so that background information does not degrade the accuracy of the abnormal-behavior detection network. Foreground and background are separated with an OpenCV-based frame difference method: the changed region between two adjacent frames is detected by differencing two consecutive frames of the image sequence and binarizing the grayscale difference image to extract motion information. The image obtained from the inter-frame change region is then segmented to distinguish the background region from the person's motion region, and the person target to be detected is extracted.
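The frame-difference step above can be sketched in a few lines. This stand-in uses plain NumPy where the patent uses OpenCV, and the difference threshold of 25 gray levels is an assumed value:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, thresh=25):
    # Absolute difference of two consecutive grayscale frames,
    # binarized: 1 marks pixels that changed between frames
    # (candidate person motion), 0 marks static background.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

Connected-component analysis or OpenCV contour extraction would then split the binary mask into the background region and the person motion region described above.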
the personnel abnormal behavior detection and identification module is connected with the personnel contour positioning module, and a CNN detection network structure and an SSD identification network are combined to carry out detection statistics on action tracks and residence time of personnel in a certain specified area; extracting main characteristic information of personnel in the image through a CNN network, respectively inputting the characteristic information of each layer into a feather-map, extracting personnel contour information through the feather-map overlapped by a plurality of characteristic layers, and detecting and counting the detention time and the track of the personnel in a certain specific area; by analyzing the data obtained by the personnel behavior detection module, if the detention time of the personnel in a certain specific area exceeds a threshold value, alarming is timely carried out and the track of the personnel is counted.
Compared with the prior art, the invention has the advantages that:
the SSIM image clarity evaluation network is used for automatically filtering the fuzzy key frames and only keeping clear effective data, so that the extraction of personnel characteristic information is effectively improved, the accuracy of the detection function is enhanced, and the efficiency of implementing the alarm function is improved.
Drawings
FIG. 1 is a block diagram of the present invention.
FIG. 2 is a structure diagram of the SSIM image sharpness evaluation network of the present invention.
FIG. 3 is a structure diagram of the 3DDFA face rotation angle evaluation network of the present invention.
FIG. 4 is a flow chart of the 3DDFA face rotation angle evaluation network calculation in an embodiment of the present invention.
FIG. 5 is a diagram of the MTCNN face occlusion rate evaluation network of the present invention.
FIG. 6 is a flow chart of the MTCNN face occlusion rate evaluation network in an embodiment of the present invention.
FIG. 7 is a diagram of person contour information extraction via the feature map of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. All other embodiments obtained by a person skilled in the art without inventive effort on the basis of these embodiments fall within the scope of the present invention. The specific technical scheme is as follows:
the invention provides a massive image preprocessing network method based on a complex scene of a city, which is used for constructing a massive image filtering preprocessing network system and utilizing an SSIM network for evaluating the image definition; the 3DDFA face reconstruction network judges the face rotation angle; the MTCNN network for judging the human face shielding rate combines the three image information judging networks into a preprocessing system, and the three judging networks can be freely matched according to different users of actual application scenes, so that the extraction of the personnel characteristic information is pertinently and effectively improved during detection and identification, and the accuracy of the detection function is enhanced. The technical scheme of the invention is as follows:
the device comprises a data acquisition module, a preprocessing module and a data output processing module.
A data acquisition module for acquiring mass data,
the preprocessing module is connected with the data acquisition module and is used for preprocessing the data acquired by the data acquisition module;
and the data output processing module and the preprocessing module are used for identifying and detecting the face image processed by the preprocessing module.
The data output processing module comprises a personnel outline positioning module and a personnel abnormal behavior detection and identification module.
This embodiment takes a security system that detects suspicious persons in a residential community as an example: the data acquisition module collects face data of people in the community and sends it to the preprocessing module for processing.
As shown in FIG. 1, the preprocessing module comprises at least one of an SSIM image sharpness evaluation network, a 3DDFA face rotation angle evaluation network and an MTCNN face occlusion rate evaluation network. The three evaluation networks can be freely combined by different users according to the actual application scene: if sharpness preprocessing is needed, the SSIM network is added; if face rotation angle preprocessing is needed, the 3DDFA network is added; and if face occlusion rate preprocessing is needed, the MTCNN network is added. In practice the three networks can be freely combined according to actual needs. In this embodiment, the preprocessing module adopts the 3DDFA face rotation angle evaluation network and the MTCNN face occlusion rate evaluation network.
As shown in FIG. 2, the SSIM image sharpness evaluation network judges the sharpness of the massive images against the sharpness requirement of the detection and recognition network and, based on the result, filters out useless image data that does not meet it. Each image is Gaussian-blurred and paired with its original, the pair is fed into the SSIM evaluation network, brightness and contrast related to object structure are defined as the structural information of the image, an SSIM sharpness score is computed, and images whose sharpness falls below a threshold are filtered out, so that the accuracy of the downstream face detection and recognition network is improved.
As shown in FIG. 3, the 3DDFA face rotation angle evaluation network evaluates the face rotation angle in the massive images against the requirement of the recognition network and, based on the result, filters out useless image data that does not meet it. The face keypoints are reconstructed in 3D space, the rotation angle (Euler angles) of the face is computed using trigonometric calculation, and images whose rotation angle exceeds a threshold are filtered out, so that the accuracy of the downstream face detection and recognition network is improved. As shown in FIG. 4, in this embodiment the system convolves layer by layer, then applies fully connected layers, and finally detects 68 keypoints and computes the rotation angle (Euler angles) of the face.
As shown in FIG. 5, the MTCNN face occlusion rate evaluation network evaluates the occlusion rate of the massive images against the requirement of the recognition network and, based on the result, filters out useless data that does not meet it. The face keypoints are detected, the sum of the keypoint confidences is computed, and images whose confidence sum falls below a threshold are filtered out, so that the accuracy of the downstream face detection and recognition network is improved. As shown in FIG. 6, the system of this embodiment implements MTCNN face reconstruction, 68-keypoint detection and confidence calculation through convolution operations to perform face feature detection.
The personnel contour positioning module is connected with the image preprocessing module and separates the image background from the person's key information through a foreground/background separation network, so that background information does not degrade the accuracy of the behavior detection network. Foreground and background are separated with an OpenCV-based frame difference method: the changed region between two adjacent frames is detected by differencing two consecutive frames of the image sequence and binarizing the grayscale difference image to extract motion information. The image obtained from the inter-frame change region is then segmented to distinguish the background region from the person's motion region, and the person target to be detected is extracted.
The personnel behavior detection module is connected with the personnel contour positioning module and combines a CNN detection network structure with an SSD recognition network to detect and count the motion trajectory and abnormal dwell time of persons in a specified area. As shown in FIG. 7, in this embodiment the system extracts the main feature information of the person in the image through a Darknet-53 network, feeds the feature information of each layer into a feature map, extracts person contour information from the feature map formed by superimposing multiple feature layers, and detects and counts the abnormal dwell time and trajectory of the person in a specific area. By analyzing the data obtained by the personnel behavior detection module, an alarm is raised promptly if a person's dwell time in a specific area exceeds a threshold, and the person's trajectory is recorded. In this embodiment, the security guard can thus find suspicious persons through the system and track them in time.
Compared with the prior art, the invention has the advantages that:
the automatic filtration only keeps clear effective data, effectively improves the extraction of personnel characteristic information, enhances the accuracy of the detection function and improves the efficiency of implementing the alarm function.
The above is a preferred embodiment of the present invention, and it should be noted that, for those skilled in the art, modifications or equivalent substitutions of the technical solution of the present invention without inventive work may be made without departing from the scope of the present invention.

Claims (6)

1. A preprocessing network method for massive images in complex urban scenes, comprising a data acquisition module, a preprocessing module and a data output processing module, characterized in that:
the data acquisition module is used for acquiring mass data;
the preprocessing module is connected with the data acquisition module and is used for preprocessing the data acquired by the data acquisition module;
the data output processing module is connected with the preprocessing module and used for identifying and detecting the face image processed by the preprocessing module;
the preprocessing module comprises at least one of an SSIM image sharpness evaluation network, a 3DDFA face rotation angle evaluation network and an MTCNN face occlusion rate evaluation network;
the SSIM image sharpness evaluation network is used for judging the sharpness of the massive images according to the image sharpness requirement of the detection and recognition network and filtering out useless image data which does not meet the requirement according to the judgment result;
the 3DDFA face rotation angle evaluation network is used for evaluating the face rotation angle in the massive images according to the face rotation angle requirement of the recognition network and filtering out useless image data which does not meet the requirement according to the evaluation result;
the MTCNN face occlusion rate evaluation network is used for judging the occlusion rate of the massive images according to the face occlusion rate requirement of the recognition network and filtering out useless data which does not meet the requirement according to the judgment result;
the data output processing module comprises a personnel outline positioning module and a personnel abnormal behavior detection and identification module;
the personnel contour positioning module is connected with the image preprocessing module, and is used for separating an image background from a personnel key information image through a foreground and background network separation technology and extracting a personnel target to be detected;
the personnel abnormal behavior detection and recognition module is connected with the personnel contour positioning module and is used for detecting and counting the motion trajectory and dwell time of a person in a specified area; by analyzing the data obtained by the personnel behavior detection module, an alarm is raised promptly if the person's dwell time in a specific area exceeds a threshold, and the person's trajectory is counted.
2. The preprocessing network method for massive images in complex urban scenes according to claim 1, characterized in that the SSIM image sharpness evaluation network performs Gaussian processing on all the massive image data, pairs the result with the original image and feeds the pair into the SSIM evaluation network, takes brightness and contrast related to object structure as the definition of the structural information in the image, calculates the SSIM image sharpness score, and filters out images whose sharpness falls below a threshold.
3. The preprocessing network method for massive images in complex urban scenes according to claim 1, characterized in that the 3DDFA face rotation angle evaluation network reconstructs the face keypoints in 3D space, calculates the rotation angle of the face using trigonometric calculation, and filters out images whose rotation angle exceeds a threshold.
4. The preprocessing network method for massive images in complex urban scenes according to claim 1, characterized in that the MTCNN face occlusion rate evaluation network detects the face keypoints, calculates the sum of the keypoint confidences, and filters out images whose confidence sum falls below a threshold.
5. The preprocessing network method for massive images in complex urban scenes according to claim 1, characterized in that the personnel contour positioning module separates foreground from background with an OpenCV-based frame difference method: it detects the changed region between two adjacent frames by differencing two consecutive frames of the image sequence, binarizes the grayscale difference image to extract the motion information, and segments the image obtained from the inter-frame change region to distinguish the background region from the person's motion region.
6. The preprocessing network method for massive images in complex urban scenes according to claim 1, characterized in that the personnel abnormal behavior detection and recognition module extracts the main feature information of the person in the image through a CNN, feeds the feature information of each layer into a feature map, extracts the person contour information from the feature map formed by superimposing multiple feature layers, and detects and counts the dwell time and trajectory of the person in a specific area.
Application CN202110487070.4A (priority date 2020-10-28, filed 2021-05-10): Massive image preprocessing network method based on urban complex scene. Status: Pending. Publication: CN113095289A.

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202022429396 2020-10-28
CN2020224293965 2020-10-28

Publications (1)

Publication Number Publication Date
CN113095289A, published 2021-07-09

Family

ID=76681228

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110487070.4A Pending CN113095289A (en) 2020-10-28 2021-05-10 Massive image preprocessing network method based on urban complex scene

Country Status (1)

Country Link
CN (1) CN113095289A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110150341A1 (en) * 2009-12-18 2011-06-23 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN102637257A (en) * 2012-03-22 2012-08-15 北京尚易德科技有限公司 Video-based detection and recognition system and method of vehicles
JP2013239797A (en) * 2012-05-11 2013-11-28 Canon Inc Image processing device
CN103745235A (en) * 2013-12-18 2014-04-23 小米科技有限责任公司 Human face identification method, device and terminal device
CN107545545A (en) * 2016-09-12 2018-01-05 郑州蓝视科技有限公司 Monitoring alarm method based on video detection technology
US20180285651A1 (en) * 2017-03-31 2018-10-04 International Business Machines Corporation Image processing to identify selected individuals in a field of view
CN108710856A (en) * 2018-05-22 2018-10-26 河南亚视软件技术有限公司 A kind of face identification method based on video flowing
CN109658572A (en) * 2018-12-21 2019-04-19 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
CN109740501A (en) * 2018-12-28 2019-05-10 广东亿迅科技有限公司 A kind of Work attendance method and device of recognition of face
CN109902598A (en) * 2019-02-01 2019-06-18 北京清帆科技有限公司 A kind of Preprocessing Technique for complex background
CN110796094A (en) * 2019-10-30 2020-02-14 上海商汤智能科技有限公司 Control method and device based on image recognition, electronic equipment and storage medium
CN111385440A (en) * 2018-12-27 2020-07-07 芜湖潜思智能科技有限公司 Monitoring camera with face recording and inquiring functions


Similar Documents

Publication Publication Date Title
CN106203274B (en) Real-time pedestrian detection system and method in video monitoring
US10445567B2 (en) Pedestrian head identification method and system
CN104751136B (en) A kind of multi-camera video event back jump tracking method based on recognition of face
Navalgund et al. Crime intention detection system using deep learning
KR102197946B1 (en) object recognition and counting method using deep learning artificial intelligence technology
CN106997629A (en) Access control method, apparatus and system
EP2662827B1 (en) Video analysis
CN112016353B (en) Method and device for carrying out identity recognition on face image based on video
CN112287816A (en) Dangerous working area accident automatic detection and alarm method based on deep learning
KR100983777B1 (en) Image capture system for object recognitions and method for controlling the same
CN111325048B (en) Personnel gathering detection method and device
CN103065121A (en) Engine driver state monitoring method and device based on video face analysis
CN112396658A (en) Indoor personnel positioning method and positioning system based on video
CN113297926B (en) Behavior detection and recognition method and system
CN111192461B (en) License plate recognition method, server, parking charging method and system
CN111753651A (en) Subway group abnormal behavior detection method based on station two-dimensional crowd density analysis
CN110991397A (en) Traveling direction determining method and related equipment
US20180046866A1 (en) Method of Detecting a Moving Object by Reconstructive Image Processing
CN112132873A (en) Multi-lens pedestrian recognition and tracking based on computer vision
CN105957300B (en) A kind of wisdom gold eyeball identification is suspicious to put up masking alarm method and device
CN113963301A (en) Space-time feature fused video fire and smoke detection method and system
CN109299700A (en) Subway group abnormality behavioral value method based on crowd density analysis
KR101651410B1 (en) Violence Detection System And Method Based On Multiple Time Differences Behavior Recognition
CN115035564A (en) Face recognition method, system and related components based on intelligent patrol car camera
CN113095289A (en) Massive image preprocessing network method based on urban complex scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20210709