WO2020207038A1 - Method, apparatus and device for counting people based on facial recognition, and storage medium

Method, apparatus and device for counting people based on facial recognition, and storage medium

Info

Publication number
WO2020207038A1
WO2020207038A1 (application PCT/CN2019/122079, CN2019122079W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
portrait
result
segmented
counting
Prior art date
Application number
PCT/CN2019/122079
Other languages
English (en)
Chinese (zh)
Inventor
王金燕
Original Assignee
深圳壹账通智能科技有限公司
Priority date
2019-04-12
Filing date
2019-11-29
Publication date
2020-10-15
Application filed by 深圳壹账通智能科技有限公司
Publication of WO2020207038A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30242 Counting objects in image

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a method, apparatus, device, and storage medium for counting people based on face recognition.
  • This application provides a method, apparatus, device, and storage medium for counting people based on face recognition, aiming to improve the efficiency and accuracy of people counting.
  • this application provides a method for counting people based on face recognition, and the method includes:
  • the step of extracting image LBP features of a plurality of the segmented images includes:
  • Before the step of inputting the image LBP features into a pre-trained portrait recognition model, recognizing the image LBP features with the portrait recognition model, and outputting the recognition result, the method further includes:
  • the sample LBP features are input into a neural network created based on TensorFlow for training to obtain the portrait recognition model, and the recognition result output by the portrait recognition model is a portrait or a non-portrait.
  • the segmented image includes a first segmented image and a second segmented image
  • the step of segmenting the video image into a picture to obtain multiple segmented images includes:
  • After the step of counting the number of segmented images whose recognition result is a portrait to obtain the first people counting result, the method further includes:
  • Before the step of reporting the first people counting result to the server through the preset result reporting interface, the method further includes:
  • If the first people counting result includes abnormal portraits, the number of abnormal portraits is removed from the first people counting result to obtain a second people counting result, and the second people counting result is reported to the server.
  • the portrait recognition model records the portrait coordinates of the portrait in the segmented image whose recognition result is a portrait
  • The step of judging, according to the crest counting method, whether an abnormal portrait is included in the first people counting result includes:
  • If the number of times the portrait coordinates appear within the preset time is greater than or equal to the times threshold, the portrait corresponding to those coordinates is determined not to be an abnormal portrait;
  • If the number of times the portrait coordinates appear within the preset time is less than the times threshold, the portrait corresponding to those coordinates is determined to be an abnormal portrait and is marked as an abnormal portrait.
  • an embodiment of the present application further provides a device for counting people based on face recognition, and the device for counting people based on face recognition includes:
  • the obtaining module is used to obtain video images from the video stream when receiving the people counting instruction;
  • An extraction module configured to perform picture segmentation on the video image to obtain multiple segmented images, and extract image LBP features of the multiple segmented images
  • a recognition module configured to input LBP features of the image into a pre-trained portrait recognition model, the portrait recognition model recognizes the LBP features of the image, and outputs a recognition result;
  • a statistics module configured to count the number of the segmented images whose recognition result is a portrait, to obtain a first population statistics result
  • the extraction module is also used for:
  • an embodiment of the present application further provides a device for counting people based on face recognition.
  • The device for counting people based on face recognition includes a processor, a memory, and computer readable instructions for counting people based on face recognition stored in the memory; when the computer readable instructions are executed by the processor, the steps of the method for counting people based on face recognition described above are implemented.
  • An embodiment of the present application further provides a computer storage medium that stores computer readable instructions for counting people based on face recognition; when the computer readable instructions are executed by a processor, the steps of the method for counting people based on face recognition described above are realized.
  • The present application proposes a method, apparatus, device, and storage medium for counting people based on face recognition.
  • The method includes: obtaining a video image from a video stream when a people counting instruction is received; segmenting the video image to obtain multiple segmented images and extracting the image LBP features of the multiple segmented images; inputting the image LBP features into a pre-trained portrait recognition model, which recognizes the image LBP features and outputs a recognition result; and counting the number of segmented images whose recognition result is a portrait to obtain a first people counting result.
  • This application is based on artificial intelligence and uses image processing technology to count the number of people in the video, thereby greatly improving the efficiency and accuracy of people counting.
  • FIG. 1 is a schematic diagram of the hardware structure of a people counting device based on face recognition according to various embodiments of the present application;
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for counting people based on face recognition in this application;
  • FIG. 3 is a schematic flowchart of a second embodiment of a method for counting people based on face recognition in this application;
  • Fig. 4 is a schematic diagram of functional modules of a first embodiment of a device for counting people based on face recognition according to the present application.
  • The device for counting people based on face recognition mainly involved in the embodiments of the present application refers to a device capable of network connection.
  • The device for counting people based on face recognition may be a server, a cloud platform, or the like.
  • FIG. 1 is a schematic diagram of the hardware structure of a people counting device based on face recognition according to various embodiments of the present application.
  • The device for counting people based on face recognition may include a processor 1001 (for example, a Central Processing Unit, CPU), a communication bus 1002, an input port 1003, an output port 1004, and a memory 1005.
  • the communication bus 1002 is used to realize the connection and communication between these components; the input port 1003 is used for data input; the output port 1004 is used for data output.
  • The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory.
  • the memory 1005 may optionally be a storage device independent of the aforementioned processor 1001.
  • The hardware structure shown in FIG. 1 does not constitute a limitation to the present application, and it may include more or fewer components than shown in the figure, combine certain components, or use a different arrangement of components.
  • the memory 1005 as a readable storage medium in FIG. 1 may include an operating system, a network communication module, an application program module, and computer readable instructions for counting people based on face recognition.
  • The network communication module is mainly used to connect to the server and perform data communication with it; the processor 1001 can call the computer readable instructions for counting people based on face recognition stored in the memory 1005 and execute the method for counting people based on face recognition provided in the embodiments of the present application.
  • the embodiment of the present application provides a method for counting people based on face recognition.
  • FIG. 2 is a schematic flowchart of a first embodiment of a method for counting people based on face recognition in this application.
  • the method for counting people based on face recognition is applied to a device for counting people based on face recognition, and the method includes:
  • Step S101: When a people counting instruction is received, obtain a video image from the video stream;
  • A camera is installed in advance in the area where people counting is needed, and the camera captures that area to obtain and save the video stream in real time. For example, a camera installed at a location in a conference room captures the scene and the people in the room and saves the current video stream of the conference room.
  • When the people counting instruction is received, the video image is obtained from the video stream.
  • The people counting instruction includes a time point, which may be the current time, a historical time, or a scheduled future time.
  • The video stream carries time stamps, and the video image corresponding to the time point is obtained according to the time stamp, as sketched below.
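  • The following is a minimal sketch of this frame retrieval step, assuming OpenCV (cv2) and a saved video file; the patent does not specify the capture library or how the stream is stored, so these are illustrative assumptions.

```python
import cv2

def get_frame_at(video_path: str, timestamp_ms: float):
    """Return the video frame closest to the requested time point (in milliseconds)."""
    cap = cv2.VideoCapture(video_path)
    # Seek by time stamp; the stream is assumed to carry valid time stamps.
    cap.set(cv2.CAP_PROP_POS_MSEC, timestamp_ms)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None
```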
  • Step S102 Perform picture segmentation on the video image to obtain multiple segmented images, and extract image LBP features of the multiple segmented images;
  • the video image needs to be segmented twice to obtain the first segmented image and the second segmented image.
  • the step of performing picture segmentation on the video image to obtain multiple segmented images includes:
  • Step S102-1a: Compress the video image into a compressed video image of 512×512 pixels;
  • The video image is compressed to obtain a compressed video image of 512×512 pixels. Understandably, in other embodiments, the video image may be compressed to other resolutions.
  • Step S102-1b: Segment the compressed video image into 64×64-pixel tiles to obtain multiple first segmented images;
  • A first segmentation is performed on the compressed video image, dividing it into 64×64-pixel tiles to obtain the multiple first segmented images.
  • Step S102-1c: Perform a second image segmentation over the overlapping areas of adjacent first segmented images, with 64 pixels as the starting point, to obtain the second segmented images.
  • That is, the overlapping area between adjacent tiles of the first segmentation is segmented a second time, with 64 pixels as the starting point, to obtain the second segmented images.
  • The segmented images therefore include both the first segmented images and the second segmented images; a tiling sketch is given below.
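  • The following is a minimal sketch of the compression and two-pass tiling, assuming OpenCV. The patent's phrase "64 pixels as a starting point" for the second pass is ambiguous, so a half-tile (32-pixel) offset is used here purely as an illustrative assumption, making the second-pass tiles straddle the seams between adjacent first-pass tiles.

```python
import cv2

TILE = 64  # tile size in pixels, as in steps S102-1b and S102-1c

def segment_video_image(video_image):
    compressed = cv2.resize(video_image, (512, 512))   # step S102-1a: 512 x 512 compressed image
    first = tile(compressed, offset=0)                  # step S102-1b: first segmented images
    second = tile(compressed, offset=TILE // 2)         # step S102-1c: tiles over the overlap areas (assumed offset)
    return first + second

def tile(img, offset):
    h, w = img.shape[:2]
    return [img[y:y + TILE, x:x + TILE]
            for y in range(offset, h - TILE + 1, TILE)
            for x in range(offset, w - TILE + 1, TILE)]
```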
  • LBP (Local Binary Patterns) is an operator used to describe the local texture of an image, and it is invariant to grayscale changes.
  • the step of extracting image LBP features of a plurality of the segmented images includes:
  • Step S102-2a Divide the segmented image into multiple regions
  • the segmented image is divided into a plurality of regions of a preset size, for example, into a plurality of regions of 16 ⁇ 16.
  • Step S102-2b: Compare the gray value of each pixel in each region (taken as the center) with the gray values of its 8 adjacent pixels to obtain the LBP feature of the pixel;
  • If the gray value of an adjacent pixel is greater than the central gray value, that neighbor's position is marked as 1; if it is less than or equal to the central gray value, the position is marked as 0. Comparing the center with the 8 points in its 3×3 neighborhood in this way generates an 8-bit binary number, which is usually converted to a decimal number (an integer between 0 and 255); this decimal value is the LBP feature of the pixel.
  • Step S102-2c Obtain a histogram of each area based on the LBP feature of the pixel
  • Step S102-2d Perform normalization processing on the histogram of each area to obtain a statistical histogram, and obtain the image LBP feature of the segmented image based on the statistical histogram.
  • The normalized histograms better reflect the texture of each typical region while reducing the influence of smooth, low-texture areas.
  • The histogram of each region is normalized to obtain a statistical histogram, and the image LBP feature of the segmented image is obtained based on the statistical histogram; a sketch of this feature extraction is given below.
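  • The following is a minimal sketch of steps S102-2a to S102-2d using NumPy only. The 16×16 region size follows the example in the text; the plain Python loops are kept for clarity rather than speed, and the function names are illustrative.

```python
import numpy as np

def lbp_code(region, y, x):
    """8-bit LBP code of the pixel at (y, x), comparing it with its 8 neighbours."""
    center = region[y, x]
    neighbours = [region[y - 1, x - 1], region[y - 1, x], region[y - 1, x + 1],
                  region[y, x + 1], region[y + 1, x + 1], region[y + 1, x],
                  region[y + 1, x - 1], region[y, x - 1]]
    bits = [1 if n > center else 0 for n in neighbours]   # mark 1 if neighbour > center, else 0
    return int("".join(map(str, bits)), 2)                # decimal value in 0..255

def image_lbp_feature(gray_tile, region_size=16):
    """Concatenated, normalised regional LBP histograms of one segmented image."""
    h, w = gray_tile.shape
    histograms = []
    for ry in range(0, h - region_size + 1, region_size):
        for rx in range(0, w - region_size + 1, region_size):
            region = gray_tile[ry:ry + region_size, rx:rx + region_size]
            codes = [lbp_code(region, y, x)
                     for y in range(1, region_size - 1)
                     for x in range(1, region_size - 1)]
            hist, _ = np.histogram(codes, bins=256, range=(0, 256))
            histograms.append(hist / max(hist.sum(), 1))  # normalised histogram of the region
    return np.concatenate(histograms)                      # image LBP feature of the segmented image
```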
  • In order to obtain rotation invariance, the binary number is circularly rotated.
  • For example, if the initial LBP feature obtained is 10010000, rotating the bits clockwise converts it to the minimum form 00001001.
  • The minimum form is the one whose decimal value is smallest, and that smallest value is taken as the LBP. No matter how the segmented image is rotated, the same minimum LBP is obtained, which ensures that the LBP has rotation invariance; a sketch of this rotation step follows below.
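  • The following is a minimal sketch of the rotation-invariance step: the 8-bit code is rotated bit by bit and the smallest resulting value is kept, so the same texture yields the same LBP regardless of how the segmented image is rotated.

```python
def rotation_invariant_lbp(code: int) -> int:
    """Return the minimum over all 8 circular rotations of an 8-bit LBP code."""
    best = code
    for _ in range(7):
        code = ((code >> 1) | ((code & 1) << 7)) & 0xFF   # rotate right by one bit
        best = min(best, code)
    return best

# Example from the text: 0b10010000 rotates down to its minimum form 0b00001001.
assert rotation_invariant_lbp(0b10010000) == 0b00001001
```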
  • Step S103: Input the image LBP features into a pre-trained portrait recognition model, recognize the image LBP features with the portrait recognition model, and output the recognition result;
  • Step S103a Collect a preset number of sample images, and set the label of the sample images as portrait or non-portrait;
  • the sample image includes a portrait sample image and a non-portrait sample image
  • the portrait sample image includes a human face sample image and a human upper body sample image.
  • For example, 100,000 face sample images and 50,000 upper-body sample images of persons are collected, and labels are set for all 150,000 of these sample images as portrait.
  • By collecting upper-body sample images, when a face in the video image is occluded the number of people can still be counted according to the characteristics of the upper-body image, which prevents inaccurate counting results and missed people.
  • Using non-portrait images as training samples enables the trained portrait recognition model to recognize non-portrait images, making the counting results more accurate.
  • Step S103b: Compress the sample images to 128×128 pixels, and then perform grayscale processing and random incomplete processing to obtain processed sample images;
  • Each sample image is first compressed to 128×128 pixels to obtain a compressed sample image. The compressed sample image is then gray-scale processed using either image inversion or logarithmic transformation to obtain a grayscale sample image. Finally, the grayscale sample image is subjected to random incomplete (occlusion) processing using an image repair method to obtain the processed sample image; a preprocessing sketch is given below.
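  • The following is a minimal sketch of step S103b, assuming OpenCV and NumPy and a BGR colour input image. The "random incomplete processing" is read here as blanking a random rectangular patch, and the logarithmic transformation is chosen over inversion; both choices are illustrative assumptions.

```python
import cv2
import numpy as np

def preprocess_sample(sample_image, rng=None):
    rng = rng or np.random.default_rng()
    img = cv2.resize(sample_image, (128, 128))                       # compress to 128 x 128 pixels
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray = 255.0 * np.log1p(gray) / np.log1p(255.0)                  # logarithmic grayscale transformation
    # Random incomplete processing: zero out a random patch to imitate occlusion.
    y, x = rng.integers(0, 96, size=2)
    h, w = rng.integers(16, 33, size=2)
    gray[y:y + h, x:x + w] = 0
    return gray.astype(np.uint8)
```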
  • Step S103c: Extract the sample LBP features of the processed sample images;
  • Specifically, the processed sample image is divided into a plurality of sample regions, and the central gray value of each sample pixel in each sample region is compared with the gray values of the 8 adjacent pixels surrounding that sample pixel to obtain the sample LBP feature of the pixel.
  • A sample histogram of each sample region is obtained based on the LBP features of its pixels, and the sample histograms of the regions are normalized.
  • The normalization yields a statistical sample histogram, and the sample LBP feature of the sample image is obtained based on that statistical histogram.
  • Step S103d: Input the sample LBP features into a neural network created based on TensorFlow for training to obtain the portrait recognition model; the recognition result output by the portrait recognition model is either portrait or non-portrait.
  • TensorFlow is an open-source machine learning framework that is widely used to implement various machine learning algorithms; it helps developers build models with very little code and turn those models into the products they need.
  • The sample LBP features are input into a neural network created with TensorFlow for training. After millions of training iterations, the network can accurately classify the sample LBP features according to the labels of the corresponding sample images, and the portrait recognition model is thereby obtained.
  • The recognition result output by the portrait recognition model is either portrait or non-portrait: sample images labeled as portrait are recognized as portrait, and sample images labeled as non-portrait are recognized as non-portrait. A sketch of such a classifier is given below.
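  • The following is a minimal sketch of step S103d, assuming TensorFlow 2.x with the Keras API. The patent does not disclose the network architecture, so a small fully connected classifier over the sample LBP feature vectors is shown purely for illustration; layer sizes and training settings are assumptions.

```python
import tensorflow as tf

def build_portrait_model(feature_dim: int) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(feature_dim,)),
        tf.keras.layers.Dense(256, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),   # output near 1 = portrait, near 0 = non-portrait
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Hypothetical usage: sample_features has shape (N, feature_dim); labels holds 0/1 per sample image.
# model = build_portrait_model(sample_features.shape[1])
# model.fit(sample_features, labels, epochs=10, batch_size=64)
```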
  • Step S104: Count the number of segmented images whose recognition result is a portrait, and obtain a first people counting result.
  • According to the recognition results output by the portrait recognition model, the number of segmented images whose recognition result is a portrait is counted, and that number is taken as the first people counting result.
  • With the above solution, this embodiment obtains a video image from the video stream when a people counting instruction is received; segments the video image to obtain multiple segmented images and extracts their image LBP features; inputs the image LBP features into a pre-trained portrait recognition model, which recognizes the image LBP features and outputs the recognition results; and counts the number of segmented images whose recognition result is a portrait to obtain the first people counting result.
  • Therefore, based on artificial intelligence, image processing technology is used to count the number of people in the video, which greatly improves the efficiency and accuracy of people counting.
  • the second embodiment of the present application proposes a method for counting people based on face recognition.
  • After the step of counting the number of segmented images whose recognition result is a portrait to obtain the first people counting result, the method also includes:
  • Step S106: Report the first people counting result to the server through a preset result reporting interface.
  • A reporting interface is preset, and the reporting interface is used for network communication with the server. Understandably, the reporting interface may also report the camera information, area information, time information, etc. corresponding to the first people counting result to the server; a reporting sketch is given below.
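  • The following is a minimal sketch of the result reporting step, assuming a plain HTTP endpoint and the requests library; the URL and payload fields shown are hypothetical, since the patent only describes "a preset result reporting interface".

```python
import requests

def report_people_count(count: int, camera_id: str, area: str, timestamp: str,
                        url: str = "https://example.com/api/people-count"):
    """Report a people counting result together with its camera, area and time information."""
    payload = {"count": count, "camera": camera_id, "area": area, "time": timestamp}
    resp = requests.post(url, json=payload, timeout=5)
    resp.raise_for_status()
```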
  • Before step S106, that is, before the step of reporting the first people counting result to the server through the preset result reporting interface, the method further includes:
  • Because the video images in the video stream change in real time, the video images obtained from the stream may not be stable enough, and people walking, posture changes, and the like may make the first people counting result insufficiently accurate.
  • Therefore, video images are extracted from the video stream at a preset interval, which may be 100 ms, 200 ms, or the like.
  • Each extracted video image is segmented to obtain multiple segmented images, and the image LBP features of the multiple segmented images are extracted; the image LBP features are input into the pre-trained portrait recognition model, which recognizes the image LBP features and outputs the portrait coordinates of the portrait in each segmented image whose recognition result is a portrait.
  • The preset time may be 1 minute, and the times threshold may be 4, 10, and so on. For example, if a set of portrait coordinates appears 10 times within 1 minute, the portrait corresponding to those coordinates is not an abnormal portrait. If the number of times the portrait coordinates appear within the preset time is less than the times threshold, the portrait corresponding to those coordinates is determined to be an abnormal portrait and is marked as an abnormal portrait.
  • If no abnormal portrait is found, the step is performed of reporting the first people counting result to the server through the preset result reporting interface; if the first people counting result includes abnormal portraits, the number of abnormal portraits is removed from the first people counting result to obtain a second people counting result, and the second people counting result is reported to the server. For example, if there are two abnormal portraits, the second result is obtained by subtracting 2 from the first result. A sketch of this filtering step is given below.
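  • The following is a minimal sketch of the abnormal-portrait filtering described above, assuming the portrait coordinates recorded over the preset time window have been collected into a list; snapping the coordinates to the segmentation grid so that small jitter still matches is an illustrative assumption.

```python
from collections import Counter

def second_people_count(first_count: int, portrait_coords, times_threshold: int = 4,
                        grid: int = 64) -> int:
    """Remove portraits whose coordinates appear fewer than `times_threshold` times in the window."""
    snapped = [(int(x) // grid, int(y) // grid) for x, y in portrait_coords]
    occurrences = Counter(snapped)
    abnormal = sum(1 for n in occurrences.values() if n < times_threshold)
    return first_count - abnormal
```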
  • With the above solution, this embodiment obtains a video image from the video stream when a people counting instruction is received; segments the video image to obtain multiple segmented images and extracts their image LBP features; inputs the image LBP features into a pre-trained portrait recognition model, which recognizes the image LBP features and outputs the recognition results; counts the number of segmented images whose recognition result is a portrait to obtain the first people counting result; and reports the first people counting result to the server through the preset result reporting interface. Therefore, based on artificial intelligence, image processing technology is used to count the number of people in the video, which greatly improves the efficiency and accuracy of people counting.
  • this embodiment also provides a device for counting people based on face recognition.
  • Fig. 4 is a schematic diagram of the functional modules of the first embodiment of the device for counting people based on face recognition in this application.
  • The device for counting people based on face recognition is a virtual device stored in the memory 1005 of the device shown in FIG. 1, and it realizes all the functions of the computer readable instructions for counting people based on face recognition: obtaining a video image from the video stream when a people counting instruction is received; segmenting the video image to obtain multiple segmented images and extracting the image LBP features of the multiple segmented images; inputting the image LBP features into a pre-trained portrait recognition model, which recognizes the image LBP features and outputs the recognition results; and counting the number of segmented images whose recognition result is a portrait to obtain the first people counting result.
  • the device for counting people based on face recognition in this embodiment includes:
  • the obtaining module 10 is configured to obtain a video image from a video stream when a number counting instruction is received;
  • the extraction module 20 is configured to perform picture segmentation on the video image, obtain multiple segmented images, and extract image LBP features of the multiple segmented images;
  • the recognition module 30 is configured to input the LBP features of the image into a pre-trained portrait recognition model, and the portrait recognition model recognizes the LBP features of the image, and outputs a recognition result;
  • the statistics module 40 is configured to count the number of the segmented images whose recognition result is a portrait, and obtain a first population statistics result.
  • The recognition module is also used for:
  • the sample LBP features are input into a neural network created based on TensorFlow for training to obtain the portrait recognition model, and the recognition result output by the portrait recognition model is a portrait or a non-portrait.
  • extraction module is also used for:
  • extraction module is also used for:
  • the statistics module is also used for:
  • the statistics module is also used for:
  • If the first people counting result includes abnormal portraits, the number of abnormal portraits is removed from the first people counting result to obtain a second people counting result, and the second people counting result is reported to the server.
  • the statistics module is also used for:
  • If the number of times the portrait coordinates appear within the preset time is greater than or equal to the times threshold, the portrait corresponding to those coordinates is determined not to be an abnormal portrait;
  • If the number of times the portrait coordinates appear within the preset time is less than the times threshold, the portrait corresponding to those coordinates is determined to be an abnormal portrait and is marked as an abnormal portrait.
  • In addition, the present application also provides a computer storage medium that stores computer readable instructions for counting people based on face recognition.
  • When the computer readable instructions for counting people based on face recognition are run by a processor, the steps of the method for counting people based on face recognition described above are realized, which will not be repeated here.
  • the computer-readable storage medium may be a non-volatile readable storage medium.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention concerns a method, apparatus and device for counting people based on facial recognition, and a storage medium. The method comprises: when a people counting instruction is received, obtaining a video image from a video stream (S101); performing image segmentation on the video image to obtain multiple segmented images, and extracting image LBP features of the multiple segmented images (S102); inputting the image LBP features into a pre-trained portrait recognition model, performing recognition on the image LBP features by means of the portrait recognition model, and outputting recognition results (S103); and counting the number of segmented images whose recognition result is a portrait to obtain a first people counting result (S104). According to the method, people in a video are counted by means of image processing technology on the basis of artificial intelligence, thereby considerably improving the efficiency and accuracy of people counting.
PCT/CN2019/122079 2019-04-12 2019-11-29 Method, apparatus and device for counting people based on facial recognition, and storage medium WO2020207038A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910297454.2 2019-04-12
CN201910297454.2A CN110163092A (zh) 2019-04-12 2019-04-12 基于人脸识别的人数统计方法、装置、设备及存储介质

Publications (1)

Publication Number Publication Date
WO2020207038A1 2020-10-15

Family

ID=67639302

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/122079 (WO2020207038A1) Method, apparatus and device for counting people based on facial recognition, and storage medium 2019-04-12 2019-11-29

Country Status (2)

Country Link
CN (1) CN110163092A (fr)
WO (1) WO2020207038A1 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110163092A (zh) * 2019-04-12 2019-08-23 深圳壹账通智能科技有限公司 基于人脸识别的人数统计方法、装置、设备及存储介质
CN111199215A (zh) * 2020-01-06 2020-05-26 郑红 基于人脸识别的人数统计方法和装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778512A (zh) * 2016-11-25 2017-05-31 南京蓝泰交通设施有限责任公司 一种基于lbp和深度学校的非限制条件下人脸识别方法
CN109359548B (zh) * 2018-09-19 2022-07-08 深圳市商汤科技有限公司 多人脸识别监控方法及装置、电子设备及存储介质

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MX2012013865A (es) * 2012-11-29 2014-05-28 Vision Holdings Mexico S De R L De C V Sistema y metodo de pago y conteo para transporte.
KR101500496B1 (ko) * 2013-12-06 2015-03-10 주식회사 케이티 얼굴을 인식하는 장치 및 방법
CN105160313A (zh) * 2014-09-15 2015-12-16 中国科学院重庆绿色智能技术研究院 视频监控中人群行为分析的方法及装置
CN108563997A (zh) * 2018-03-16 2018-09-21 新智认知数据服务有限公司 一种建立人脸检测模型、人脸识别的方法和装置
CN108805140A (zh) * 2018-05-23 2018-11-13 国政通科技股份有限公司 一种基于lbp的特征快速提取方法及人脸识别系统
CN109344765A (zh) * 2018-09-28 2019-02-15 广州云从人工智能技术有限公司 一种针对连锁门店入店人员分析的智能分析方法
CN110163092A (zh) * 2019-04-12 2019-08-23 深圳壹账通智能科技有限公司 基于人脸识别的人数统计方法、装置、设备及存储介质

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822111A (zh) * 2021-01-19 2021-12-21 北京京东振世信息技术有限公司 人群检测模型训练方法、装置以及人群计数方法、装置
CN113822111B (zh) * 2021-01-19 2024-05-24 北京京东振世信息技术有限公司 人群检测模型训练方法、装置以及人群计数方法、装置
CN114462653A (zh) * 2022-01-24 2022-05-10 广东天地和实业控股集团有限公司 一种o2o式的校园自助餐饮管理方法、系统、设备及介质
CN114462653B (zh) * 2022-01-24 2022-09-30 广东天地和实业控股集团有限公司 一种o2o式的校园自助餐饮管理方法、系统、设备及介质
CN116128883A (zh) * 2023-04-19 2023-05-16 尚特杰电力科技有限公司 一种光伏板数量统计方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN110163092A (zh) 2019-08-23

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19924405

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02/02/2022)

122 Ep: pct application non-entry in european phase

Ref document number: 19924405

Country of ref document: EP

Kind code of ref document: A1