WO2022160202A1 - Mask-wearing detection method and apparatus, terminal device, and readable storage medium - Google Patents

Mask-wearing detection method and apparatus, terminal device, and readable storage medium

Info

Publication number
WO2022160202A1
Authority
WO
WIPO (PCT)
Prior art keywords
mask
wearing
face
image
probability value
Prior art date
Application number
PCT/CN2021/074221
Other languages
English (en)
Chinese (zh)
Inventor
韩永刚
郭之先
黄凯明
Original Assignee
深圳市锐明技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市锐明技术股份有限公司 filed Critical 深圳市锐明技术股份有限公司
Priority to PCT/CN2021/074221 priority Critical patent/WO2022160202A1/fr
Priority to CN202180000114.4A priority patent/CN112912893A/zh
Publication of WO2022160202A1 publication Critical patent/WO2022160202A1/fr

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Definitions

  • the present application relates to the technical field of image processing, and in particular to a detection method, device, terminal device and readable storage medium for wearing a mask.
  • the related methods of detecting whether people wear masks properly require a large amount of human or computing resources, have low detection efficiency, and yield detection results of limited accuracy.
  • one of the purposes of the embodiments of the present application is to provide a mask-wearing detection method, device, terminal device and readable storage medium, aiming to solve the problems that existing methods for detecting whether people wear masks properly require a large amount of human or computing resources, have low detection efficiency, and produce detection results of low accuracy.
  • a detection method for wearing a mask including:
  • the mask image is processed to obtain a detection result of whether the user corresponding to the face contour is wearing a mask in a standard manner.
  • processing the to-be-recognized image to obtain a mask image containing the outline of a human face includes:
  • the to-be-recognized image is input into the face segmentation network model for processing to obtain a mask image containing the outline of the face.
  • processing the mask image to obtain a detection result of whether the user corresponding to the face contour is wearing a mask properly includes:
  • the mask image is processed by the mask-wearing recognition network model to obtain the output result of the mask-wearing recognition network model;
  • according to the output result, the detection result of whether the user corresponding to the face contour is wearing a mask properly is determined.
  • the output result includes a first probability value that the user corresponding to the face contour is wearing a mask properly, a second probability value that the user corresponding to the face contour is wearing a mask improperly, and a third probability value that the user corresponding to the face contour is not wearing a mask.
  • determining, according to the output result, the detection result of whether the user corresponding to the face contour is wearing a mask properly includes:
  • when it is detected that the first probability value is greater than the second probability value and the third probability value, it is determined that the detection result is that the user is wearing a mask properly; when the second probability value is greater than the first probability value and the third probability value, it is determined that the detection result is that the user is not wearing a mask properly;
  • when the third probability value is greater than the first probability value and the second probability value, it is determined that the detection result is that the user is not wearing a mask.
  • the method further includes:
  • the convolutional neural network model is pre-trained according to the mask image training data to obtain the mask-wearing recognition network model.
  • the method further includes:
  • the training image data is image data comprising a human face
  • the semantic segmentation model is pre-trained through the training image data to obtain the face segmentation network model.
  • the method further includes:
  • a detection device for wearing a mask including:
  • a first acquisition module used for acquiring the image to be recognized
  • the image processing module is used to process the to-be-recognized image to obtain a mask image containing the outline of the face;
  • the detection module is configured to process the mask image to obtain a detection result of whether the user corresponding to the face contour is wearing a mask in a standard manner.
  • a terminal device including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the mask-wearing detection method described in any one of the first aspects above.
  • a computer-readable storage medium stores a computer program which, when executed by a processor, implements the mask-wearing detection method according to any one of the first aspects above.
  • a fifth aspect provides a computer program product that, when the computer program product runs on a terminal device, enables the terminal device to execute the method for detecting wearing a mask according to any one of the first aspects above.
  • the beneficial effect of the mask-wearing detection method lies in that the image to be recognized is processed to obtain a mask image containing the outline of a human face, and the mask image is detected by a mask-wearing detection network model to obtain the probability that the user is wearing a mask properly, so that the detection result of whether the user wears a mask properly can be determined, which reduces the amount of computation and improves the detection efficiency and the accuracy of the detection results.
  • Fig. 1 is a schematic flowchart of the mask-wearing detection method provided by an embodiment of the present application;
  • Fig. 2 is a schematic flowchart of step S103 of the mask-wearing detection method provided by an embodiment of the present application;
  • Fig. 3 is a schematic flowchart of step S1032 of the mask-wearing detection method provided by an embodiment of the present application;
  • Fig. 4 is another schematic flow chart of the detection method of wearing a mask provided by the embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of a detection device for wearing a mask provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • Some embodiments of the present application provide a detection method for wearing a mask, which can be applied to terminal devices such as mobile phones, tablet computers, wearable devices, vehicle-mounted devices, and notebook computers.
  • the embodiments of the present application do not impose any restrictions on the specific types of terminal devices.
  • FIG. 1 shows a schematic flow chart of the detection method for wearing a mask provided by the present application.
  • the method can be applied to the above-mentioned notebook computer.
  • the user is photographed by a preset camera, and the to-be-recognized image data including the user's face is obtained.
  • S102 Process the to-be-recognized image to obtain a mask image containing the outline of a human face.
  • a face segmentation network model is used to process a to-be-recognized image containing a human face, and a mask image containing the user's face contour output by the face segmentation network model is obtained.
  • S103 Process the mask image to obtain a detection result corresponding to the face contour of whether the user wears a mask in a standard manner.
  • the mask image containing the user's face contour is processed by the mask-wearing recognition network model, and the detection result of whether the user corresponding to the face contour output by the mask-wearing recognition network model is standard wearing a mask is obtained.
  • the detection results are set to include the probability values of the user properly wearing a mask, the user improperly wearing a mask, and the user not wearing a mask.
  • the user's standard wearing of a mask refers to the situation where the user wears a mask according to medical regulations
  • the user's non-standard wearing of a mask refers to the user wearing a mask, but not covering important parts such as the mouth and nose in accordance with medical regulations.
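The two-stage flow just described (a segmentation model produces the mask image, a recognition model scores it) can be sketched as follows. This is a minimal illustration: `segment_face` and `recognize_mask` are hypothetical stand-ins for the trained network models, not the patent's actual implementations.

```python
import numpy as np

LABELS = ["wearing a mask properly", "wearing a mask improperly", "not wearing a mask"]

def detect_mask_wearing(image, segment_face, recognize_mask):
    """Two-stage detection: segment the face contour, then classify wearing status."""
    mask_image = segment_face(image)           # mask image containing the face contour
    probs = recognize_mask(mask_image)         # [p_proper, p_improper, p_none]
    return LABELS[int(np.argmax(probs))]

# Stub models for illustration only; real ones would be trained networks.
image = np.zeros((64, 64, 3))
result = detect_mask_wearing(
    image,
    segment_face=lambda img: img,              # identity stand-in for the segmentation model
    recognize_mask=lambda m: [0.1, 0.8, 0.1],  # fixed example probabilities
)
```

With the stubbed probability vector, the pipeline returns the "improperly wearing" label, matching the decision rule described below only in structure; real outputs come from the trained models.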
  • the step S102 includes:
  • the to-be-recognized image is input into the face segmentation network model for processing to obtain a mask image containing the outline of the face.
  • the to-be-recognized image obtained by shooting is input into a face segmentation network model, and the to-be-recognized image is processed by the face segmentation network model to obtain a mask image containing the user's face contour.
  • the face segmentation network model includes but is not limited to a semantic segmentation network model.
  • the area of the face contour included in the mask image can be set according to actual needs.
  • the above-mentioned face contour may refer to the contour of the entire face area, or may include only the contour of the part of the face used to identify whether the user is wearing a mask properly (generally, the face area below the eyes).
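As a hedged sketch of how such a mask image could be produced, assuming the face segmentation network outputs per-pixel class scores (background vs. face), one can zero out every pixel not classified as face. The shapes and two-class assumption here are illustrative, not taken from the patent:

```python
import numpy as np

def mask_image_from_logits(image, seg_logits):
    """Keep only the pixels the segmentation output labels as face (class 1);
    all other pixels are zeroed, yielding a mask image with the face contour."""
    face = seg_logits.argmax(axis=0) == 1            # (H, W) boolean face map
    return np.where(face[..., None], image, 0)

# Tiny 2x2 example: the face class scores high in the left column only.
image = np.full((2, 2, 3), 255)
seg_logits = np.zeros((2, 2, 2))                     # (classes, H, W)
seg_logits[1, :, 0] = 5.0                            # class 1 (face) wins in column 0
masked = mask_image_from_logits(image, seg_logits)   # left column kept, right zeroed
```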
  • the step S103 includes:
  • the mask image containing the face contour is processed through the mask-wearing recognition network model to obtain the probability values, output by the mask-wearing recognition network, of whether the user is wearing a mask and whether it is worn properly, and thereby the detection result of whether the user corresponding to the face contour is wearing a mask properly.
  • the mask-wearing recognition network model includes but is not limited to a convolutional neural network (Convolutional Neural Network, CNN) model.
  • determining, according to the output result, the detection result of whether the user corresponding to the face contour is wearing a mask properly includes:
  • the output result includes a first probability value that the user corresponding to the face contour is wearing a mask properly, a second probability value that the user corresponding to the face contour is wearing a mask improperly, and a third probability value that the user corresponding to the face contour is not wearing a mask.
  • in some embodiments, the output result of the mask-wearing recognition network model includes a first probability value that the user corresponding to the face contour is wearing a mask properly, a second probability value that the user is wearing a mask improperly, and a third probability value that the user is not wearing a mask.
  • the step S1032 includes:
  • when the third probability value is greater than the first probability value and the second probability value, it is determined that the detection result is that the user is not wearing a mask.
  • when it is detected that the first probability value in the output result is greater than the second and third probability values (that is, the first probability value is the largest), the detection result is determined to be that the user is wearing a mask properly; when the second probability value is greater than the first and third probability values (that is, the second probability value is the largest), the detection result is that the user is not wearing a mask properly; when the third probability value is greater than the first and second probability values (that is, the third probability value is the largest), the detection result is that the user is not wearing a mask.
  • for example, when the output result of the mask-wearing recognition network model is [0.1, 0.8, 0.1], the second probability value is the largest, so it is determined that the detection result is that the user is not wearing a mask properly.
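The decision rule above is simply an argmax over the three output probabilities. A minimal sketch (the label strings are illustrative, not the model's actual output format):

```python
def decide(probs):
    """Pick the class with the largest probability among
    [properly wearing, improperly wearing, not wearing]."""
    labels = ["wearing a mask properly", "wearing a mask improperly", "not wearing a mask"]
    return labels[max(range(len(probs)), key=probs.__getitem__)]

print(decide([0.1, 0.8, 0.1]))  # -> wearing a mask improperly
print(decide([0.7, 0.2, 0.1]))  # -> wearing a mask properly
```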
  • the method further includes:
  • the face image data includes face image data of properly wearing a mask, face image data of improperly wearing a mask, and face image data of not wearing a mask;
  • the convolutional neural network model is pre-trained according to the mask image training data to obtain the mask-wearing recognition network model.
  • a large amount of face image data is obtained, the face image data is processed according to the face segmentation network model, and the corresponding mask image training data containing the face contour is obtained.
  • the convolutional neural network model is pre-trained according to the mask image training data to obtain the mask-wearing recognition network model, so that the mask-wearing recognition network model can process the input image and output the corresponding first probability value of properly wearing a mask, second probability value of improperly wearing a mask, and third probability value of not wearing a mask.
  • the face image data includes the face image data of standard wearing masks, the face image data of non-standard wearing masks, and the face image data of not wearing masks.
  • after the face image data is processed according to the face segmentation network model and the corresponding mask image training data is obtained, the method includes: adding a corresponding label to each piece of mask image training data according to the type of its face image data, so as to facilitate pre-training the convolutional neural network model on the segmented face image data.
  • when the face image data of properly worn masks is processed according to the face segmentation network model and the corresponding mask image training data is obtained, the mask image training data should be labeled "properly wearing a mask";
  • when the face segmentation network model processes the face image data of improperly worn masks and the corresponding mask image training data is obtained, the mask image training data should be labeled "improperly wearing a mask";
  • when the model processes the face image data without a mask and the corresponding mask image training data is obtained, the mask image training data should be labeled "not wearing a mask".
  • pre-training the semantic segmentation network model includes: calculating the loss through a segmentation loss function (such as one based on cross-entropy), and backpropagating the gradient of that loss through a gradient descent algorithm to update the weight parameters of each layer in the semantic segmentation network model, until the entire semantic segmentation network model converges, thereby obtaining the pre-trained face segmentation network model.
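The training procedure above (compute a cross-entropy loss, then update weights by gradient descent through backpropagation) can be illustrated on a minimal linear softmax classifier. This numpy sketch assumes a single-layer model and a single training sample; it is not the patent's actual segmentation network:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                         # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def cross_entropy(p, y):
    return -np.log(p[y])                    # loss for the true class index y

def sgd_step(W, x, y, lr=0.1):
    """One gradient-descent update: for softmax + cross-entropy,
    the gradient w.r.t. W is (p - onehot(y)) x^T."""
    p = softmax(W @ x)
    grad = np.outer(p - np.eye(len(p))[y], x)
    return W - lr * grad

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))                 # 3 classes, 4 input features
x, y = rng.normal(size=4), 1
loss_before = cross_entropy(softmax(W @ x), y)
loss_after = cross_entropy(softmax(sgd_step(W, x, y) @ x), y)
# A small step along the negative gradient reduces the loss on this sample.
```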
  • the method further includes:
  • the training image data is image data comprising a human face
  • the semantic segmentation model is pre-trained through the training image data to obtain the face segmentation network model.
  • a large amount of image data containing faces is acquired as training image data, and the semantic segmentation network model is pre-trained with this training image data to obtain the face segmentation network model, so that for an input image the face segmentation network model outputs a mask image containing the outline of the face.
  • the loss function of the set face segmentation network model includes but is not limited to the segmentation loss function and the classification loss function
  • the loss function of the mask-wearing recognition network model includes but is not limited to the classification loss function.
  • the semantic segmentation network model and the convolutional neural network model are pre-integrated into one network model; the semantic segmentation network model in that model is pre-trained first to obtain the face segmentation network model, and then the convolutional neural network model in that model is pre-trained to obtain the mask-wearing recognition network model.
  • after the step S103, the method further includes:
  • a face recognition algorithm is used to perform face recognition on the image to be recognized, and the face recognition result of the user in the image to be recognized is determined, so as to facilitate notifying the user to wear a mask properly and carrying out corresponding follow-up processing.
  • in this embodiment, a mask image containing the outline of a human face is obtained by processing the image to be recognized, and the mask image is detected by the mask-wearing detection network model to obtain the probability that the user is wearing a mask properly, so as to determine the detection result of whether the user wears a mask properly, which reduces the amount of computation and improves the detection efficiency and the accuracy of the detection results.
  • FIG. 5 shows a structural block diagram of the mask-wearing detection device provided by an embodiment of the present application; for convenience of description, only the parts related to the embodiment of the present application are shown.
  • the mask-wearing detection device includes a processor, wherein the processor is used to execute the following program modules stored in the memory: a first acquisition module, used to acquire an image to be recognized; an image processing module, used to process the image to be recognized to obtain a mask image containing the outline of the face; and a detection module, used to process the mask image to obtain a detection result of whether the user corresponding to the face outline is wearing a mask properly.
  • the detection device 100 wearing a mask includes:
  • the first acquisition module 101 is used to acquire the image to be recognized
  • the image processing module 102 is used for processing the image to be recognized to obtain a mask image containing the outline of the human face;
  • the detection module 103 is configured to process the mask image to obtain a detection result of whether the user corresponding to the face contour is wearing a mask in a standard manner.
  • the image processing module 102 includes:
  • the first processing unit is configured to input the to-be-recognized image into a face segmentation network model for processing, and obtain a mask image including a face contour.
  • the detection module 103 includes:
  • a second processing unit configured to process the mask image through the mask-wearing recognition network model to obtain an output result of the mask-wearing recognition network model
  • a determination unit configured to determine, according to the output result, a detection result of whether the user corresponding to the face contour is wearing a mask in a standard manner.
  • the output result includes a first probability value that the user corresponding to the face contour is wearing a mask properly, a second probability value that the user corresponding to the face contour is wearing a mask improperly, and a third probability value that the user corresponding to the face contour is not wearing a mask.
  • the determining unit includes:
  • a first detection subunit configured to determine, when it is detected that the first probability value in the output result is greater than the second probability value and the third probability value, that the detection result is that the user is wearing a mask properly;
  • a second detection subunit configured to determine, when the second probability value in the output result is greater than the first probability value and the third probability value, that the detection result is that the user is not wearing a mask properly;
  • a third detection subunit configured to determine, when the third probability value in the output result is greater than the first probability value and the second probability value, that the detection result is that the user is not wearing a mask.
  • the detection device 100 for wearing a mask further includes:
  • the second acquisition module is used to acquire a plurality of face image data, wherein the face image data includes face image data of properly wearing a mask, face image data of improperly wearing a mask, and face image data of not wearing a mask;
  • a preprocessing module configured to process the face image data according to the face segmentation network model to obtain corresponding mask image training data
  • the first training module is used to pre-train the convolutional neural network model according to the mask image training data to obtain the mask-wearing recognition network model.
  • the detection device 100 for wearing a mask further includes:
  • the third acquisition module is used for acquiring training image data; wherein, the training image data is image data including human faces;
  • the second training module is used for pre-training the semantic segmentation model through the training image data to obtain the face segmentation network model.
  • the detection device 100 for wearing a mask further includes:
  • the face recognition module is configured to perform face recognition on the to-be-recognized image and determine the user's face recognition result when the detection result is that the user does not wear a mask in a standard manner or the user does not wear a mask.
  • a mask image containing the outline of a human face is obtained by processing the image to be recognized, and the mask image is detected by the mask-wearing detection network model to obtain the probability that the user is wearing a mask properly, so as to determine the detection result of whether the user wears a mask properly, which reduces the amount of computation and improves the detection efficiency and the accuracy of the detection results.
  • FIG. 6 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 6 in this embodiment includes: at least one processor 60 (only one is shown in FIG. 6), a memory 61, and a computer program 62 stored in the memory 61 and executable on the at least one processor 60; when the processor 60 executes the computer program 62, the steps in any of the above embodiments of the mask-wearing detection method are implemented.
  • the terminal device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the terminal device may include, but is not limited to, a processor 60 and a memory 61 .
  • FIG. 6 is only an example of the terminal device 6 and does not constitute a limitation on the terminal device 6, which may include more or fewer components than shown, combine some components, or use different components; for example, it may also include input and output devices, network access devices, and the like.
  • the so-called processor 60 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the memory 61 may be an internal storage unit of the terminal device 6 in some embodiments, such as a hard disk or memory of the terminal device 6; in other embodiments, the memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, etc.
  • the memory 61 may also include both an internal storage unit of the terminal device 6 and an external storage device.
  • the memory 61 is used to store an operating system, an application program, a boot loader (BootLoader), data, and other programs, such as program codes of the computer program.
  • the memory 61 can also be used to temporarily store data that has been output or will be output.
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is executed by a processor, the steps in the foregoing method embodiments can be implemented.
  • the embodiments of the present application provide a computer program product, when the computer program product runs on a mobile terminal, the steps in the foregoing method embodiments can be implemented when the mobile terminal executes the computer program product.
  • the integrated unit if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer-readable storage medium.
  • all or part of the processes in the methods of the above embodiments can be implemented by a computer program to instruct the relevant hardware.
  • the computer program can be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the above method embodiments may be implemented.
  • the computer program includes computer program code, and the computer program code may be in the form of source code, object code, executable file or some intermediate form, and the like.
  • the computer-readable medium may include at least: any entity or device capable of carrying computer program codes to the photographing device/terminal device, recording medium, computer memory, read-only memory (ROM, Read-Only Memory), random access memory (RAM, Random Access Memory), electrical carrier signals, telecommunication signals, and software distribution media.
  • software distribution media include, for example, a USB flash drive, a removable hard disk, a magnetic disk, or an optical disc.
  • in some jurisdictions, according to legislation and patent practice, computer-readable media may not include electrical carrier signals and telecommunication signals.
  • the disclosed apparatus/network device and method may be implemented in other manners.
  • the apparatus/network device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation, there may be other division methods, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some interfaces, indirect coupling or communication connection of devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units; some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Geometry (AREA)
  • Evolutionary Biology (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

The present application discloses a mask-wearing detection method and apparatus, a terminal device, and a readable storage medium. The method comprises: acquiring an image to be recognized; processing the image to obtain a mask image containing a face contour; and processing the mask image to obtain a detection result of whether the user corresponding to the face contour is wearing a mask properly. According to the present application, an image to be recognized is processed to obtain a mask image containing a face contour, and the mask image is detected by means of a mask-wearing detection network model to obtain the probability that a user is wearing a mask properly, so as to determine the detection result of whether the user wears the mask properly, which reduces the amount of computation and improves the detection efficiency and the accuracy of the detection result.
PCT/CN2021/074221 2021-01-28 2021-01-28 Mask-wearing detection method and apparatus, terminal device, and readable storage medium WO2022160202A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/074221 WO2022160202A1 (fr) 2021-01-28 2021-01-28 Mask-wearing detection method and apparatus, terminal device, and readable storage medium
CN202180000114.4A CN112912893A (zh) 2021-01-28 2021-01-28 Mask-wearing detection method and apparatus, terminal device, and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/074221 WO2022160202A1 (fr) 2021-01-28 2021-01-28 Mask-wearing detection method and apparatus, terminal device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2022160202A1 (fr) 2022-08-04

Family

ID=76109083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/074221 WO2022160202A1 (fr) Mask-wearing detection method and apparatus, terminal device, and readable storage medium 2021-01-28 2021-01-28

Country Status (2)

Country Link
CN (1) CN112912893A (fr)
WO (1) WO2022160202A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115116122A (zh) * 2022-08-30 2022-09-27 杭州魔点科技有限公司 Mask recognition method and system based on dual-branch collaborative supervision
CN116051467A (zh) * 2022-12-14 2023-05-02 东莞市人民医院 Method and related apparatus for predicting muscle-layer invasion of bladder cancer based on multi-task learning

Families Citing this family (1)

Publication number Priority date Publication date Assignee Title
CN115619410B (zh) * 2022-10-19 2024-01-26 闫雪 Adaptive financial payment platform

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2010157073A (ja) * 2008-12-26 2010-07-15 Fujitsu Ltd Face recognition device, face recognition method, and face recognition program
CN111444869A (zh) * 2020-03-31 2020-07-24 高新兴科技集团股份有限公司 Mask-wearing state recognition method, apparatus, and computer device
CN111523380A (zh) * 2020-03-11 2020-08-11 浙江工业大学 Mask-wearing monitoring method based on face and posture recognition
CN111523476A (zh) * 2020-04-23 2020-08-11 北京百度网讯科技有限公司 Mask-wearing recognition method, apparatus, device, and readable storage medium
CN112183471A (zh) * 2020-10-28 2021-01-05 西安交通大学 Automatic detection method and system for standard wearing of epidemic-prevention masks by on-site personnel

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN111783601B (zh) * 2020-06-24 2024-04-26 北京百度网讯科技有限公司 Training method and apparatus for a face recognition model, electronic device, and storage medium
CN111931707A (zh) * 2020-09-16 2020-11-13 平安国际智慧城市科技股份有限公司 Face image prediction method, apparatus, device, and medium based on adversarial patches
CN112052839B (zh) * 2020-10-10 2021-06-15 腾讯科技(深圳)有限公司 Image data processing method, apparatus, device, and medium

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
JP2010157073A (ja) * 2008-12-26 2010-07-15 Fujitsu Ltd Face recognition device, face recognition method, and face recognition program
CN111523380A (zh) * 2020-03-11 2020-08-11 浙江工业大学 Mask-wearing monitoring method based on face and posture recognition
CN111444869A (zh) * 2020-03-31 2020-07-24 高新兴科技集团股份有限公司 Mask-wearing state recognition method, apparatus, and computer device
CN111523476A (zh) * 2020-04-23 2020-08-11 北京百度网讯科技有限公司 Mask-wearing recognition method, apparatus, device, and readable storage medium
CN112183471A (zh) * 2020-10-28 2021-01-05 西安交通大学 Automatic detection method and system for standard wearing of epidemic-prevention masks by on-site personnel

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN115116122A (zh) * 2022-08-30 2022-09-27 杭州魔点科技有限公司 Mask recognition method and system based on dual-branch collaborative supervision
CN115116122B (zh) * 2022-08-30 2022-12-16 杭州魔点科技有限公司 Mask recognition method and system based on dual-branch collaborative supervision
CN116051467A (zh) * 2022-12-14 2023-05-02 东莞市人民医院 Method and related apparatus for predicting muscle-layer invasion of bladder cancer based on multi-task learning
CN116051467B (zh) * 2022-12-14 2023-11-03 东莞市人民医院 Method and related apparatus for predicting muscle-layer invasion of bladder cancer based on multi-task learning

Also Published As

Publication number Publication date
CN112912893A (zh) 2021-06-04

Similar Documents

Publication Publication Date Title
WO2022160202A1 (fr) Mask-wearing detection method and apparatus, terminal device, and readable storage medium
WO2021057848A1 (fr) Network training method, image processing method, network, terminal device, and medium
WO2021184727A1 (fr) Data anomaly detection method and apparatus, electronic device, and storage medium
WO2020143330A1 (fr) Facial image capture method, computer-readable storage medium, and terminal device
WO2022037541A1 (fr) Image processing model training method and apparatus, device, and storage medium
WO2020186887A1 (fr) Target detection method, device, and apparatus for continuous small-sample images
WO2020062493A1 (fr) Image processing method and apparatus
CN113705462B (zh) Face recognition method and apparatus, electronic device, and computer-readable storage medium
WO2022027913A1 (fr) Target detection model generation method and apparatus, device, and storage medium
WO2019119396A1 (fr) Facial expression recognition method and device
WO2022127111A1 (fr) Cross-modal face recognition method, apparatus, device, and storage medium
CN111860522B (zh) ID card image processing method, apparatus, terminal, and storage medium
WO2021135603A1 (fr) Intention recognition method, server, and storage medium
CN111783626A (zh) Image recognition method and apparatus, electronic device, and storage medium
WO2020143165A1 (fr) Recaptured-image recognition method and system, and terminal device
CN111062440A (zh) Sample selection method, apparatus, device, and storage medium
CN112328822B (zh) Image pre-annotation method, apparatus, and terminal device
CN113886443A (zh) Log processing method, apparatus, computer device, and storage medium
CN116524206B (zh) Target image recognition method and apparatus
CN116130088A (zh) Multimodal facial-diagnosis consultation method, apparatus, and related devices
CN114913567A (zh) Mask-wearing detection method, apparatus, terminal device, and readable storage medium
CN115359575A (zh) Identity recognition method, apparatus, and computer device
CN114864043A (zh) Cognitive training method, apparatus, and medium based on a VR device
CN111859985A (zh) AI customer-service model testing method, apparatus, electronic device, and storage medium
CN110287943B (zh) Image object recognition method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21921813

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21921813

Country of ref document: EP

Kind code of ref document: A1