WO2018113523A1 - Image processing method and apparatus, and storage medium - Google Patents

Image processing method and apparatus, and storage medium

Info

Publication number
WO2018113523A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
face
images
quality evaluation
face images
Prior art date
Application number
PCT/CN2017/114856
Other languages
English (en)
Chinese (zh)
Inventor
张立峰
钟斌
彭程
程冰
范海龙
易建
Original Assignee
深圳云天励飞技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳云天励飞技术有限公司 filed Critical 深圳云天励飞技术有限公司
Publication of WO2018113523A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Definitions

  • The present invention relates to the field of computer vision, and in particular to an image processing method and apparatus, and a storage medium in a cluster computing system.
  • Screening the M facial images includes:
  • After the M facial images are screened to obtain N face images, the method further includes:
  • a screening unit configured to screen the M facial images to obtain N facial images, where N is an integer less than or equal to M;
  • an uploading unit configured to upload the N facial images to the server.
  • An acquisition module configured to acquire angle information of the face image i, wherein the angle information is at least one of: a horizontal rotation angle, a pitch angle, and an inclination, and wherein the face image i is any one of the M face images;
  • An evaluation module configured to perform image quality evaluation on the K face images to obtain K image quality evaluation values;
  • a processing unit configured to perform enhancement processing on the N facial images obtained after the screening unit screens the M facial images;
  • the uploading unit is specifically configured to:
  • FIG. 3 is a schematic structural diagram of a first embodiment of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a screening unit of the image processing apparatus described in FIG. 3 according to an embodiment of the present invention.
  • FIG. 5 is still another schematic structural diagram of a screening unit of the image processing apparatus described in FIG. 3 according to an embodiment of the present invention.
  • FIG. 6 is a schematic structural diagram of an uploading unit of the image processing apparatus described in FIG. 3 according to an embodiment of the present invention.
  • FIG. 7 is still another schematic structural diagram of the image processing apparatus described in FIG. 3 according to an embodiment of the present invention.
  • FIG. 8 is a schematic structural diagram of a second embodiment of an image processing apparatus according to an embodiment of the present invention.
  • References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention.
  • The appearances of this phrase in various places in the specification do not necessarily refer to the same embodiment, nor to separate or alternative embodiments that are mutually exclusive of other embodiments. Those skilled in the art will explicitly and implicitly understand that the embodiments described herein can be combined with other embodiments.
  • the image processing apparatus described in the embodiments of the present invention may include a smart phone (such as an Android mobile phone, an iOS mobile phone, a Windows Phone mobile phone, etc.), a tablet computer, a palmtop computer, a notebook computer, a mobile Internet device (MID, Mobile Internet Devices), or a wearable device.
  • The foregoing is by way of example only and not limitation; the image processing apparatus includes, but is not limited to, the above devices.
  • For example, when the image processing device is a mobile phone, application software implementing the embodiment of the present invention may be installed on the mobile phone, and images may be uploaded from the mobile phone. For example, when security personnel capture an image of a certain person with the camera, the face image may be uploaded to the server.
  • The image processing apparatus in the embodiment of the present invention may be connected to multiple cameras, each camera may be used to capture video images, and each camera may have a corresponding position mark or a corresponding number.
  • The camera can be set up in public places, such as schools, museums, crossroads, pedestrian streets, office buildings, garages, airports, hospitals, subway stations, railway stations, bus stops, supermarkets, hotels, entertainment venues, etc.
  • the video image can be saved to the memory of the system where the image processing device is located.
  • a plurality of image libraries can be stored in the memory, and each image library can include different video images of the same person.
  • each image library can also be used to store a video image of one area or a video image taken by a specified camera.
  • Each frame of the video image captured by the camera corresponds to one piece of attribute information.
  • The attribute information is at least one of the following: a shooting time of the video image, a position of the video image, an attribute parameter of the video image (format, size, resolution, etc.), the number of the video image, and the character features in the video image.
  • the character feature attributes in the above video image may include, but are not limited to, the number of people in the video image, the position of the person, the angle of the person, and the like.
  • The picture format of the video image in the embodiment of the present invention may include, but is not limited to, BMP, JPEG, JPEG2000, PNG, etc., and the size may be between 10 and 30 KB. Each video image may also correspond to information such as a shooting time, the number of the camera that captured the video image, and a link to the panoramic image corresponding to the face image (a correspondence-relationship file is created between the face image and the global image).
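  • As an illustration of the per-frame attribute information just described, here is a minimal sketch in Python; the field names and types are assumptions made for this example rather than a format defined by the patent.

```python
# Sketch of per-frame attribute information (illustrative field names only).
from dataclasses import dataclass, field
from typing import List

@dataclass
class FrameAttributes:
    shooting_time: str                 # when the video image was captured
    position: str                      # where it was captured (camera location)
    image_format: str                  # e.g. "JPEG", "PNG", "BMP"
    size_kb: float                     # file size, e.g. between 10 and 30 KB
    resolution: str                    # e.g. "1920x1080"
    frame_number: int                  # number of the video image
    camera_number: int                 # number of the capturing camera
    person_count: int = 0              # character features: number of people
    person_angles: List[float] = field(default_factory=list)  # angle per person
```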
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of an image processing method according to an embodiment of the present invention.
  • the image processing method described in this embodiment includes the following steps:
  • the image to be processed can be any image taken by the camera.
  • the image to be processed may also be multiple images.
  • The image to be processed may be an image containing a certain person, and the certain person may be specified by the user.
  • One or more face images may be included in the image to be processed.
  • The image processing device needs to perform face detection on the image to be processed so that the face image in the image to be processed can be extracted; when the image to be processed includes multiple faces, a plurality of face images are obtained.
  • Face detection can be implemented by at least one of the common face detection algorithms listed below.
  • Boosting classifier: a boosting classifier can be considered a feature screening algorithm. Because of its simplicity and generalization ability, it is very widely applied in many fields. Specifically, boosting partitions the sample space by selecting features and increasing the weight of misclassified samples.
  • DPM (Deformable Part Model) face detector
  • DPM divides a rigid or non-rigid object into a number of sub-components and describes the object to be identified by describing each sub-component; each component and sub-component is characterized by HOG features.
  • The response filter of each part is solved by an optimization algorithm. Because its computation is relatively complex, its application in many fields is limited.
  • ACF (Aggregated Channel Feature) face detector
  • ACF is an extension of ICF (Integral Channel Features), equivalent to performing sub-sampling on the basis of ICF.
  • The advantage of this is that, on the one hand, the feature dimension is reduced; on the other hand, the resistance to deformation is increased.
  • ACF was first used in the field of pedestrian detection, and it has also been applied to face detection with good results. However, its computational overhead is large, the features are highly redundant, and there is still considerable room for improvement.
  • PICO (Pixel Intensity Comparison-based Object detection) face detector
  • PICO is a feature description algorithm based on statistical characteristics; its feature description is similar to that of Ferns. Because it is simple to compute and has strong descriptive ability, it is applied in many computer vision areas such as object detection, target recognition, and target tracking. Recently it has also been applied to face detection; its accuracy is average, but its computation speed is very fast. The reason is that the feature expression is too simple, so there is relatively large room for improvement.
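  • As a concrete illustration of the face detection step (obtaining M face images from the image to be processed), the following is a minimal sketch using OpenCV's Haar-cascade detector, a boosted-classifier detector in the spirit of the algorithms listed above; the cascade file, parameters, and function name are assumptions for this example, not part of the patent.

```python
# Sketch of face detection: crop M face images out of the image to be processed.
import cv2

def detect_face_images(image_path):
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # detectMultiScale returns zero or more (x, y, w, h) bounding boxes.
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Crop each detected face so the later screening step works on face images only.
    return [image[y:y + h, x:x + w] for (x, y, w, h) in boxes]
```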
  • N is an integer less than or equal to M.
  • For example, the user may select N face images from the M face images.
  • Alternatively, the M face images may be sorted according to image quality from good to bad, so that the N face images with better image quality are preferentially selected from the M face images, as in the sketch below.
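  • A minimal sketch of that sort of quality-based screening follows; quality_score stands in for any of the evaluation indices discussed later (entropy, mean square error, and so on) and is an assumption for this example.

```python
# Sketch of screening M face images down to N by sorting on a quality score.
def select_top_n(face_images, n, quality_score):
    ranked = sorted(face_images, key=quality_score, reverse=True)
    return ranked[:n]  # the N face images with the best image quality
```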
  • A2) comparing the angle information of the face image i with preset angle information, and retaining the face image i when the comparison between the angle information of the face image i and the preset angle information succeeds.
  • Because the angle of the face captured by the camera is constantly changing, if the user moves quickly it is possible that the captured image is unclear, contains only half of the face, or shows only a silhouette, so that the image cannot be recognized. Therefore, in the process of uploading an image, a requirement is placed on the angle of the face image; if the angle is not within a certain range, subsequent processing is inconvenient.
  • The preset angle information is used to standardize the uploaded images; only images whose angle information matches the preset angle information can be uploaded to the server.
  • the preset angle information may include at least one of the following: a horizontal rotation angle range, a pitch angle range, and an inclination range.
  • For example, the horizontal rotation angle range may be ±30°, the pitch angle range ±20°, and the inclination range ±45°, as in the sketch below.
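  • The angle screening of step A2) could look like the following minimal sketch, using the example ranges above (horizontal rotation ±30°, pitch ±20°, inclination ±45°); the face_angles dictionary and helper names are assumptions for this example.

```python
# Sketch of step A2): retain only face images whose angles fall in the preset ranges.
YAW_RANGE, PITCH_RANGE, TILT_RANGE = 30.0, 20.0, 45.0  # example ranges from the text

def angle_ok(face_angles):
    """face_angles: dict with 'yaw', 'pitch' and 'tilt' in degrees."""
    return (abs(face_angles["yaw"]) <= YAW_RANGE
            and abs(face_angles["pitch"]) <= PITCH_RANGE
            and abs(face_angles["tilt"]) <= TILT_RANGE)

def screen_by_angle(face_images, get_angles):
    # get_angles(img) is assumed to return the angle information of a face image.
    return [img for img in face_images if angle_ok(get_angles(img))]
```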
  • Optionally, screening the M facial images to obtain N facial images may include the following steps:
  • A continuous shooting mode may be adopted to obtain a plurality of images corresponding to the target preview parameters; the question is then how to select the image with the best quality from the above plurality of images.
  • An image quality evaluation value may be obtained for each image by using at least one image quality evaluation index, where the index may include, but is not limited to, average gray level, mean square error, entropy, edge retention, signal-to-noise ratio, and so on. It can be defined that the larger the image quality evaluation value, the better the image quality.
  • For example, a single image quality evaluation index may be used for evaluation.
  • For example, when entropy is used as the image quality evaluation value, a larger entropy indicates higher image quality, and a smaller entropy indicates worse image quality; a sketch of such an entropy computation is given below.
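  • One way such an entropy index could be computed is sketched below, as the Shannon entropy of the gray-level histogram (larger entropy treated as better quality); this is an illustrative assumption, not the patent's definition of the index.

```python
# Sketch of an entropy-based image quality evaluation value.
import numpy as np

def gray_entropy(gray_image):
    """gray_image: 2-D uint8 array of gray levels 0..255; returns entropy in bits."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    p = hist / hist.sum()   # gray-level probabilities
    p = p[p > 0]            # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log2(p)))
```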
  • Optionally, step 104, uploading the N facial images to the server, includes:
  • For steps 201-203, reference may be made to steps 101-103 of the image processing method described in FIG. 1.
  • the output device 2000 described above may specifically be a display screen.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An image processing method and device, and a storage medium are provided. The method comprises: acquiring an image to be processed (101, 201); performing face detection on the image to be processed to obtain M face images, M being a positive integer (102, 202); screening the M face images to obtain N face images, N being an integer less than or equal to M (103, 203); and uploading the N face images to a server (104). The present invention achieves intelligent uploading of images.
PCT/CN2017/114856 2016-12-24 2017-12-06 Image processing method and apparatus, and storage medium WO2018113523A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611210672.0 2016-12-24
CN201611210672.0A CN106778645B (zh) 2016-12-24 2016-12-24 一种图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2018113523A1 true WO2018113523A1 (fr) 2018-06-28

Family

ID=58920464

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/114856 WO2018113523A1 (fr) 2016-12-24 2017-12-06 Image processing method and apparatus, and storage medium

Country Status (2)

Country Link
CN (1) CN106778645B (fr)
WO (1) WO2018113523A1 (fr)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778645B (zh) * 2016-12-24 2018-05-18 深圳云天励飞技术有限公司 一种图像处理方法及装置
CN108875512B (zh) * 2017-12-05 2021-04-23 北京旷视科技有限公司 人脸识别方法、装置、系统、存储介质和电子设备
CN108764149B (zh) * 2018-05-29 2022-02-18 北京中庆现代技术股份有限公司 一种针对班级学生人脸模型的训练方法
CN109101646B (zh) * 2018-08-21 2020-12-18 北京深瞐科技有限公司 数据处理方法、装置、系统及计算机可读介质
CN111199165B (zh) * 2018-10-31 2024-02-06 浙江宇视科技有限公司 图像处理方法及装置
CN111368688A (zh) * 2020-02-28 2020-07-03 深圳市商汤科技有限公司 行人监测方法及相关产品
CN112102623A (zh) * 2020-08-24 2020-12-18 深圳云天励飞技术股份有限公司 交通违章识别方法和装置、智能可穿戴设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090239579A1 (en) * 2008-03-24 2009-09-24 Samsung Electronics Co. Ltd. Mobile device capable of suitably displaying information through recognition of user's face and related method
CN101546377A (zh) * 2009-04-28 2009-09-30 上海银晨智能识别科技有限公司 人脸图像抓取系统及方法
CN102799877A (zh) * 2012-09-11 2012-11-28 上海中原电子技术工程有限公司 人脸图像筛选方法及系统
CN105138954A (zh) * 2015-07-12 2015-12-09 上海微桥电子科技有限公司 一种图像自动筛选查询识别系统
CN106778645A (zh) * 2016-12-24 2017-05-31 深圳云天励飞技术有限公司 一种图像处理方法及装置

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105243638B (zh) * 2015-09-25 2019-11-15 腾讯科技(深圳)有限公司 一种上传图像的方法和装置
CN105654043B (zh) * 2015-12-24 2019-02-12 Oppo广东移动通信有限公司 控制方法、控制装置及拍照系统
CN106127106A (zh) * 2016-06-13 2016-11-16 东软集团股份有限公司 视频中目标人物查找方法和装置


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111488768A (zh) * 2019-01-28 2020-08-04 百度在线网络技术(北京)有限公司 人脸图像的风格转换方法、装置、电子设备及存储介质
CN111488768B (zh) * 2019-01-28 2023-09-05 百度在线网络技术(北京)有限公司 人脸图像的风格转换方法、装置、电子设备及存储介质
CN112860923A (zh) * 2019-11-27 2021-05-28 深圳云天励飞技术有限公司 图像归档方法及相关产品
CN112860923B (zh) * 2019-11-27 2024-03-22 深圳云天励飞技术有限公司 图像归档方法及相关产品
CN111914629A (zh) * 2020-06-19 2020-11-10 北京百度网讯科技有限公司 为人脸识别生成训练数据的方法、装置、设备和存储介质
CN111967436A (zh) * 2020-09-02 2020-11-20 北京猿力未来科技有限公司 图像处理方法及装置
CN111967436B (zh) * 2020-09-02 2024-03-19 北京猿力未来科技有限公司 图像处理方法及装置
CN112347849A (zh) * 2020-09-29 2021-02-09 咪咕视讯科技有限公司 视频会议处理方法、电子设备及存储介质
CN112347849B (zh) * 2020-09-29 2024-03-26 咪咕视讯科技有限公司 视频会议处理方法、电子设备及存储介质
CN112269978A (zh) * 2020-10-22 2021-01-26 支付宝(杭州)信息技术有限公司 图像采集方法以及装置
CN112269978B (zh) * 2020-10-22 2022-11-15 蚂蚁胜信(上海)信息技术有限公司 图像采集方法以及装置
CN112785550A (zh) * 2020-12-29 2021-05-11 浙江大华技术股份有限公司 图像质量值确定方法、装置、存储介质及电子装置
CN112785550B (zh) * 2020-12-29 2024-06-04 浙江大华技术股份有限公司 图像质量值确定方法、装置、存储介质及电子装置
CN115953327A (zh) * 2023-03-09 2023-04-11 极限人工智能有限公司 一种图像增强方法、系统、可读存储介质及电子设备
CN115953327B (zh) * 2023-03-09 2023-09-12 极限人工智能有限公司 一种图像增强方法、系统、可读存储介质及电子设备

Also Published As

Publication number Publication date
CN106778645A (zh) 2017-05-31
CN106778645B (zh) 2018-05-18

Similar Documents

Publication Publication Date Title
WO2018113523A1 (fr) Image processing method and apparatus, and storage medium
WO2018210047A1 (fr) Data processing method, data processing apparatus, electronic device, and storage medium
CN109154976B (zh) 通过机器学习训练对象分类器的系统和方法
WO2019218824A1 (fr) Movement track acquisition method and related device, storage medium, and terminal
CN109766779B (zh) 徘徊人员识别方法及相关产品
US8538141B2 (en) Classifier learning image production program, method, and system
CN109740444B (zh) 人流量信息展示方法及相关产品
Othman et al. A new IoT combined body detection of people by using computer vision for security application
CN109815843B (zh) 图像处理方法及相关产品
WO2020094091A1 (fr) Image capturing method, surveillance camera, and surveillance system
Avgerinakis et al. Recognition of activities of daily living for smart home environments
US20140314271A1 (en) Systems and Methods for Pedestrian Detection in Images
JP2013196682A (ja) 人の集団検出方法、及び人の集団検出装置
TW202026948A (zh) 活體檢測方法、裝置以及儲存介質
JP3970877B2 (ja) 追跡装置および追跡方法
CN109815839B (zh) 微服务架构下的徘徊人员识别方法及相关产品
US10373015B2 (en) System and method of detecting moving objects
WO2020233000A1 (fr) Facial recognition method and apparatus, and computer-readable storage medium
CN107103299B (zh) 一种监控视频中的人数统计方法
WO2021047492A1 (fr) Target tracking method, device, and computer system
JP4999794B2 (ja) 静止領域検出方法とその装置、プログラム及び記録媒体
CN115002414A (zh) 监测方法、装置及服务器和计算机可读存储介质
WO2018210039A1 (fr) Data processing method, data processing device, computer device, and storage medium
WO2018113206A1 (fr) Terminal and image processing method
Usha Rani et al. Real-time human detection for intelligent video surveillance: an empirical research and in-depth review of its applications

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17884793

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17884793

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 12/09/2019)

122 Ep: pct application non-entry in european phase

Ref document number: 17884793

Country of ref document: EP

Kind code of ref document: A1