WO2018210047A1 - 数据处理方法、数据处理装置、电子设备及存储介质 - Google Patents

数据处理方法、数据处理装置、电子设备及存储介质 (Data processing method, data processing apparatus, electronic device, and storage medium)

Info

Publication number
WO2018210047A1
WO2018210047A1 (PCT/CN2018/079370)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
parameter set
face image
stored
feature points
Prior art date
Application number
PCT/CN2018/079370
Other languages
English (en)
French (fr)
Inventor
魏运运
彭程
石小华
李兰
Original Assignee
深圳云天励飞技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.): 2017-05-18
Filing date: 2018-03-16
Publication date: 2018-11-22
Application filed by 深圳云天励飞技术有限公司 filed Critical 深圳云天励飞技术有限公司
Publication of WO2018210047A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 - Information retrieval; Database structures therefor; File system structures therefor of still image data
    • G06F16/51 - Indexing; Data structures therefor; Storage structures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Definitions

  • The present invention relates to the field of video surveillance technologies, and in particular to a data processing method, a data processing apparatus, an electronic device, and a storage medium.
  • Embodiments of the present invention provide a data processing method, a data processing apparatus, an electronic device, and a storage medium, which allow a target to be quickly enrolled into a registration database.
  • A first aspect of the embodiments of the present invention provides a data processing method, including: acquiring, by a camera, identity information and a face image of an object to be enrolled; performing feature extraction on the face image to obtain a feature parameter set; and performing an enrollment operation according to the identity information and the feature parameter set, so that the object to be enrolled becomes a member of a pre-stored registration database.
  • In a first possible implementation, performing feature extraction on the face image to obtain the feature parameter set includes: extracting feature points from the face image to obtain P feature points, P being an integer greater than 1; filtering the P feature points to obtain Q feature points, Q being an integer greater than 1 and smaller than P; taking the direction and the position of each of the Q feature points as feature parameters to obtain Q feature parameters; extracting contours from the face image to obtain K feature contours and taking them as feature parameters to obtain K feature parameters; and combining the Q feature parameters and the K feature parameters into the feature parameter set.
  • In this implementation, the feature points extracted from the face image are filtered so that only the more robust points remain, and the direction and position of each retained point are used as feature parameters. Compared with marking the face image merely by the number of its feature points, this marks the image more precisely. In addition, from the contour dimension, feature contours are extracted from the face image and used as further feature parameters. The parameters obtained from the two dimensions are combined into the feature parameter set, so the face image is characterised by both its feature points and its contours; the resulting feature parameter set reflects the face image better and helps improve the accuracy of face matching in later applications.
  • In a second possible implementation, filtering the P feature points to obtain the Q feature points includes: determining a central feature point of the P feature points, and selecting, from the P feature points, the feature points that lie within a preset radius of the central feature point, to obtain the Q feature points.
  • A central feature point is selected from the candidate feature points, and only the points within the preset radius around it are kept. The central feature point is usually stable, and because changes within an image are gradual, the feature points around it also tend to be stable; in this way the feature points can be filtered quickly.
  • In a third possible implementation, after performing feature extraction on the face image to obtain the feature parameter set, and before performing the enrollment operation according to the identity information and the feature parameter set, the method further includes: searching the registration database according to the feature parameter set, and performing the enrollment operation only when no matching result is found.
  • Because the enrollment is triggered by a camera, the captured face image may already exist in the registration database. Searching the registration database first, and enrolling only when no result is found, prevents the same object from being registered twice.
  • In a fourth possible implementation, after the enrollment operation is performed according to the identity information and the feature parameter set, the method further includes: acquiring update information of the enrolled object, and updating its enrollment record according to the update information.
  • Enrollment is designed to be fast, so the record may be incomplete at the moment it is created. The enrollment operation can therefore be performed first, and when the camera (or an associated system) captures more information about the object, the enrollment record is refined, so that the information in the registration database is updated dynamically.
  • A second aspect of the embodiments of the present invention provides a data processing apparatus, including:
  • a first acquiring unit configured to acquire, by a camera, identity information and a face image of an object to be enrolled;
  • an extracting unit configured to perform feature extraction on the face image to obtain a feature parameter set; and
  • a processing unit configured to perform an enrollment operation according to the identity information and the feature parameter set, so that the object to be enrolled becomes a member of a pre-stored registration database.
  • In a first possible implementation of the second aspect, the extracting unit includes:
  • a first extraction module configured to perform feature point extraction on the face image to obtain P feature points, where P is an integer greater than 1;
  • a screening module configured to filter the P feature points to obtain Q feature points, where Q is an integer greater than 1 and smaller than P;
  • a first determining module configured to take the direction and the position of each of the Q feature points as feature parameters to obtain Q feature parameters;
  • a second extraction module configured to perform contour extraction on the face image to obtain K feature contours and to take the K feature contours as feature parameters to obtain K feature parameters; and
  • a second determining module configured to combine the Q feature parameters and the K feature parameters into the feature parameter set.
  • In a second possible implementation of the second aspect, the screening module is specifically configured to determine a central feature point of the P feature points and to select, from the P feature points, the feature points within a preset radius of the central feature point, to obtain the Q feature points.
  • In a third possible implementation of the second aspect, the apparatus further includes:
  • a searching unit configured to search the registration database according to the feature parameter set after the extracting unit has performed feature extraction on the face image to obtain the feature parameter set; when the searching unit finds no matching result, the processing unit performs the step of the enrollment operation according to the identity information and the feature parameter set.
  • In a fourth possible implementation of the second aspect, the apparatus further includes:
  • a second acquiring unit configured to acquire update information of the enrolled object after the processing unit performs the enrollment operation according to the identity information and the feature parameter set; and
  • an updating unit configured to update the enrollment record of the object according to the update information.
  • A third aspect of the embodiments of the present invention provides an electronic device, which includes a processor; when executing a computer program stored in a memory, the processor implements the data processing method provided by the first aspect.
  • A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the method of the first aspect or of any possible implementation of the first aspect.
  • With the embodiments of the present invention, the data processing apparatus can acquire, by a camera, the identity information and the face image of an object to be enrolled, perform feature extraction on the face image to obtain a feature parameter set, and perform the enrollment operation according to the identity information and the feature parameter set, so that the object becomes a member of the pre-stored registration database.
  • In other words, the camera is used to obtain the face image and the identity information of the object, features are extracted from the face image to obtain its feature parameter set, and the enrollment operation is then performed. Because the information of the object is obtained directly through the camera rather than by questioning the object in person, enrollment efficiency is improved.
  • For example, when a suspicious person is spotted on the monitoring platform of a supermarket and cannot be stopped on site, the person can still be enrolled by means of the above embodiments, and staff can be notified to stop the person's behaviour or deal with it afterwards.
  • FIG. 1 is a schematic flowchart of a first embodiment of a data processing method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of a second embodiment of a data processing method according to an embodiment of the present invention;
  • FIG. 3a is a schematic structural diagram of a first embodiment of a data processing apparatus according to an embodiment of the present invention;
  • FIG. 3b is a schematic structural diagram of the extracting unit of the data processing apparatus described in FIG. 3a;
  • FIG. 3c is another schematic structural diagram of the data processing apparatus described in FIG. 3a;
  • FIG. 3d is still another schematic structural diagram of the data processing apparatus described in FIG. 3a;
  • FIG. 4 is a schematic structural diagram of a second embodiment of a data processing apparatus according to an embodiment of the present invention.
  • References to "an embodiment" herein mean that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor to separate or alternative embodiments that are mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described herein can be combined with other embodiments.
  • The data processing apparatus described in the embodiments of the present invention may include a smartphone (such as an Android phone, an iOS phone, or a Windows Phone device), a tablet computer, a palmtop computer, a notebook computer, a mobile Internet device (MID), or a wearable device. The foregoing is merely an example rather than an exhaustive list, and the data processing apparatus may also be a server.
  • The data processing apparatus in the embodiments of the present invention may be connected to multiple cameras. Each camera can be used to capture video images, and each camera may have a corresponding position mark or a corresponding number.
  • Cameras can be placed in public places, such as schools, museums, crossroads, pedestrian streets, office buildings, garages, airports, hospitals, subway stations, stations, bus stops, supermarkets, hotels, entertainment venues, and so on.
  • After a camera captures a video image, the video image can be saved to the memory of the system where the data processing apparatus is located. The memory can store a plurality of image libraries, and each image library can contain different video images of the same person; each image library can also be used to store the video images of one area or the video images taken by a specified camera.
  • Optionally, each frame of video image captured by a camera corresponds to a piece of attribute information, which is at least one of the following: the shooting time of the video image, the position of the video image, an attribute parameter of the video image (format, size, resolution, etc.), the number of the video image, and the person attributes in the video image. The person attributes may include, but are not limited to, the number of people in the video image, the positions of the people, the angles of the people, and the like. One possible way to represent such a per-frame record is sketched below.
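  • As an illustration only (the patent does not prescribe any data layout), such per-frame attribute information could be held in a small record type; every field name below is an assumption of this sketch:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class FrameAttributes:
    """Attribute information attached to one captured video frame (illustrative)."""
    shot_time: str                                   # shooting time of the video image
    location: str                                    # position / camera location label
    image_format: str                                # e.g. "JPEG"
    size_kb: int                                     # file size in KB
    resolution: Tuple[int, int]                      # (width, height)
    frame_number: int                                # number of the video image
    person_count: int = 0                            # number of people in the frame
    person_boxes: List[Tuple[int, int, int, int]] = field(default_factory=list)  # person positions
    person_angles: List[float] = field(default_factory=list)                     # person/face angles
```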
  • The video images captured by each camera are usually dynamic face images, so the embodiments of the present invention may analyse the angle of the face image; the angle may include, but is not limited to, a horizontal rotation (yaw) angle, a pitch angle, and an inclination (roll) angle.
  • For example, the dynamic face image data may be required to have a distance between the two eyes of not less than 30 pixels, with more than 60 pixels recommended; the horizontal rotation angle should not exceed ±30°, the pitch angle should not exceed ±20°, and the inclination angle should not exceed ±45°. It is recommended that the horizontal rotation angle not exceed ±15°, the pitch angle not exceed ±10°, and the inclination angle not exceed ±15°. A simple check of these thresholds is sketched below.
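  • A minimal check of the thresholds quoted above might look as follows; how the eye distance and the pose angles are estimated is outside the scope of this sketch and is assumed to be provided by a face detector:

```python
def face_meets_capture_requirements(eye_distance_px: float,
                                    yaw_deg: float,
                                    pitch_deg: float,
                                    roll_deg: float,
                                    strict: bool = False) -> bool:
    """Apply the dynamic-face-image thresholds: eye distance >= 30 px (60+ recommended),
    and pose angles within the mandatory (or, with strict=True, the recommended) limits."""
    if eye_distance_px < 30:
        return False
    if strict:                                   # recommended limits
        return abs(yaw_deg) <= 15 and abs(pitch_deg) <= 10 and abs(roll_deg) <= 15
    return abs(yaw_deg) <= 30 and abs(pitch_deg) <= 20 and abs(roll_deg) <= 45  # mandatory limits
```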
  • The picture format of the video images in the embodiments of the present invention may include, but is not limited to, BMP, JPEG, JPEG2000, PNG, and so on, and the size may be between 10 and 30 KB. Each video image may also be associated with information such as its shooting time, the unified number of the camera that took it, and a link to the panoramic image corresponding to the face image (a correspondence file is created between the face image and the global picture).
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of a data processing method according to an embodiment of the present invention.
  • the data processing method described in this embodiment includes the following steps:
  • The identity information of the object to be enrolled may include, but is not limited to, an ID card number, height, weight, home address, mobile phone number, bank card number, social account, occupation, and so on.
  • When the object to be enrolled is spotted, its face image can be obtained by the camera, and the overall image of the object can also be analysed through the camera to estimate the object's height and age. Further, the face image may be sent to other auxiliary systems (such as a public security system, a banking system, or a social security system); those systems identify the face image and may return further identity information of the object, such as weight, home address, mobile phone number, bank card number, social account, or occupation.
  • Optionally, step 101 may be performed as follows: 11) acquiring, by the camera, M first images of the object to be enrolled, M being an integer greater than 1; 12) performing image quality evaluation on the M first images to obtain M image quality evaluation values; 13) selecting, from the M image quality evaluation values, the values greater than a preset quality threshold to obtain N image quality evaluation values and the corresponding N first images, N being an integer greater than 1 and smaller than M; 14) selecting, from the N first images, the first image with the best face angle as the face image; and 15) determining the identity information of the object to be enrolled according to the M first images.
  • The first images in the above steps are images related to the object to be enrolled; they are not limited to face images and may also be other images captured by the camera, such as a back view or a profile view. The first images may come from a video: the video is split into individual frames, each frame is recognised, and a series of images related to the object to be enrolled is obtained. This series naturally includes images containing the object's face, but may also include images of the object that do not contain the face, for example its profile or its back.
  • In practice, a target tracking algorithm may first be applied to the video to obtain the series of images related to the object, after which each of those frames is analysed: image quality evaluation is performed on every frame and the frames with better quality are selected, and then the face angle in each of the better-quality frames is analysed. Different angles carry different amounts of facial information; a frontal view is normally the best, and for camera captures the closer the face is to a frontal view, the better, so the image with the best face angle is chosen from the good-quality images.
  • For example, when the object is moving it is hard to capture its face, so multiple images are captured, the ones with good image quality are selected, and from those the one with the best face angle is chosen. In this way a suitable face image can be selected as the enrollment image of the object, which improves the recognition accuracy for that object; if, on the contrary, an unclear or badly angled face image were chosen as the enrollment image, the probability of misidentification in later use would be higher. A sketch of this selection pipeline follows.
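  • The following sketch shows one possible shape of steps 12) to 14), assuming OpenCV is available; entropy is used as a stand-in quality score, and the frontalness scoring function is a caller-supplied assumption rather than anything specified by the patent:

```python
import cv2
import numpy as np

def image_quality_score(gray: np.ndarray) -> float:
    """Toy quality score: gray-level entropy (one of the indicators mentioned below)."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / max(hist.sum(), 1.0)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_enrollment_face(frames, quality_threshold: float, frontalness):
    """Keep the frames whose quality exceeds the threshold, then pick the most frontal face.

    `frames`: list of BGR images of the object to be enrolled (the M first images).
    `frontalness(img)`: caller-supplied scorer, higher = closer to a frontal view.
    """
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    candidates = [f for f, g in zip(frames, grays)
                  if image_quality_score(g) > quality_threshold]      # the N good-quality images
    if not candidates:
        return None
    return max(candidates, key=frontalness)                           # best face angle
```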
  • In step 15, the M first images can be analysed to obtain the height, body shape, face shape and so on of the object to be enrolled, and these are used as its identity information. For example, when the embodiments of the present invention are used in the monitoring system of a shopping mall or supermarket and a suspicious person is found, the person's face image and corresponding identity information can be obtained through the camera and the person can be enrolled in time. Thus, even if the specific identity of an object is unknown, it can still be registered, so that the system records the suspicious object and, the next time the object appears within the monitoring range, the staff can be notified to pay attention in time.
  • In step 12, the image quality evaluation of the M first images can be performed as follows: at least one image quality evaluation index is applied to an image to obtain an image quality evaluation value, where the indices may include, but are not limited to, mean gray level, mean square error, entropy, edge retention, signal-to-noise ratio, and so on. It can be defined that the larger the resulting evaluation value, the better the image quality.
  • Because a single evaluation index has limitations, several indices may be combined. More indices are not always better: the more indices are used, the higher the computational complexity of the evaluation, and the result is not necessarily better. Where higher accuracy is required, 2 to 10 image quality evaluation indices may be used; how many indices and which ones are chosen depends on the specific implementation and on the scene, and the indices suitable for a dark environment may differ from those suitable for a bright one.
  • Optionally, where the accuracy requirement is low, a single image quality evaluation index can be used; for example, when entropy is used to evaluate the image, a larger entropy can be taken to indicate better image quality and a smaller entropy to indicate worse image quality.
  • Optionally, where the accuracy requirement is high, multiple image quality evaluation indices can be used. A weight is set for each index, an evaluation value is obtained for each index, and the final image quality evaluation value is computed from the per-index values and their weights. For example, with three indices A, B and C whose weights are a1, a2 and a3, if evaluating an image gives the values b1, b2 and b3 for A, B and C respectively, the final image quality evaluation value is a1·b1 + a2·b2 + a3·b3. Normally, the larger the image quality evaluation value, the better the image quality. A minimal sketch of this weighted combination follows.
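  • A minimal sketch of the weighted combination above; the indicator scores, weights and example numbers are arbitrary choices of this sketch:

```python
import numpy as np

def weighted_quality(indicator_values, weights) -> float:
    """Final image quality evaluation value = a1*b1 + a2*b2 + a3*b3 + ..."""
    v = np.asarray(indicator_values, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((v * w).sum())

# Indicators A, B, C with weights a1=0.5, a2=0.3, a3=0.2 and per-indicator scores
# b1=0.7, b2=0.5, b3=0.9 give 0.5*0.7 + 0.3*0.5 + 0.2*0.9 = 0.68:
score = weighted_quality([0.7, 0.5, 0.9], [0.5, 0.3, 0.2])
```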
  • In step 102, the data processing apparatus may extract feature points or feature contours from the face image and use them as the feature parameter set. The feature extraction methods may include, but are not limited to, the Harris corner detection algorithm, the Scale-Invariant Feature Transform (SIFT) extraction algorithm, and feature extraction with a classifier; the classifier may include, but is not limited to, a Support Vector Machine (SVM), a convolutional neural network, a cascaded neural network, a genetic algorithm, and so on.
  • Of course, when the face image is not clear, enhancement processing can be applied to it first. The enhancement may include at least one of smoothing, gray-level stretching and histogram equalization, which improves the quality of the face image; feature extraction is then performed on the enhanced face image to obtain the feature parameter set, and more features can be extracted at this point. A sketch of such an enhancement step is given below.
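  • A minimal sketch of the enhancement step, assuming OpenCV and a grayscale face crop; the percentile-based gray-level stretch is one of several possible choices:

```python
import cv2
import numpy as np

def enhance_face(gray: np.ndarray) -> np.ndarray:
    """Smoothing, gray-level stretching and histogram equalization, in that order."""
    smoothed = cv2.GaussianBlur(gray, (3, 3), 0)                       # smoothing
    lo, hi = np.percentile(smoothed, (1, 99))
    stretched = np.clip((smoothed.astype(np.float32) - lo) * 255.0 / max(hi - lo, 1.0),
                        0, 255).astype(np.uint8)                       # gray-level stretch
    return cv2.equalizeHist(stretched)                                 # histogram equalization
```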
  • Optionally, in step 102, performing feature extraction on the face image to obtain the feature parameter set may include the following steps: 21) extracting feature points from the face image to obtain P feature points, P being an integer greater than 1; 22) filtering the P feature points to obtain Q feature points, Q being an integer greater than 1 and smaller than P; 23) taking the direction and position of each of the Q feature points as feature parameters to obtain Q feature parameters; 24) extracting contours from the face image to obtain K feature contours and taking them as feature parameters to obtain K feature parameters; and 25) combining the Q feature parameters and the K feature parameters into the feature parameter set.
  • The data processing apparatus first extracts feature points from the face image to obtain the P feature points, which are all the points produced by a preset feature point extraction algorithm; the preset algorithm may include, but is not limited to, the Harris corner detection algorithm, the Scale-Invariant Feature Transform (SIFT) algorithm, and so on. Because these points are not necessarily robust, they are filtered: the P feature points are screened to obtain the Q feature points, mainly by discarding points whose features are not distinctive. The direction and position of each of the Q feature points are then taken as feature parameters. The purpose is to strengthen the marking of the face image: the position gives the coordinates of the feature point in the face image, while the direction reflects the orientation indicated at that position. This not only enriches the description of each feature point but also raises the bar for feature point matching during face recognition, which can improve the recognition accuracy of the face image; using the direction and position of the feature points therefore reflects the characteristics of the face image better, and Q feature parameters are obtained. A sketch of extracting position-and-direction parameters is given below.
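  • As a sketch of steps 21) and 23), SIFT is convenient because each keypoint already carries a position and an orientation; SIFT is only one of the algorithms listed above, and cv2.SIFT_create requires opencv-python 4.4 or later:

```python
import cv2

def keypoint_parameters(gray, max_points: int = 200):
    """Extract feature points and keep each point's position and direction as feature parameters."""
    sift = cv2.SIFT_create(nfeatures=max_points)
    keypoints = sift.detect(gray, None)                              # the P candidate feature points
    return [((kp.pt[0], kp.pt[1]), kp.angle) for kp in keypoints]    # (position, direction) pairs
```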
  • Next, contour extraction is performed on the face image to obtain K feature contours, which are all the contours produced by a preset contour extraction algorithm; the preset contour extraction algorithm may include, but is not limited to, the Hough transform, Haar operator detection, the Canny operator detection algorithm, and so on. The K feature contours are taken as feature parameters to obtain K feature parameters, and the Q feature parameters and the K feature parameters are combined into the feature parameter set.
  • In this way the face image is described along two dimensions, which helps improve face anti-counterfeiting accuracy: the two dimensions mark the features of the face image at a deeper level, so that during face recognition not only the feature points but also the feature contours have to match. A sketch of the contour dimension and of combining the two dimensions follows.
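  • A sketch of the contour dimension (step 24) and of combining both dimensions (step 25), reusing keypoint_parameters from the previous sketch; Canny edges followed by cv2.findContours is one possible realisation of the contour extraction listed above:

```python
import cv2

def contour_parameters(gray, k_max: int = 20):
    """Extract edges, then keep the K longest contours as feature parameters."""
    edges = cv2.Canny(gray, 80, 160)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=lambda c: cv2.arcLength(c, False), reverse=True)
    return contours[:k_max]                                          # the K feature contours

def build_feature_parameter_set(gray):
    """Combine the point-based and contour-based parameters into one feature parameter set."""
    return {"points": keypoint_parameters(gray), "contours": contour_parameters(gray)}
```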
  • Optionally, in step 22, filtering the P feature points to obtain the Q feature points may include the following steps: 221) determining a central feature point of the P feature points; and 222) selecting, from the P feature points, the feature points that lie within a preset radius of the central feature point, to obtain the Q feature points.
  • The preset radius can be set by the user or by system default. The data processing apparatus can map the P feature points into a coordinate system and use a geometric method to determine their central feature point, and then select from the P feature points those centred on the central feature point and within the preset radius, obtaining the Q feature points. The central feature point is not necessarily one of the P feature points: it may be the geometric centre of the P feature points, or a feature point close to that centre. In this way the feature points with significant features in the face image can be determined, improving the precision and accuracy of face recognition. A minimal sketch of this radius-based filtering follows.
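  • A minimal sketch of steps 221) and 222), taking the centroid of the candidate points as the central feature point (the text notes it may equally be a feature point close to that centre):

```python
import numpy as np

def filter_points_around_center(points, radius: float):
    """Keep only the feature points within `radius` of the geometric centre of all points.

    `points` is a sequence of (x, y) positions (the P candidate points);
    the returned array contains the Q filtered points."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)                      # geometric centre of the P points
    dist = np.linalg.norm(pts - center, axis=1)
    return pts[dist <= radius]
```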
  • In step 103, the data processing apparatus can store the identity information and the feature parameter set in the database as the registration information of the object to be enrolled, and can also generate a code identifier for the object. Once enrollment succeeds, the object becomes a member of the pre-stored registration database. In this way, the enrollment of different objects can be completed conveniently and quickly; a minimal sketch of such a storage step follows.
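  • A minimal sketch of such an enrollment record; the SQLite schema, the JSON encoding and the UUID-based code identifier are illustrative choices of this sketch, not details taken from the patent (feature parameters are assumed to be plain lists/dicts):

```python
import json
import sqlite3
import uuid

def enroll(db_path: str, identity_info: dict, feature_parameters: dict) -> str:
    """Store identity information and the feature parameter set as one registration
    record and return the generated code identifier for the enrolled object."""
    code = uuid.uuid4().hex
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS registry "
                     "(code TEXT PRIMARY KEY, identity TEXT, features TEXT)")
        conn.execute("INSERT INTO registry VALUES (?, ?, ?)",
                     (code, json.dumps(identity_info), json.dumps(feature_parameters)))
    return code
```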
  • It can be seen that, with the embodiments of the present invention, the data processing apparatus can acquire, by a camera, the identity information and the face image of an object to be enrolled, perform feature extraction on the face image to obtain a feature parameter set, and perform the enrollment operation according to the identity information and the feature parameter set, so that the object becomes a member of the pre-stored registration database. Because the information of the object is obtained directly through the camera rather than by questioning the object in person, enrollment efficiency is improved.
  • For example, when a suspicious person is spotted on the monitoring platform of a supermarket and cannot be stopped on site, the person can still be enrolled by means of the above embodiments, and staff can be notified to stop the person's behaviour or deal with it afterwards.
  • Referring to FIG. 2, which is a schematic flowchart of a second embodiment of a data processing method according to an embodiment of the present invention.
  • the data processing method described in this embodiment includes the following steps:
  • In step 203, after determining the feature parameter set from the face image, the data processing apparatus can search the pre-stored registration database according to the feature parameter set. The purpose is to avoid repeated registration: when a matching feature parameter set is found, registration stops, and when no result is found, step 204 is performed.
  • In step 204, the absence of any matching result means that no record of the object exists in the registration database, so the enrollment operation can be performed according to the identity information and the feature parameter set, and the object to be enrolled becomes a member of the pre-stored registration database. A minimal search-before-enroll sketch follows.
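  • A minimal sketch of steps 203-204; the registry iterator, the similarity function, the storage callback and the threshold are all assumptions supplied by the caller (the enroll() sketch above could serve as the storage callback):

```python
def enroll_if_new(registry, identity_info, feature_set, match_fn, store_fn, threshold: float = 0.8):
    """Search the registration database first; enroll only when no matching record is found.

    `registry` yields (code, stored_feature_set) pairs, `match_fn` scores the similarity of
    two feature parameter sets, `store_fn(identity_info, feature_set)` inserts a new record
    and returns its code.  Returns (code, newly_enrolled)."""
    for code, stored in registry:
        if match_fn(feature_set, stored) >= threshold:
            return code, False                    # already registered: stop, avoid a duplicate entry
    return store_fn(identity_info, feature_set), True
```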
  • In step 205, the update information may be a mobile phone number (for example, a new phone number), a bank card number (for example, a newly issued card), a home address (for example, the address after moving), and so on. The data processing apparatus can be associated with other systems, such as a public security system, a banking system, a social security system or a carrier system; it can then acquire the update information of the enrolled object at a preset time interval, or whenever the information in those other systems is updated.
  • In step 206, the data processing apparatus can also receive update information entered by a user and use it to update the corresponding enrollment record of the object. For example, in a supermarket, a suspicious person is found on the monitoring platform but cannot be stopped on site; after the person has later been stopped, staff can question the person to obtain more identity information and use it to complete the enrollment record already in the system. A minimal update sketch follows.
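  • A minimal sketch of steps 205-206, continuing the illustrative SQLite schema from the enrollment sketch above; the merge-by-dict-update policy is an assumption of this sketch:

```python
import json
import sqlite3

def update_enrollment(db_path: str, code: str, update_info: dict) -> bool:
    """Merge newly obtained identity fields (new phone number, new bank card number,
    new home address, ...) into the stored enrollment record identified by `code`."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute("SELECT identity FROM registry WHERE code = ?", (code,)).fetchone()
        if row is None:
            return False                           # no such enrollment record
        identity = json.loads(row[0])
        identity.update(update_info)               # overwrite / extend with the update info
        conn.execute("UPDATE registry SET identity = ? WHERE code = ?",
                     (json.dumps(identity), code))
    return True
```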
  • It can be seen that, with the embodiments of the present invention, the data processing apparatus can acquire the identity information and the face image of an object to be enrolled, perform feature extraction on the face image to obtain a feature parameter set, search the registration database according to the feature parameter set and, when no matching result is found, perform the enrollment operation according to the identity information and the feature parameter set, so that the object becomes a member of the pre-stored registration database; afterwards, it can also acquire update information of the enrolled object and update the enrollment record accordingly.
  • Thus, the feature parameter set of the face image is used not only to check whether the object is already registered before enrolling it with its identity information and feature parameter set, but also, once more identity information about the object is obtained, the enrolled record is updated in time, which improves both enrollment efficiency and management efficiency.
  • Referring to FIG. 3a, which is a schematic structural diagram of a first embodiment of a data processing apparatus according to an embodiment of the present invention.
  • The data processing apparatus described in this embodiment includes a first acquiring unit 301, an extracting unit 302 and a processing unit 303, as follows:
  • the first acquiring unit 301 is configured to acquire, by a camera, identity information and a face image of an object to be enrolled;
  • the extracting unit 302 is configured to perform feature extraction on the face image to obtain a feature parameter set;
  • the processing unit 303 is configured to perform an enrollment operation according to the identity information and the feature parameter set, so that the object to be enrolled becomes a member of a pre-stored registration database.
  • Optionally, the first acquiring unit 301 may include an image acquisition module (not shown), an image quality evaluation module (not shown), an image selection module (not shown) and an identity information determining module (not shown), as follows:
  • the image acquisition module is configured to acquire, by the camera, M first images of the object to be enrolled, where M is an integer greater than 1;
  • the image quality evaluation module is configured to perform image quality evaluation on the M first images to obtain M image quality evaluation values;
  • the image selection module is configured to select, from the M image quality evaluation values, the values greater than a preset quality threshold to obtain N image quality evaluation values and the corresponding N first images, where N is an integer greater than 1 and smaller than M;
  • the image selection module is further configured to select, from the N first images, the first image with the best face angle as the face image;
  • the identity information determining module is configured to determine the identity information of the object to be enrolled according to the M first images.
  • Optionally, as shown in FIG. 3b, which is a detailed structure of the extracting unit 302 of the data processing apparatus described in FIG. 3a, the extracting unit includes a first extraction module 3021, a screening module 3022, a first determining module 3023, a second extraction module 3024 and a second determining module 3025, as follows:
  • the first extraction module 3021 is configured to perform feature point extraction on the face image to obtain P feature points, where P is an integer greater than 1;
  • the screening module 3022 is configured to filter the P feature points to obtain Q feature points, where Q is an integer greater than 1 and smaller than P;
  • the first determining module 3023 is configured to take the direction and position of each of the Q feature points as feature parameters to obtain Q feature parameters;
  • the second extraction module 3024 is configured to perform contour extraction on the face image to obtain K feature contours and to take the K feature contours as feature parameters to obtain K feature parameters;
  • the second determining module 3025 is configured to combine the Q feature parameters and the K feature parameters into the feature parameter set.
  • Optionally, the screening module 3022 is specifically configured to determine a central feature point of the P feature points and to select, from the P feature points, the feature points within a preset radius of the central feature point, to obtain the Q feature points.
  • Optionally, as shown in FIG. 3c, which is a further variant structure of the data processing apparatus described in FIG. 3a, the apparatus may also include a searching unit 304, as follows:
  • the searching unit 304 is configured to search the registration database according to the feature parameter set after the extracting unit 302 has performed feature extraction on the face image to obtain the feature parameter set; when the searching unit finds no matching result, the processing unit 303 performs the step of the enrollment operation according to the identity information and the feature parameter set.
  • Optionally, as shown in FIG. 3d, which is a further variant structure of the data processing apparatus described in FIG. 3a, the apparatus may also include a second acquiring unit 305 and an updating unit 306, as follows:
  • the second acquiring unit 305 is configured to acquire update information of the enrolled object after the processing unit 303 performs the enrollment operation according to the identity information and the feature parameter set;
  • the updating unit 306 is configured to update the enrollment record of the object according to the update information.
  • It can be seen that, with the embodiments of the present invention, the data processing apparatus can acquire, by a camera, the identity information and the face image of an object to be enrolled, perform feature extraction on the face image to obtain a feature parameter set, search the registration database according to the feature parameter set and, when no matching result is found, perform the enrollment operation according to the identity information and the feature parameter set, so that the object becomes a member of the pre-stored registration database. Because the information of the object is obtained directly through the camera rather than by questioning the object in person, enrollment efficiency is improved.
  • For example, when a suspicious person is spotted on the monitoring platform of a supermarket and cannot be stopped on site, the person can still be enrolled by means of the above embodiments, and staff can be notified to stop the person's behaviour or deal with it afterwards.
  • Referring to FIG. 4, which is a schematic structural diagram of a second embodiment of a data processing apparatus according to an embodiment of the present invention.
  • The data processing apparatus described in this embodiment includes at least one input device 1000, at least one output device 2000, at least one processor 3000 (for example a CPU) and a memory 4000; the input device 1000, the output device 2000, the processor 3000 and the memory 4000 are connected through a bus 5000.
  • the input device 1000 may be a touch panel, a physical button, or a mouse.
  • the output device 2000 described above may specifically be a display screen.
  • the above memory 4000 may be a high speed RAM memory or a non-volatile memory such as a magnetic disk memory.
  • the above memory 4000 is used to store a set of program codes, and the input device 1000, the output device 2000, and the processor 3000 are used to call the program code stored in the memory 4000, and perform the following operations:
  • the processor 3000 is configured to: acquire, by a camera, identity information and a face image of an object to be enrolled; perform feature extraction on the face image to obtain a feature parameter set; and perform an enrollment operation according to the identity information and the feature parameter set, so that the object to be enrolled becomes a member of a pre-stored registration database.
  • Optionally, when the processor 3000 performs feature extraction on the face image to obtain the feature parameter set, it: extracts feature points from the face image to obtain P feature points, P being an integer greater than 1; filters the P feature points to obtain Q feature points, Q being an integer greater than 1 and smaller than P; takes the direction and position of each of the Q feature points as feature parameters to obtain Q feature parameters; performs contour extraction on the face image to obtain K feature contours and takes them as feature parameters to obtain K feature parameters; and combines the Q feature parameters and the K feature parameters into the feature parameter set.
  • Optionally, when the processor 3000 filters the P feature points to obtain the Q feature points, it determines a central feature point of the P feature points and selects, from the P feature points, the feature points within a preset radius of the central feature point, to obtain the Q feature points.
  • Optionally, after performing feature extraction on the face image to obtain the feature parameter set, and before performing the enrollment operation according to the identity information and the feature parameter set, the processor 3000 is further configured to search the registration database according to the feature parameter set and to perform the enrollment operation only when no matching result is found.
  • Optionally, after performing the enrollment operation according to the identity information and the feature parameter set, the processor 3000 is further configured to acquire update information of the enrolled object and to update its enrollment record according to the update information.
  • An embodiment of the present invention further provides an electronic device, which includes a processor; when executing a computer program stored in a memory, the processor implements the data processing method described above.
  • An embodiment of the present invention further provides a computer storage medium storing a program which, when executed, performs some or all of the steps of any of the data processing methods described in the foregoing method embodiments.
  • Embodiments of the present invention may be provided as a method, an apparatus (device), or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM and optical storage) containing computer-usable program code. The computer program may be stored in or distributed on a suitable medium, provided with other hardware or as part of the hardware, or distributed in other forms, for example over the Internet or other wired or wireless telecommunication systems.
  • The computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.

Abstract

The present invention provides a data processing method, including: acquiring, by a camera, identity information and a face image of an object to be enrolled; performing feature extraction on the face image to obtain a feature parameter set; and performing an enrollment operation according to the identity information and the feature parameter set, so that the object to be enrolled becomes a member of a pre-stored registration database. The embodiments of the present invention enable a target to be quickly enrolled into the database. The present invention also provides a data processing apparatus, an electronic device and a storage medium.

Description

数据处理方法、数据处理装置、电子设备及存储介质
本申请要求于2017年5月18日提交中国专利局,申请号为201710351946.6、发明名称为“数据处理方法、数据处理装置及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本发明涉及视频监控技术领域,具体涉及一种数据处理方法、数据处理装置、电子设备及存储介质。
背景技术
随着经济、社会、文化的快速发展,越来越多外来人口流向城市,使得城市人口增加。城市人口的增加在加快城市化进程的同时,也给城市管理带来更大的挑战,虽然,视频监控技术为城市安全提供了技术支持,且目前来看,包括多个摄像头的监控系统已经在城市中应用来对一个区域进行监控,但是由于摄像头的数目较多,且每个摄像头的功能较为独立,通常情况下,建立入库则需要管理员寻找到目标,对目标进行盘查,以获取该目标的入库信息,入库过程相当繁琐。因此,如何通过摄像头实现对目标进行快速入库(即将该目标录入到监控系统)的方式亟待解决。
发明内容
本发明实施例提供了一种数据处理方法、数据处理装置、电子设备及存储介质,可实现对目标进行快速入库处理。
本发明实施例第一方面提供了一种数据处理方法,包括:
通过摄像头获取待入库对象的身份信息以及人脸图像;
对所述人脸图像进行特征提取,得到特征参数集;
根据所述身份信息和所述特征参数集进行入库操作,以实现所述待入库对象成为预先存储的注册库中的一员。
结合本发明实施例第一方面,在第一方面的第一种可能实施方式中,所述对所述人脸图像进行特征提取,得到特征参数集,包括:
对所述人脸图像进行特征点提取,得到P个特征点,所述P为大于1的整数;
对所述P个特征点进行筛选,得到Q个特征点,所述Q为小于所述P且大于1的整数;
将所述Q个特征点中每一特征点的方向以及位置作为特征参数,得到所述Q个特征参数;
对所述人脸图像进行轮廓提取,得到K个特征轮廓,并将所述K个特征轮廓作为特征参数,得到所述K个特征参数;
将所述Q个特征参数以及所述K个特征参数合成所述特征参数集。
如此,上述本实施例,从特征点维度出发,对人脸图像进行特征点提取,还对特征点提取后得到的特征点进行筛选,可得到鲁棒性较好的特征点,并选取筛选后的特征点的方向以及位置作为特征参数,相较于现有技术中仅仅依靠特征点个数对人脸图像进行标记,该方式可以准确通过特征点标记人脸图像,另外,又从轮廓角度出发,对人脸图像进行轮廓提取,得到特征轮廓,将其当作特征参数,可将从两个维度得到的特征参数合成特征参数集,通过特征点以及轮廓两个方向对人脸图像进行标记,这样得到的特征参数集更能反映出人脸图像,有利于在后续应用中提高人脸匹配的精度。
结合本发明实施例第一方面的第一种可能实施方式,在第一方面的第二种可能实施方式中,所述对所述P个特征点进行筛选,得到Q个特征点,包括:
确定所述P个特征点的中心特征点;
从所述P个特征点中选取以所述中心特征点为圆心,且处于预设半径范围内的特征点,得到所述Q个特征点。
如此,上述本实施例,可以从待筛选的特征点中选取一个中心特征点,以该中心特征点为圆心,预设半径范围内的特征点,由于中心特征点往往较为稳定,另外,图像中的变化也是渐变的,因而,该中心特征点周围的特征点也较为稳定,通过该方式,可快速实现对特征点进行筛选。
结合本发明实施例第一方面或第一方面的第一种或第二种可能实施方式,在第一方面的第三种可能实施方式中,在所述对所述人脸图像进行特征提取, 得到特征参数集之后,以及所述根据所述身份信息和所述特征参数集进行入库操作之前,所述方法还包括:
根据所述特征参数集在所述注册库中进行搜索,在未搜索到任何匹配结果时,执行所述根据所述身份信息和所述特征参数集进行入库操作的步骤。
如此,上述本实施例中,由于通过摄像头进行入库操作,会存在一种情况,即摄像头拍摄到的人脸图像可能已经存在于注册库中,因而,需要进一步在注册库中进行搜索,若未得到任何搜索结果,则说明上述人脸图像不在注册库中,可对其进行入库操作,以避免同一对象重复入库。
结合本发明实施例第一方面或第一方面的第一种或第二种可能实施方式,在第一方面的第四种可能实施方式中,在所述根据所述身份信息和所述特征参数集进行入库操作之后,所述方法还包括:
获取所述待入库对象的更新信息;
根据所述更新信息对所述待入库对象的入库信息进行更新。
如此,上述本实施例,可在待入库对象入库操作完成后,还可以继续对其入库信息进行更新和完善,因为入库是个快速实现过程,有可能入库时候信息不完善,那么,可先进行入库操作,再在摄像头捕捉到更多信息时,再完善该待入库对象的入库信息,可实现动态更新注册库中的入库信息。
本发明实施例第二方面提供了一种数据处理装置,包括:
第一获取单元,用于通过摄像头获取待入库对象的身份信息以及人脸图像;
提取单元,用于对所述人脸图像进行特征提取,得到特征参数集;
处理单元,用于根据所述身份信息和所述特征参数集进行入库操作,以实现所述待入库对象成为预先存储的注册库中的一员。
结合本发明实施例第二方面,在第二方面的第一种可能实施方式中,所述提取单元包括:
第一提取模块,用于对所述人脸图像进行特征点提取,得到P个特征点,所述P为大于1的整数;
筛选模块,用于对所述P个特征点进行筛选,得到Q个特征点,所述Q为小于所述P且大于1的整数;
第一确定模块,用于将所述Q个特征点中每一特征点的方向以及位置作为特征参数,得到所述Q个特征参数;
第二提取模块,用于对所述人脸图像进行轮廓提取,得到K个特征轮廓,并将所述K个特征轮廓作为特征参数,得到所述K个特征参数;
第二确定模块,用于将所述Q个特征参数以及所述K个特征参数合成所述特征参数集。
结合本发明实施例第二方面的第一种可能实施方式,在第二方面的第二种可能实施方式中,所述筛选模块具体用于:
用于确定所述P个特征点的中心特征点,从所述P个特征点中选取以所述中心特征点为圆心,且处于预设半径范围内的特征点,得到所述Q个特征点。
结合本发明实施例第二方面或第二方面的第一种或第二种可能实施方式,在第二方面的第三种可能实施方式中,所述装置还包括:
搜索单元,用于在所述提取单元对所述人脸图像进行特征提取,得到特征参数集之后,根据所述特征参数集在所述注册库中进行搜索,在所述搜索单元未搜索到任何匹配结果时,由所述处理单元执行根据所述身份信息和所述特征参数集进行入库操作的步骤。
结合本发明实施例第二方面或第二方面的第一种或第二种可能实施方式,在第二方面的第四种可能实施方式中,所述装置还包括:
第二获取单元,用于在所述处理单元根据所述身份信息和所述特征参数集进行入库操作之后,获取所述待入库对象的更新信息;
更新单元,用于根据所述更新信息对所述待入库对象的入库信息进行更新。
本发明实施例第三方面提供了一种电子设备,所述电子设备包括处理器,所述处理器用于执行存储器中存储的计算机程序时实现上述第一方面提供的数据处理方法。
本发明实施例第四方面提供了一种计算机可读存储介质,存储有计算机程序,所述计算机程序被处理器执行以实现如第一方面或第一方面的任一可能实施方式所述的方法。
实施本发明实施例,具有如下有益效果:
可以看出,通过本发明实施例,数据处理装置可通过摄像头获取待入库对象的身份信息以及人脸图像,对人脸图像进行特征提取,得到特征参数集,根据身份信息和特征参数集进行入库操作,以实现待入库对象成为预先存储的注册库中的一员。从而,可利用摄像头获取待入库对象的人脸图像以及身份信息, 进一步对其人脸图像进行特征提取,得到待入库对象的特征参数集,进一步进行入库操作,由于不用对待入库对象进行盘问,而是直接通过摄像头获取,可提高入库效率。例如,在超市中,在监控平台发现可疑对象,由于不在现场无法制止可疑对象,进而,可通过上述本发明实施例对该可疑对象进行入库操作,并通知工作人员对可疑对象的行为进行制止或者事后处理。
附图说明
为了更清楚地说明本发明实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图是本发明的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本发明实施例提供的一种数据处理方法的第一实施例流程示意图;
图2是本发明实施例提供的一种数据处理方法的第二实施例流程示意图;
图3a是本发明实施例提供的一种数据处理装置的第一实施例结构示意图;
图3b是本发明实施例提供的图3a所描述的数据处理装置的提取单元的结构示意图;
图3c是本发明实施例提供的图3a所描述的数据处理装置的又一结构示意图;
图3d是本发明实施例提供的图3a所描述的数据处理装置的又一结构示意图;
图4是本发明实施例提供的一种数据处理装置的第二实施例结构示意图。
具体实施方式
下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明的说明书和权利要求书及所述附图中的术语“第一”、“第二”、“第三”和“第四”等是用于区别不同对象,而不是用于描述特定顺序。此外,术语“包括”和“具有”以及它们任何变形,意图在于覆盖不排他的包含。例如 包含了一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其它步骤或单元。
在本文中提及“实施例”意味着,结合实施例描述的特定特征、结构或特性可以包含在本发明的至少一个实施例中。在说明书中的各个位置展示该短语并不一定均是指相同的实施例,也不是与其它实施例互斥的独立的或备选的实施例。本领域技术人员显式地和隐式地理解的是,本文所描述的实施例可以与其它实施例相结合。
本发明实施例所描述数据处理装置可以包括智能手机(如Android手机、iOS手机、Windows Phone手机等)、平板电脑、掌上电脑、笔记本电脑、移动互联网设备(MID,Mobile Internet Devices)或穿戴式设备等,上述仅是举例,而非穷举,包含但不限于上述装置,当然,上述数据处理装置还可以为服务器。
需要说明的是,本发明实施例中的数据处理装置可与多个摄像头连接,每一摄像头均可用于抓拍视频图像,每一摄像头均可有一个与之对应的位置标记,或者,可有一个与之对应的编号。通常情况下,摄像头可设置在公共场所,例如,学校、博物馆、十字路口、步行街、写字楼、车库、机场、医院、地铁站、车站、公交站台、超市、酒店、娱乐场所等等。摄像头在拍摄到视频图像后,可将该视频图像保存到数据处理装置所在系统的存储器。存储器中可存储有多个图像库,每一图像库可包含同一人的不同视频图像,当然,每一图像库还可以用于存储一个区域的视频图像或者某个指定摄像头拍摄的视频图像。
进一步可选地,本发明实施例中,摄像头拍摄的每一帧视频图像均对应一个属性信息,属性信息为以下至少一种:视频图像的拍摄时间、视频图像的位置、视频图像的属性参数(格式、大小、分辨率等)、视频图像的编号和视频图像中的人物特征属性。上述视频图像中的人物特征属性可包括但不仅限于:视频图像中的人物个数、人物位置、人物角度等等。
进一步需要说明的是,每一摄像头采集的视频图像通常为动态人脸图像,因而,本发明实施例中可以对人脸图像的角度进行分析,上述角度可包括但不仅限于:水平转动角度、俯仰角或者倾斜度。例如,可定义动态人脸图像数据要求两眼间距不小于30像素,建议60像素以上。水平转动角度不超过±30°、俯仰角不超过±20°、倾斜角不超过±45°。建议水平转动角度不超过±15°、 俯仰角不超过±10°、倾斜角不超过±15°。例如,还可对人脸图像是否被其他物体遮挡进行筛选,通常情况下,饰物不应遮挡脸部主要区域,饰物如深色墨镜、口罩和夸张首饰等,当然,也有可能摄像头上面布满灰尘,导致人脸图像被遮挡。本发明实施例中的视频图像的图片格式可包括但不仅限于:BMP,JPEG,JPEG2000,PNG等等,其大小可以在10-30KB之间,每一视频图像还可以对应一个拍摄时间、以及拍摄该视频图像的摄像头统一编号、与人脸图像对应的全景大图的链接等信息(人脸图像和全局图片建立特点对应性关系文件)。
请参阅图1,为本发明实施例提供的一种数据处理方法的第一实施例流程示意图。本实施例中所描述的数据处理方法,包括以下步骤:
101、通过摄像头获取待入库对象的身份信息以及人脸图像。
其中,待入库对象的身份信息可包括但不仅限于:身份证号码、身高、体重、家庭住址、手机号、银行卡号、社交账号、职业等等。在发现待入库对象时,可通过摄像头获取该待入库对象的人脸图像,还可以通过摄像头对该待入库对象的整体图像进行分析,以得到该待入库对象的身高、年龄。进一步地,还可以将该人脸图像发送给其他辅助系统(如:公安系统、银行系统、社保系统等等),由其他系统对该人脸图像进行身份识别,可以进一步获取该待入库对象的身份信息,如:体重、家庭住址、手机号、银行卡号、社交账号、职业等等。
可选地,在执行步骤101时,可按照如下步骤执行:
11)、通过摄像头获取所述待入库对象的M张第一图像,所述M为大于1的整数;
12)、对所述M张第一图像进行图像质量评价,得到所述M个图像质量评价值;
13)、从所述M个图像质量评价值中选取图像质量评价值大于预设质量阈值的图像质量评价值,得到所述N个图像质量评价值,并获取其对应的第一图像,得到所述N张第一图像,所述N为大于1且小于所述M的整数;
14)、从所述N张第一图像中选取最佳人脸角度的第一图像作为所述人脸图像;
15)、根据所述M张第一图像确定所述待入库对象的身份信息。
其中,上述步骤中的第一图像可为待入库相关的图像,不仅仅指人脸图像,还可以是摄像头摄取的其他图像,如:背影图像,侧脸图像等等。上述第一图像可来自于一段视频,可对该一段视频进行分割处理,得到一帧一帧图像,对每一帧图像进行识别,得到与待入库对象相关的一系列图像,当然,该一系列图像中不仅包含待入库对象人脸的图像,还可以包含待入库对象本人但不包含其人脸的图像,例如,待入库对象的侧身,待入库对象的背影。实现中,可先采用目标跟踪算法对视频进行处理,得到对待入库对象相关的一系列图像,然后,可对该待入库对象相关的每一帧图像进行分析,如:对每帧图像进行图像质量评价,选取图像质量较好的图像,进一步地,分析图像质量较好的图像中的每一帧图像中人脸的角度,由于角度不一样,其对应的人脸图像的信息也不一样,通常情况下,正视角度最佳,但摄像头捕捉的话,往往是越接近正视角度的人脸图像越佳,再从这些图像中选取最佳人脸角度的图像。例如,待入库对象在运动过程中,较难捕捉到其人脸图像,因而,可捕捉多张其图像,从这些图像中选取图像质量好的图像,再从图像质量好的图像中选取角度最佳的人脸图像,如此,可选取一张合适的人脸图像作为待入库对象的入库图像,提高了对该待入库对象的辨识准确度,毫无疑问,若选择一张人脸图像不清晰或者角度不好的图像作为入库图像,那么,其在后续使用中,则误识别几率较高。
上述步骤15中,可利用M张第一图像进行分析,得到该待入库对象的身高、体型、脸型等等,将其作为该待入库对象的身份信息。例如,本发明实施例用于商场或超市等监控系统中,在发现某个可疑对象时,可通过摄像头获取该可疑对象的人脸图像,以及通过摄像头获取其相应的身份信息,进而,可及时对该可疑对象进行入库操作。如此,即使不知道某个对象的具体身份信息,也可以对其进行入库处理,以便于系统记录可疑对象,在其下次进行该监控系统的监控范围内,可通知工作人员及时对其进行关注。
其中,上述步骤12中,对所述M张第一图像进行图像质量评价,可采用如下方式:可采用至少一个图像质量评价指标对图像进行图像质量评价,得到图像质量评价值,其中,图像质量评价指标可包括但不仅限于:平均灰度、均方差、熵、边缘保持度、信噪比等等。可定义为得到的图像质量评价值越大,则图像质量越好。
需要说明的是,由于采用单一评价指标对图像质量进行评价时,具有一定的局限性,因此,可采用多个图像质量评价指标对图像质量进行评价,当然,对图像质量进行评价时,并非图像质量评价指标越多越好,因为图像质量评价指标越多,图像质量评价过程的计算复杂度越高,也不见得图像质量评价效果越好,因此,在对图像质量评价要求较高的情况下,可采用2~10个图像质量评价指标对图像质量进行评价。具体地,选取图像质量评价指标的个数及哪个指标,依据具体实现情况而定。当然,也得结合具体地场景选取图像质量评价指标,在暗环境下进行图像质量评价和亮环境下进行图像质量评价选取的图像质量指标可不一样。
可选地,在对图像质量评价精度要求不高的情况下,可用一个图像质量评价指标进行评价,例如,以熵对待处理图像进行图像质量评价值,可认为熵越大,则说明图像质量越好,相反地,熵越小,则说明图像质量越差。
可选地,在对图像质量评价精度要求较高的情况下,可以采用多个图像质量评价指标对待评价图像进行评价,在多个图像质量评价指标对待评价图像进行图像质量评价时,可设置该多个图像质量评价指标中每一图像质量评价指标的权重,可得到多个图像质量评价值,根据该多个图像质量评价值及其对应的权重可得到最终的图像质量评价值,例如,三个图像质量评价指标分别为:A指标、B指标和C指标,A的权重为a1,B的权重为a2,C的权重为a3,采用A、B和C对某一图像进行图像质量评价时,A对应的图像质量评价值为b1,B对应的图像质量评价值为b2,C对应的图像质量评价值为b3,那么,最后的图像质量评价值=a1b1+a2b2+a3b3。通常情况下,图像质量评价值越大,说明图像质量越好。
102、对所述人脸图像进行特征提取,得到特征参数集。
其中,上述数据处理装置可对人脸图像进行特征点提取或者特征轮廓提取,将其作为特征参数集。特征提取的方式可包括但不仅限于:Harris角点检测算法、尺度不变特征(Scale Invariant Feature Transform,SIFT)提取算法、采用分类器进行特征提取,分类器可包括但不仅限于:支持向量机(Support Vector Machine,SVM)、卷积神经网络、级联神经网络、遗传算法等等。当然,在人脸图像不清晰的情况下,也可以对人脸图像进行增强处理,增强处理可包括以下至少一项:平滑处理、灰度拉伸、直方图均衡化,如此,可提升人脸图像的 质量,在此基础上,再对增强后的人脸图像进行特征提取,得到特征参数集,此时,可提取更多的特征。
可选地,上述步骤102中,对所述人脸图像进行特征提取,得到特征参数集,可包括如下步骤:
21)、对所述人脸图像进行特征点提取,得到P个特征点,所述P为大于1的整数;
22)、对所述P个特征点进行筛选,得到Q个特征点,所述Q为小于所述P且大于1的整数;
23)、将所述Q个特征点中每一特征点的方向以及位置作为特征参数,得到所述Q个特征参数;
24)、对所述人脸图像进行轮廓提取,得到K个特征轮廓,并将所述K个特征轮廓作为特征参数,得到所述K个特征参数;
25)、将所述P个特征参数以及所述K个特征参数合成所述特征参数集。
其中,数据处理装置可先对人脸图像进行特征点提取,得到P个特征点,P为大于1的整数,上述P个特征点包括经过预先设置的特征点提取算法对人脸图像进行特征点提取得到的全部特征点,预先设置的特征点提取算法可包括但不仅限于:Harris角点检测算法、尺度不变特征转换算法(Scale-invariant feature transform,SIFT)算法,等等。由于这些特征点鲁棒性不一定好,因而,需要对其进行筛选,即对P个特征点进行筛选,得到Q个特征点,Q为小于P且大于1的整数,主要是过滤掉一些特征不明显的特征点,将Q个特征点中每一特征点的方向以及位置作为特征参数,其目的在于,增强对人脸图像的标记作用,因为位置代表了特征点在人脸图像中的坐标位置,而方向则反映了其在该位置的指示方向,不仅丰富了特征点的特性,还可增强在人脸识别过程中的特征点识别难度,可提升人脸图像的识别精度,因此,采用特征点的方向以及位置更能体现人脸图像的特征,从而,可得到Q个特征参数,其次,对人脸图像进行轮廓提取,得到K个特征轮廓,该K个轮廓为采用预先设置的轮廓提取算法进行轮廓提取得到的全部轮廓,上述预先设置的轮廓提取算法可包括但不仅限于:Hough变换、Haar算子检测算法、Canny算子检测算法等等,可将该K个特征轮廓作为特征参数,得到K个特征参数,将Q个特征参数以及所述K个特征参数合成特征参数集,如此,可采用两个维度对人脸图像进行处理,有 利用提高人脸防伪精度,因为采用两个维度可深层次的对人脸图像的特征进行标记,在人脸识别过程中,不仅需要特征点匹配,而且需要特征轮廓匹配。可选地,上述步骤22中,对所述P个特征点进行筛选,得到Q个特征点,可包括如下步骤:
221)、确定所述P个特征点的中心特征点;
222)、从所述P个特征点中选取以所述中心特征点为圆心,且处于预设半径范围内的特征点,得到所述Q个特征点。
其中,预设半径范围可由用户自行设置或者系统默认,数据处理方法可将P个特征点映射到坐标系中,利用几何方法确定该P个特征点的中心特征点,进而,可从P个特征点中选取以该中心特征点为圆心,预设半径范围内的特征点,得到Q个特征点。当然,上述中心特征点不一定是P个特征点中的一个,也有可能是P个特征点的几何中心,或者,靠近该几何中心的某个特征点,如此,可确定人脸图像中特征显著的特征点,以提升人脸识别的精度和准确度。
103、根据所述身份信息和所述特征参数集进行入库操作,以实现所述待入库对象成为预先存储的注册库中的一员。
其中,数据处理装置可将身份信息和特征参数集作为待入库对象的注册信息保存在数据库中,还可以为待入库对象生成一个代码标识,入库成功后,该待入库对象就成为预先存储的注册库中的一员。按照上述方式,可完成对不同对象的入库操作,方便、快捷。
可以看出,通过本发明实施例,数据处理装置可通过摄像头获取待入库对象的身份信息以及人脸图像,对人脸图像进行特征提取,得到特征参数集,根据身份信息和特征参数集进行入库操作,以实现待入库对象成为预先存储的注册库中的一员。从而,可利用摄像头获取待入库对象的人脸图像以及身份信息,进一步对其人脸图像进行特征提取,得到待入库对象的特征参数集,进一步进行入库操作,由于不用对待入库对象进行盘问,而是直接通过摄像头获取,可提高入库效率。例如,在超市中,在监控平台发现可疑对象,由于不在现场无法制止可疑对象,进而,可通过上述本发明实施例对该可疑对象进行入库操作,并通知工作人员对可疑对象的行为进行制止或者事后处理。
与上述一致地,请参阅图2,为本发明实施例提供的一种数据处理方法的 第二实施例流程示意图。本实施例中所描述的数据处理方法,包括以下步骤:
201、通过摄像头获取待入库对象的身份信息以及人脸图像。
202、对所述人脸图像进行特征提取,得到特征参数集。
203、根据所述特征参数集在预先存储的注册库中进行搜索。
其中,数据处理装置在根据人脸图像确定特征参数集之后,可根据该特征参数集在注册库中进行搜索,其目的在于,以免出现重复注册,如此,在搜索到匹配的特征参数集,则停止注册,在未搜索到任何结果时,执行步骤204。
204、在未搜索到任何匹配结果时,根据所述身份信息和所述特征参数集进行入库操作,以实现所述待入库对象成为所述注册库中的一员。
其中,在未搜索到任何匹配结果,说明注册库中不存在该待入库对象的资料,从而,可根据该身份信息以及特征参数集进行入库操作,以实现待入库对象成为预先存储的注册库中的一员。
205、获取所述待入库对象的更新信息。
其中,在上述步骤204之后,上述更新信息可为手机号(例如,换了新的手机号)、银行卡号(例如,办理了新的银行卡号)、家庭住址(例如,搬了新家的地址)等等。数据处理装置可与其他系统进行关联,该其他系统可以是公安系统、银行系统、社保系统、运营商系统等等。如此,数据处理装置可获取预设时间间隔或者在每次其他系统信息更新时获取待入库对象的更新信息。
206、根据所述更新信息对所述待入库对象的入库信息进行更新。
其中,数据处理装置可接收用户输入的更新信息,进而,利用更新信息更新对应的原本待入库对象的入库信息,以达到更新的目的。例如,在超市中,在监控平台发现可疑对象,由于不在现场无法制止可疑对象,在制止了可疑对象之后,则可由工作人员对该可疑对象进行盘问,获得该可疑对象更多的身份信息,从而,通过该身份信息对原本系统中的入库信息进行完善。
可以看出,通过本发明实施例,数据处理装置可获取待入库对象的身份信息以及人脸图像,对人脸图像进行特征提取,得到特征参数集,根据特征参数集在注册库中进行搜索,在未搜索到任何匹配结果时,根据身份信息和特征参数集进行入库操作,以实现待入库对象成为预先存储的注册库中的一员,在此之后,还可以获取待入库对象的更新信息,根据更新信息对待入库对象的入库信息进行更新。从而,不仅可通过人脸图像的特征参数集进行校验待入库对象 是否被注册,进而,根据待入库对象的身份信息以及人脸图像的的特征参数集进行入库操作,还可以在获取到了该待入库对象更多的身份信息之后,及时对待入库对象的信息进行更新,提高了入库效率以及管理效率。
与上述一致地,以下为实施上述数据处理方法的装置,具体如下:
请参阅图3a,为本发明实施例提供的一种数据处理装置的第一实施例结构示意图。本实施例中所描述的数据处理装置,包括:第一获取单元301、提取302和处理单元303,具体如下:
第一获取单元301,用于通过摄像头获取待入库对象的身份信息以及人脸图像;
提取单元302,用于对所述人脸图像进行特征提取,得到特征参数集;
处理单元303,用于根据所述身份信息和所述特征参数集进行入库操作,以实现所述待入库对象成为预先存储的注册库中的一员。
可选地,第一获取单元301可包含:图像获取模块(图中未标出)、图像质量评价模块(图中未标出)、图像选取模块(图中未标出)和身份信息确定模块(图中未标出),具体如下:
图像获取模块,用于通过摄像头获取所述待入库对象的M张第一图像,所述M为大于1的整数;
图像质量评价模块,用于对所述M张第一图像进行图像质量评价,得到所述M个图像质量评价值;
图像选取模块,用于从所述M个图像质量评价值中选取图像质量评价值大于预设质量阈值的图像质量评价值,得到所述N个图像质量评价值,并获取其对应的第一图像,得到所述N张第一图像,所述N为大于1且小于所述M的整数;
所述图像选取模块,还用于从所述N张第一图像中选取最佳人脸角度的第一图像作为所述人脸图像;
身份信息确定模块,用于根据所述M张第一图像确定所述待入库对象的身份信息。
可选地,如图3b,图3b为图3a所描述的数据处理装置的提取单元302的具体细化结构,所述提取单元包括:第一提取模块3021、筛选模块3022、第一 确定模块3023、第二提取模块3024和第二确定模块3025,具体如下:
第一提取模块3021,用于对所述人脸图像进行特征点提取,得到P个特征点,所述P为大于1的整数;
筛选模块3022,用于对所述P个特征点进行筛选,得到Q个特征点,所述Q为小于所述P且大于1的整数;
第一确定模块3023,用于将所述Q个特征点中每一特征点的方向以及位置作为特征参数,得到所述Q个特征参数;
第二提取模块3024,用于对所述人脸图像进行轮廓提取,得到K个特征轮廓,并将所述K个特征轮廓作为特征参数,得到所述K个特征参数;
第二确定模块3025,用于将所述Q个特征参数以及所述K个特征参数合成所述特征参数集。
可选地,所述筛选模块3022具体用于:
用于确定所述P个特征点的中心特征点,从所述P个特征点中选取以所述中心特征点为圆心,且处于预设半径范围内的特征点,得到所述Q个特征点。
可选地,如图3c,图3c为图3a所描述的数据处理装置的又一变型结构,其与图3a相比较,还可以包括搜索单元304,具体如下:
搜索单元304,用于在所述提取单元302对所述人脸图像进行特征提取,得到特征参数集之后,根据所述特征参数集在所述注册库中进行搜索,在所述搜索单元未搜索到任何匹配结果时,由所述处理单元303执行根据所述身份信息和所述特征参数集进行入库操作的步骤。
可选地,如图3d,图3d为图3a所描述的数据处理装置的又一变型结构,其与图3a相比较,还可以包括第二获取单元305和更新单元306,具体如下:
第二获取单元305,用于在所述处理单元303根据所述身份信息和所述特征参数集进行入库操作之后,获取所述待入库对象的更新信息;
更新单元306,用于根据所述更新信息对所述待入库对象的入库信息进行更新。
可以看出,通过本发明实施例,数据处理装置可通过摄像头获取待入库对象的身份信息以及人脸图像,对人脸图像进行特征提取,得到特征参数集,根据特征参数集在注册库中进行搜索,在未搜索到任何匹配结果时,根据身份信息和特征参数集进行入库操作,以实现待入库对象成为预先存储的注册库中的 一员。从而,可利用摄像头获取待入库对象的人脸图像以及身份信息,进一步对其人脸图像进行特征提取,得到待入库对象的特征参数集,进一步进行入库操作,由于不用对待入库对象进行盘问,而是直接通过摄像头获取,可提高入库效率。例如,在超市中,在监控平台发现可疑对象,由于不在现场无法制止可疑对象,进而,可通过上述本发明实施例对该可疑对象进行入库操作,并通知工作人员对可疑对象的行为进行制止或者事后处理。
与上述一致地,请参阅图4,为本发明实施例提供的一种数据处理装置的第二实施例结构示意图。本实施例中所描述的数据处理装置,包括:至少一个输入设备1000;至少一个输出设备2000;至少一个处理器3000,例如CPU;和存储器4000,上述输入设备1000、输出设备2000、处理器3000和存储器4000通过总线5000连接。
其中,上述输入设备1000具体可为触控面板、物理按键或者鼠标。
上述输出设备2000具体可为显示屏。
上述存储器4000可以是高速RAM存储器,也可为非易失存储器(non-volatile memory),例如磁盘存储器。上述存储器4000用于存储一组程序代码,上述输入设备1000、输出设备2000和处理器3000用于调用存储器4000中存储的程序代码,执行如下操作:
上述处理器3000,用于:
通过摄像头获取待入库对象的身份信息以及人脸图像;
对所述人脸图像进行特征提取,得到特征参数集;
根据所述身份信息和所述特征参数集进行入库操作,以实现所述待入库对象成为预先存储的注册库中的一员。
可选地,上述处理器3000,对所述人脸图像进行特征提取,得到特征参数集,包括:
对所述人脸图像进行特征点提取,得到P个特征点,所述P为大于1的整数;
对所述P个特征点进行筛选,得到Q个特征点,所述Q为小于所述P且大于1的整数;
将所述Q个特征点中每一特征点的方向以及位置作为特征参数,得到所述 Q个特征参数;
对所述人脸图像进行轮廓提取,得到K个特征轮廓,并将所述K个特征轮廓作为特征参数,得到所述K个特征参数;
将所述Q个特征参数以及所述K个特征参数合成所述特征参数集。
可选地,上述处理器3000,对所述P个特征点进行筛选,得到Q个特征点,包括:
确定所述P个特征点的中心特征点;
从所述P个特征点中选取以所述中心特征点为圆心,且处于预设半径范围内的特征点,得到所述Q个特征点。
可选地,上述处理器3000,在所述对所述人脸图像进行特征提取,得到特征参数集之后,以及所述根据所述身份信息和所述特征参数集进行入库操作之前,还具体用于:
根据所述特征参数集在所述注册库中进行搜索,在未搜索到任何匹配结果时,执行所述根据所述身份信息和所述特征参数集进行入库操作的步骤。
可选地,上述处理器3000,在所述根据所述身份信息和所述特征参数集进行入库操作之后,还具体用于:
获取所述待入库对象的更新信息;
根据所述更新信息对所述待入库对象的入库信息进行更新。
本发明实施例还提供一种电子设备,所述电子设备包括处理器,所述处理器用于执行存储器中存储的计算机程序时实现上述的数据处理方法。
本发明实施例还提供一种计算机存储介质,其中,该计算机存储介质可存储有程序,该程序执行时包括上述方法实施例中记载的任何一种数据处理方法的部分或全部步骤。
尽管在此结合各实施例对本发明进行了描述,然而,在实施所要求保护的本发明过程中,本领域技术人员通过查看所述附图、公开内容、以及所附权利要求书,可理解并实现所述公开实施例的其他变化。在权利要求中,“包括”(comprising)一词不排除其他组成部分或步骤,“一”或“一个”不排除多个的情况。单个处理器或其他单元可以实现权利要求中列举的若干项功能。相互不同的从属权利要求中记载了某些措施,但这并不表示这些措施不能组合起来产生良好的效果。
本领域技术人员应明白,本发明的实施例可提供为方法、装置(设备)、或计算机程序产品。因此,本发明可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。计算机程序存储/分布在合适的介质中,与其它硬件一起提供或作为硬件的一部分,也可以采用其他分布形式,如通过Internet或其它有线或无线电信系统。
本发明是参照本发明实施例的方法、装置(设备)和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
尽管结合具体特征及其实施例对本发明进行了描述,显而易见的,在不脱离本发明的精神和范围的情况下,可对其进行各种修改和组合。相应地,本说明书和附图仅仅是所附权利要求所界定的本发明的示例性说明,且视为已覆盖本发明范围内的任意和所有修改、变化、组合或等同物。显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。

Claims (11)

  1. A data processing method, comprising:
    acquiring, by a camera, identity information and a face image of an object to be enrolled;
    performing feature extraction on the face image to obtain a feature parameter set;
    performing an enrollment operation according to the identity information and the feature parameter set, so that the object to be enrolled becomes a member of a pre-stored registration database.
  2. The method according to claim 1, wherein performing feature extraction on the face image to obtain a feature parameter set comprises:
    performing feature point extraction on the face image to obtain P feature points, P being an integer greater than 1;
    filtering the P feature points to obtain Q feature points, Q being an integer greater than 1 and smaller than P;
    taking the direction and the position of each of the Q feature points as feature parameters to obtain Q feature parameters;
    performing contour extraction on the face image to obtain K feature contours, and taking the K feature contours as feature parameters to obtain K feature parameters;
    combining the Q feature parameters and the K feature parameters into the feature parameter set.
  3. The method according to claim 2, wherein filtering the P feature points to obtain Q feature points comprises:
    determining a central feature point of the P feature points;
    selecting, from the P feature points, the feature points centred on the central feature point and within a preset radius, to obtain the Q feature points.
  4. The method according to any one of claims 1 to 3, wherein after performing feature extraction on the face image to obtain a feature parameter set, and before performing the enrollment operation according to the identity information and the feature parameter set, the method further comprises:
    searching the registration database according to the feature parameter set, and performing the step of the enrollment operation according to the identity information and the feature parameter set when no matching result is found.
  5. The method according to any one of claims 1 to 3, wherein after performing the enrollment operation according to the identity information and the feature parameter set, the method further comprises:
    acquiring update information of the object to be enrolled;
    updating the enrollment information of the object to be enrolled according to the update information.
  6. A data processing apparatus, comprising:
    a first acquiring unit configured to acquire, by a camera, identity information and a face image of an object to be enrolled;
    an extracting unit configured to perform feature extraction on the face image to obtain a feature parameter set;
    a processing unit configured to perform an enrollment operation according to the identity information and the feature parameter set, so that the object to be enrolled becomes a member of a pre-stored registration database.
  7. The apparatus according to claim 6, wherein the extracting unit comprises:
    a first extraction module configured to perform feature point extraction on the face image to obtain P feature points, P being an integer greater than 1;
    a screening module configured to filter the P feature points to obtain Q feature points, Q being an integer greater than 1 and smaller than P;
    a first determining module configured to take the direction and the position of each of the Q feature points as feature parameters to obtain Q feature parameters;
    a second extraction module configured to perform contour extraction on the face image to obtain K feature contours, and to take the K feature contours as feature parameters to obtain K feature parameters;
    a second determining module configured to combine the Q feature parameters and the K feature parameters into the feature parameter set.
  8. The apparatus according to claim 7, wherein the screening module is specifically configured to:
    determine a central feature point of the P feature points, and select, from the P feature points, the feature points centred on the central feature point and within a preset radius, to obtain the Q feature points.
  9. The apparatus according to any one of claims 6 to 8, wherein the apparatus further comprises:
    a searching unit configured to search the registration database according to the feature parameter set after the extracting unit performs feature extraction on the face image to obtain the feature parameter set; when the searching unit finds no matching result, the processing unit performs the step of the enrollment operation according to the identity information and the feature parameter set.
  10. An electronic device, comprising a processor, wherein the processor, when executing a computer program stored in a memory, implements the data processing method according to any one of claims 1 to 5.
  11. A computer-readable storage medium storing a computer program, the computer program being executed by a processor to implement the data processing method according to any one of claims 1 to 5.
PCT/CN2018/079370 2017-05-18 2018-03-16 数据处理方法、数据处理装置、电子设备及存储介质 WO2018210047A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710351946.6A CN107169458B (zh) 2017-05-18 2017-05-18 数据处理方法、装置及存储介质
CN201710351946.6 2017-05-18

Publications (1)

Publication Number Publication Date
WO2018210047A1 true WO2018210047A1 (zh) 2018-11-22

Family

ID=59816193

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/079370 WO2018210047A1 (zh) 2017-05-18 2018-03-16 数据处理方法、数据处理装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN107169458B (zh)
WO (1) WO2018210047A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840885A (zh) * 2018-12-27 2019-06-04 深圳云天励飞技术有限公司 图像融合方法及相关产品

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107169458B (zh) * 2017-05-18 2018-04-06 深圳云天励飞技术有限公司 数据处理方法、装置及存储介质
CN108416902B (zh) * 2018-02-28 2021-11-26 成都好享你网络科技有限公司 基于差异识别的实时物体识别方法和装置
CN108805800A (zh) * 2018-04-24 2018-11-13 北京嘀嘀无限科技发展有限公司 图片处理方法、装置及存储介质
CN108733819B (zh) * 2018-05-22 2021-07-06 深圳云天励飞技术有限公司 一种人员档案建立方法和装置
CN108921097B (zh) * 2018-07-03 2022-08-23 深圳市未来感知科技有限公司 人眼视角检测方法、装置及计算机可读存储介质
CN109784274B (zh) * 2018-12-29 2021-09-14 杭州励飞软件技术有限公司 识别尾随的方法及相关产品
CN109754461A (zh) * 2018-12-29 2019-05-14 深圳云天励飞技术有限公司 图像处理方法及相关产品
CN109685040B (zh) * 2019-01-15 2021-06-29 广州唯品会研究院有限公司 形体数据的测量方法、装置以及计算机可读存储介质
CN113792662A (zh) * 2021-09-15 2021-12-14 北京市商汤科技开发有限公司 图像检测方法、装置、电子设备以及存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040199775A1 (en) * 2001-05-09 2004-10-07 Wee Ser Method and device for computer-based processing a template minutia set of a fingerprint and a computer readable storage medium
CN102004908A (zh) * 2010-11-30 2011-04-06 汉王科技股份有限公司 一种自适应的人脸识别方法及装置
CN103942705A (zh) * 2014-03-25 2014-07-23 惠州Tcl移动通信有限公司 一种基于人脸识别的广告分类匹配推送方法及系统
CN107169458A (zh) * 2017-05-18 2017-09-15 深圳云天励飞技术有限公司 数据处理方法、装置及存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4557184B2 (ja) * 2008-04-23 2010-10-06 村田機械株式会社 画像処理装置、画像読取装置及び画像処理プログラム
CN101493891B (zh) * 2009-02-27 2011-08-31 天津大学 基于sift的具有镜面翻转不变性的特征提取和描述方法
CN101661618A (zh) * 2009-06-05 2010-03-03 天津大学 具有翻转不变性的图像特征提取和描述方法
CN101770613A (zh) * 2010-01-19 2010-07-07 北京智慧眼科技发展有限公司 基于人脸识别和活体检测的社保身份认证方法
CN102236675B (zh) * 2010-04-30 2013-11-06 华为技术有限公司 图像特征点匹配对处理、图像检索方法及设备
CN104599286B (zh) * 2013-10-31 2018-11-16 展讯通信(天津)有限公司 一种基于光流的特征跟踪方法及装置
CN104077596A (zh) * 2014-06-18 2014-10-01 河海大学 一种无标志物跟踪注册方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040199775A1 (en) * 2001-05-09 2004-10-07 Wee Ser Method and device for computer-based processing a template minutia set of a fingerprint and a computer readable storage medium
CN102004908A (zh) * 2010-11-30 2011-04-06 汉王科技股份有限公司 一种自适应的人脸识别方法及装置
CN103942705A (zh) * 2014-03-25 2014-07-23 惠州Tcl移动通信有限公司 一种基于人脸识别的广告分类匹配推送方法及系统
CN107169458A (zh) * 2017-05-18 2017-09-15 深圳云天励飞技术有限公司 数据处理方法、装置及存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YANG, QIUJU ET AL.: "Improved SIFT Algorithm Based on Canny Feature Points", COMPUTER ENGINEERING AND DESIGN, vol. 32, no. 7, 16 July 2011 (2011-07-16), pages 2428 - 2430 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109840885A (zh) * 2018-12-27 2019-06-04 深圳云天励飞技术有限公司 图像融合方法及相关产品
CN109840885B (zh) * 2018-12-27 2023-03-14 深圳云天励飞技术有限公司 图像融合方法及相关产品

Also Published As

Publication number Publication date
CN107169458A (zh) 2017-09-15
CN107169458B (zh) 2018-04-06

Similar Documents

Publication Publication Date Title
WO2018210047A1 (zh) 数据处理方法、数据处理装置、电子设备及存储介质
CN109961009B (zh) 基于深度学习的行人检测方法、系统、装置及存储介质
WO2019218824A1 (zh) 一种移动轨迹获取方法及其设备、存储介质、终端
CN109255352B (zh) 目标检测方法、装置及系统
WO2020125216A1 (zh) 一种行人重识别方法、装置、电子设备及计算机可读存储介质
WO2018113523A1 (zh) 一种图像处理方法及装置、存储介质
CN109815843B (zh) 图像处理方法及相关产品
CN109766779B (zh) 徘徊人员识别方法及相关产品
US9754192B2 (en) Object detection utilizing geometric information fused with image data
CN106650662B (zh) 目标对象遮挡检测方法及装置
CN108256404B (zh) 行人检测方法和装置
WO2021139324A1 (zh) 图像识别方法、装置、计算机可读存储介质及电子设备
CN109740444B (zh) 人流量信息展示方法及相关产品
WO2018014828A1 (zh) 识别二维码位置的方法及其系统
WO2019033572A1 (zh) 人脸遮挡检测方法、装置及存储介质
US20150339536A1 (en) Collaborative text detection and recognition
WO2019033569A1 (zh) 眼球动作分析方法、装置及存储介质
TW202026948A (zh) 活體檢測方法、裝置以及儲存介質
WO2020056914A1 (zh) 人群热力图获得方法、装置、电子设备及可读存储介质
CN109840885B (zh) 图像融合方法及相关产品
CN109740415A (zh) 车辆属性识别方法及相关产品
CN111008935B (zh) 一种人脸图像增强方法、装置、系统及存储介质
JP2015106197A (ja) 画像処理装置、画像処理方法
CN109815839B (zh) 微服务架构下的徘徊人员识别方法及相关产品
WO2018210039A1 (zh) 数据处理方法、数据处理装置及存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18803197

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18803197

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS (EPO FORM 1205A DATED 16.03.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18803197

Country of ref document: EP

Kind code of ref document: A1