WO2021077984A1 - Object recognition method, apparatus, electronic device and readable storage medium - Google Patents

Object recognition method, apparatus, electronic device and readable storage medium

Info

Publication number
WO2021077984A1
WO2021077984A1 (PCT/CN2020/117764)
Authority
WO
WIPO (PCT)
Prior art keywords
image
binary
mask
occluded
recognized
Prior art date
Application number
PCT/CN2020/117764
Other languages
English (en)
French (fr)
Inventor
宋凌雪
龚迪洪
李志锋
刘威
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Publication of WO2021077984A1 publication Critical patent/WO2021077984A1/zh
Priority to US17/520,612 priority Critical patent/US20220058426A1/en

Classifications

    • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06F 18/2413 - Classification techniques relating to the classification model, based on distances to training or reference patterns
    • G06F 16/53 - Information retrieval of still image data; querying
    • G06F 16/5854 - Retrieval of still image data characterised by metadata automatically derived from the content, using shape and object relationship
    • G06F 18/214 - Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 18/2163 - Partitioning the feature space
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G06F 18/24 - Classification techniques
    • G06F 18/25 - Fusion techniques
    • G06F 18/28 - Determining representative reference patterns, e.g. by averaging or distorting; generating dictionaries
    • G06N 3/045 - Neural networks; combinations of networks
    • G06N 3/08 - Neural networks; learning methods
    • G06V 10/28 - Image preprocessing; quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G06V 10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 10/757 - Image or video pattern matching; matching configurations of points or features
    • G06V 10/759 - Image or video pattern matching; region-based matching
    • G06V 40/172 - Human faces, e.g. facial parts, sketches or expressions; classification, e.g. identification

Definitions

  • This application relates to artificial intelligence technology, and in particular to an artificial intelligence-based object recognition method, device, electronic equipment, and computer-readable storage medium.
  • AI: Artificial Intelligence.
  • Deep learning is an interdisciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithm complexity theory, and other disciplines. It studies how computers can simulate or implement human learning behaviors in order to acquire new knowledge or skills and reorganize existing knowledge structures so as to continuously improve their own performance. Deep learning usually includes techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
  • The embodiments of the present application provide an artificial intelligence-based object recognition method, device, and computer-readable storage medium, which can maintain the recognition accuracy for non-occluded objects while improving the recognition accuracy for partially occluded objects.
  • The embodiment of the present application provides an artificial intelligence-based object recognition method; the method is executed by an electronic device and includes:
  • detecting a potential occlusion area of the object to be recognized in the image to be recognized, to obtain a binary image that characterizes the occluded area and the unoccluded area of the object to be recognized;
  • acquiring, from the binary image, occluded binary image blocks that characterize the occluded area;
  • querying, based on each occluded binary image block, the mapping relationship between occluded binary image blocks and binary masks included in a binary mask dictionary, to obtain the binary mask of the occluded binary image block;
  • synthesizing the binary masks queried for the occluded binary image blocks, to obtain a binary mask corresponding to the binary image; and
  • determining the matching relationship between the image to be recognized and a pre-stored object image based on the binary mask corresponding to the binary image, the features of the pre-stored object image, and the features of the image to be recognized.
  • The embodiment of the present application provides an artificial intelligence-based object recognition device, including:
  • an occlusion detection module, configured to detect the potential occlusion area of the object to be recognized in the image to be recognized, to obtain a binary image that characterizes the occluded area and the unoccluded area of the object to be recognized;
  • an occluded binary image block acquisition module, configured to acquire, from the binary image, occluded binary image blocks that characterize the occluded area;
  • a binary mask query module, configured to query, based on each occluded binary image block, the mapping relationship between occluded binary image blocks and binary masks included in the binary mask dictionary, to obtain the binary mask of the occluded binary image block;
  • a binary mask synthesis module, configured to synthesize the binary masks queried for the occluded binary image blocks, to obtain a binary mask corresponding to the binary image; and
  • a matching relationship determination module, configured to determine the matching relationship between the image to be recognized and a pre-stored target image based on the binary mask corresponding to the binary image, the features of the pre-stored target image, and the features of the image to be recognized.
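As an illustrative sketch only (not the patent's actual implementation), the module flow above can be mimicked in a few lines of NumPy; the dictionary contents, feature dimension, and position numbers below are invented for the example. Features are flat vectors, and the dictionary maps an occluded-block position number to a binary mask over feature elements:

```python
import numpy as np

def synthesize_mask(occluded_blocks, mask_dictionary, feat_dim):
    """Binary mask synthesis: an element is kept (1) only if every
    per-block mask queried from the dictionary keeps it."""
    mask = np.ones(feat_dim)
    for block in occluded_blocks:
        mask = mask * mask_dictionary[block]
    return mask

def masked_match(probe_feat, gallery_feat, mask):
    """Matching step: cosine similarity with occlusion-damaged elements
    suppressed in BOTH the probe and the pre-stored gallery feature."""
    p, g = probe_feat * mask, gallery_feat * mask
    return float(p @ g / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-12))

rng = np.random.default_rng(0)
feat_dim = 8
# Hypothetical dictionary: position number -> binary mask over feature elements.
mask_dict = {11: np.array([1, 1, 0, 0, 1, 1, 1, 1.0]),
             7:  np.array([1, 1, 1, 1, 0, 0, 1, 1.0])}
probe = rng.normal(size=feat_dim)
gallery = rng.normal(size=feat_dim)
m = synthesize_mask([11, 7], mask_dict, feat_dim)
score = masked_match(probe, gallery, m)
```

The key design point mirrored here is that the same mask is applied to both the probe and the pre-stored features before the similarity is computed.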
  • An embodiment of the present application provides an electronic device, and the electronic device includes:
  • a memory configured to store executable instructions for implementing the artificial intelligence-based object recognition method provided in the embodiments of the present application; and
  • a processor configured to execute the executable instructions stored in the memory to implement the artificial intelligence-based object recognition method provided in the embodiments of the present application.
  • An embodiment of the present application provides a computer-readable storage medium that stores executable instructions.
  • When the executable instructions are executed by a processor, the artificial intelligence-based object recognition method provided in the embodiments of the present application is implemented.
  • The artificial intelligence-based object recognition method distinguishes the occluded area from the unoccluded area in the image to be recognized and obtains the binary mask of the occluded area, so that image recognition is performed based on the binary mask, the image to be recognized, and the pre-stored images. In this way, when the object to be recognized is occluded, the influence of the feature elements damaged by the occluded area is suppressed, and the recognition accuracy for occluded objects is greatly improved.
  • FIG. 1 is a schematic diagram of occlusion recognition through a mask network in the related art
  • Fig. 2 is a schematic diagram of an application scenario of an artificial intelligence-based object recognition system provided by an embodiment of the present application
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Fig. 4 is a schematic flowchart of an artificial intelligence-based object recognition method provided by an embodiment of the present application.
  • FIG. 5A-5D are schematic flowcharts of an artificial intelligence-based object recognition method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a process of performing object recognition in an artificial intelligence-based object recognition system provided by an embodiment of the present application
  • FIG. 7 is a schematic diagram of face image segmentation in an artificial intelligence-based object recognition method provided by an embodiment of the present application.
  • Fig. 8 is a schematic structural diagram of a paired differential twin network in an artificial intelligence-based object recognition method provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of the calculation process of each index item M j in the binary mask dictionary of the artificial intelligence-based object recognition method provided by an embodiment of the present application.
  • FIG. 10 is a schematic flow chart of synthesizing a binary mask M of a face image to be recognized in the artificial intelligence-based object recognition method provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of feature extraction in an artificial intelligence-based object recognition method provided by an embodiment of the present application.
  • Fig. 12 is a schematic diagram of model construction of an artificial intelligence-based object recognition method provided by an embodiment of the present application.
  • The terms "first" and "second" below merely distinguish similar objects and do not denote a specific order for those objects. It is understood that, where permitted, the specific order or sequence of "first" and "second" may be interchanged, so that the embodiments of the present application described herein can be implemented in an order other than that illustrated or described herein.
  • Convolution feature f(·): herein, the output of a convolutional layer of a convolutional neural network, usually a three-dimensional tensor with C channels, height H, and width W, that is, f(·) ∈ R^(C×H×W).
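For instance, a convolution feature with C = 256 channels and 7×7 spatial resolution (the shape here is purely illustrative) can be represented as a three-dimensional array:

```python
import numpy as np

# A convolution feature f(·) as defined above: a C x H x W tensor.
feat = np.zeros((256, 7, 7))
C, H, W = feat.shape  # 256 channels, height 7, width 7
```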
  • FIG. 1 is a schematic diagram of occlusion recognition through a mask network in the related art.
  • a mask network module is embedded in the middle layer of the basic convolutional neural network to form a recognition network.
  • The module uses two convolutional layers to learn a set of weights M(i,j) directly from the input object image: the input image passes through a convolutional layer for feature extraction followed by max pooling, and then through a second convolutional layer and max-pooling stage, yielding the weights M(i,j). Each weight is multiplied by the feature at the corresponding spatial position of the convolution feature in the basic convolutional network. Through end-to-end training, the module learns to output higher weights for useful features and lower weights for features damaged by occlusion, thereby reducing the impact of occlusion.
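The spatial weighting just described can be sketched as a NumPy toy (shapes invented for the example): the single weight M(i,j) multiplies the feature elements of every channel at position (i,j), which is exactly the channel-shared behavior criticized below:

```python
import numpy as np

# Toy convolution feature U with C=4 channels and 3x3 spatial resolution,
# and a learned spatial weight map M(i, j) of shape H x W.
feat = np.random.default_rng(4).normal(size=(4, 3, 3))
M = np.random.default_rng(5).random(size=(3, 3))

# Weighted feature V: the SAME weight M(i, j) is applied to all channels
# at spatial position (i, j) -- no differentiation along the channel axis.
V = feat * M[None, :, :]
```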
  • The mask network branch of this scheme outputs the same weight for the feature elements of all channels at a given spatial position of the convolution feature; that is, it assumes that the feature elements of each channel of the convolution feature are affected by occlusion in the same way. As shown in Figure 1, the original feature U is transformed into the weighted feature V, and along the channel dimension the feature elements receive no differentiated weighting.
  • However, the analysis and experimental verification in this application have found that, even for the same spatial position on the convolution feature, the changes of the feature element values of the different channels at that position under occlusion differ considerably.
  • The related technical solutions are therefore flawed in principle. Moreover, in the application scenario of an object recognition system, recognition is usually performed by calculating the similarity between the feature of the object to be recognized and the feature of each object in the database.
  • The idea of the scheme shown in Figure 1 is only to reduce the influence of the occluded part on the feature of the occluded object to be recognized.
  • Taking a test object wearing sunglasses as an example, this solution only makes the sunglasses region affect the features of the test object as little as possible, while the images in the database contain no occlusion. Under the network structure of this scheme, the database object still retains the features of the region that is occluded by the sunglasses in the test image. Therefore, when the similarity is calculated, that region still causes a strong inconsistency, so the influence of the occluded region in fact persists.
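A toy NumPy example of this inconsistency effect (invented features and mask, not data from the patent): down-weighting only the probe's corrupted elements still lets the gallery's corresponding elements enter the similarity, whereas zeroing those elements on both sides removes the mismatch entirely:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
clean = rng.normal(size=d)              # gallery feature, extracted without occlusion
probe = clean.copy()
probe[2:4] = rng.normal(size=2) * 3.0   # elements corrupted by a simulated occluder

mask = np.array([1, 1, 0, 0, 1, 1.0])   # binary mask suppressing the corrupted elements

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Related-art style: mask only the probe; the gallery keeps its elements 2-3.
weight_probe_only = cos(probe * mask, clean)
# This application's idea: mask both sides before comparing.
mask_both_sides = cos(probe * mask, clean * mask)
```

Since the probe differs from the gallery only in the masked elements, masking both sides recovers a perfect match, while masking only the probe does not.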
  • The problem to be solved in this application is to build, on top of a deep convolutional network that performs well in general recognition scenes (no occlusion or little occlusion), an object recognition system that is robust to occlusion.
  • To this end, a paired differential twin network structure is proposed to explicitly learn the mapping relationship between occluded areas and the feature elements damaged by the occlusion. Based on this mapping relationship, a binary mask dictionary is established; each index item in the dictionary indicates the feature elements that are most affected when a certain area is occluded. From this dictionary, the feature elements that should be removed under any occlusion condition can be obtained, and the response values of these elements can be suppressed during recognition, thereby achieving robustness to occlusion.
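A hedged sketch of how one index item of such a dictionary might be derived (the select-by-ratio rule and the averaging over sample pairs are assumptions made for illustration, not the patent's exact procedure): elements whose values change most between clean and occluded samples of the same area are marked for suppression:

```python
import numpy as np

def build_index_item(clean_feats, occluded_feats, suppress_ratio=0.25):
    """One dictionary index item for one occluded area: a binary mask that
    zeroes the feature elements whose values change most under that occlusion."""
    diff = np.abs(clean_feats - occluded_feats).mean(axis=0)
    k = int(len(diff) * suppress_ratio)
    damaged = np.argsort(diff)[-k:]   # indices of the most affected elements
    mask = np.ones(len(diff))
    mask[damaged] = 0.0               # 0 = suppress this element during recognition
    return mask

rng = np.random.default_rng(3)
clean = rng.normal(size=(100, 8))
occluded = clean.copy()
occluded[:, :2] += rng.normal(size=(100, 2)) * 5.0  # occlusion corrupts elements 0 and 1
item = build_index_item(clean, occluded, suppress_ratio=0.25)
```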
  • The embodiments of the application provide an artificial intelligence-based object recognition method, device, electronic device, and computer-readable storage medium, which can suppress the influence of the feature elements damaged by the occluded area when the object to be recognized is occluded, so that the recognition accuracy for occluded objects is greatly improved.
  • the following describes exemplary applications of the artificial intelligence-based object recognition device provided in the embodiment of the application.
  • The electronic device provided in the embodiments of the application can be implemented as various types of user terminals such as a laptop computer, a tablet computer, a desktop computer, a set-top box, or a mobile device (for example, a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, or a portable game device), and can also be implemented as a server.
  • an exemplary application when the device is implemented as a server will be explained.
  • The server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • the terminal 400 may be a smart phone, a tablet computer, a notebook computer, a desktop computer, a smart speaker, a smart watch, etc., but is not limited to this.
  • The terminal and the server can be directly or indirectly connected through wired or wireless communication, which is not limited in the embodiments of the present application.
  • FIG. 2 is a schematic diagram of an application scenario of an artificial intelligence-based object recognition system provided by an embodiment of the present application.
  • the object recognition system 100 also includes: a terminal 400, a network 300, a server 200, and a database 500.
  • The terminal 400 is connected to the server 200 through the network 300.
  • the network 300 may be a wide area network or a local area network, or a combination of the two.
  • The image to be recognized is collected by the camera of the terminal 400. In response to receiving the object recognition request of the terminal 400, the server 200 reads the pre-stored target images in the database 500 and determines the matching relationship between the image to be recognized and the pre-stored target images. The server 200 then returns the determined matching relationship as the object recognition result to the display interface of the terminal 400 for display.
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the server 200 shown in FIG. 3 includes: at least one processor 210, a memory 250, at least one network interface 220 and a user interface 230.
  • the various components in the server 200 are coupled together through the bus system 240.
  • the bus system 240 is used to implement connection and communication between these components.
  • the bus system 240 also includes a power bus, a control bus, and a status signal bus.
  • various buses are marked as the bus system 240 in FIG. 3.
  • The processor 210 may be an integrated circuit chip with signal processing capabilities, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, where the general-purpose processor may be a microprocessor or any conventional processor.
  • the user interface 230 includes one or more output devices 231 that enable the presentation of media content, including one or more speakers and/or one or more visual display screens.
  • the user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, a mouse, a microphone, a touch screen display, a camera, and other input buttons and controls.
  • the memory 250 may be removable, non-removable, or a combination thereof.
  • Exemplary hardware devices include solid-state memory, hard disk drives, optical disk drives, and so on.
  • the memory 250 includes one or more storage devices that are physically remote from the processor 210.
  • the memory 250 includes volatile memory or nonvolatile memory, and may also include both volatile and nonvolatile memory.
  • the non-volatile memory may be a read only memory (ROM, Read Only Memory), and the volatile memory may be a random access memory (RAM, Random Access Memory).
  • the memory 250 described in the embodiment of the present application is intended to include any suitable type of memory.
  • the memory 250 can store data to support various operations. Examples of these data include programs, modules, and data structures, or a subset or superset thereof, as illustrated below.
  • the operating system 251 includes system programs for processing various basic system services and performing hardware-related tasks, such as a framework layer, a core library layer, a driver layer, etc., for implementing various basic services and processing hardware-based tasks;
  • the network communication module 252 is used to reach other computing devices via one or more (wired or wireless) network interfaces 220.
  • Exemplary network interfaces 220 include: Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
  • The presentation module 253 is used to enable the presentation of information via one or more output devices 231 (for example, a display screen, speakers, etc.) associated with the user interface 230 (for example, a user interface for operating peripheral devices and displaying content and information);
  • the input processing module 254 is configured to detect one or more user inputs or interactions from one of the one or more input devices 232 and translate the detected inputs or interactions.
  • FIG. 3 shows an artificial intelligence-based object recognition device 255 stored in the memory 250, which can be software in the form of programs and plug-ins, and includes the following software modules: an occlusion detection module 2551, an occluded binary image block acquisition module 2552, a binary mask query module 2553, a binary mask synthesis module 2554, a matching relationship determination module 2555, a binary mask dictionary building module 2556, an object recognition model training module 2557, and an affine transformation module 2558. These modules are logical, and thus can be arbitrarily combined or further split according to the functions they implement. The functions of each module will be described below.
  • In other embodiments, the artificial intelligence-based object recognition apparatus provided in the embodiments of the present application may be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor that is programmed to execute the artificial intelligence-based object recognition method provided in the embodiments of the present application; for example, the processor in the form of a hardware decoding processor may adopt one or more application-specific integrated circuits (ASIC), DSPs, programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic components.
  • the following describes the artificial intelligence-based object recognition method provided by the embodiments of the present application in two stages.
  • the first part is the training stage of the model
  • the second part is the recognition stage using the model.
  • Figure 4 is a schematic flowchart of the artificial intelligence-based object recognition method provided by an embodiment of the present application, which will be described in conjunction with steps 101-104 shown in Figure 4; the steps of the following method can be implemented by any type of electronic device (such as a terminal or a server) described above.
  • In step 101, based on an object image database, a training sample set consisting of object image sample pairs numbered for different positions is constructed, wherein each object image sample pair includes an object image sample and the same object image sample subjected to occlusion processing.
  • the object here can be a person, an animal or an object.
  • For example, occlusion-robust recognition can be based on an object recognition model used for face recognition.
  • It can also be based on an object recognition model used for animal facial recognition, where the recognition model can be used to identify a certain animal species or different animal categories.
  • Alternatively, occlusion-robust recognition can be performed based on an object recognition model that is specifically used to recognize certain types of items.
  • a training sample set can also be constructed first.
  • the training sample set is constructed based on the object image database.
  • In step 101, constructing the training sample set composed of object image sample pairs based on the object image database can be realized by the following technical solution: obtaining object image samples in the object image database, and uniformly segmenting each object image sample to obtain position numbers corresponding to the different object image sample blocks; performing occlusion processing on the object image sample block corresponding to a position number in the object image sample; constructing the object image sample and the occluded object image sample into a pair of object image samples with that position number; and forming the training sample set from object image sample pairs with different position numbers.
  • The object image sample is uniformly segmented; for example, the segmentation forms 12 object image sample blocks, which are numbered correspondingly so that each block corresponds to one position number. Occlusion processing is then performed on the block corresponding to a given position number. For example, for position number 11, occlusion processing is performed on the object image sample block corresponding to position number 11 to obtain an object image sample pair; this pair includes the original, non-occluded object image sample and the object image sample after the corresponding block is occluded.
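The uniform 12-block segmentation and per-position occlusion described above can be sketched as follows (the 4×3 grid and row-by-row numbering are assumptions of this example, and the "occluder" is simply a zeroed block):

```python
import numpy as np

def occlude_block(img, pos, rows=4, cols=3):
    """Zero out the block with the given position number (1..rows*cols),
    where blocks are numbered row by row across a uniform grid."""
    h, w = img.shape[0] // rows, img.shape[1] // cols
    r, c = divmod(pos - 1, cols)
    out = img.copy()
    out[r * h:(r + 1) * h, c * w:(c + 1) * w] = 0.0   # simulated occluder
    return out

img = np.ones((96, 96))
occluded = occlude_block(img, pos=11)   # under this numbering: bottom row, middle column
```

The pair `(img, occluded)` then corresponds to one object image sample pair with position number 11.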
  • For each position number, multiple object image sample pairs can be constructed; although the objects in different pairs are different, they are all occluded at the same position.
  • a paired differential twin network model is constructed based on the basic object recognition model and the mask generation model.
  • A paired differential twin network model is constructed based on the basic object recognition model and the mask generation model; the paired differential twin network model is then trained based on the training sample set, and a binary mask dictionary is constructed based on the trained model, where the index of the binary mask dictionary is the occluded binary image block and the index item is the corresponding binary mask.
  • the paired differential twin network model consists of two identical basic object recognition models.
  • the structure of the basic object recognition model is based on the convolutional neural network.
  • The absolute difference between the features extracted by the two basic object recognition models serves as an attention mechanism: the mask generation model processes these absolute values.
  • In essence, the mask generation model focuses on the feature elements that are affected by the occlusion.
  • The mask generation model is composed of common neural network units, including batch normalization layers, convolutional layers, and so on.
• the process of training the paired differential twin network is actually equivalent to training the mask generation model.
• the basic object recognition model is a general model that has already been trained to perform object recognition; in the process of training the paired differential twin network, the parameters of the basic object recognition model are fixed, and only the parameters of the mask generation model are updated.
  • step 103 a paired differential twin network model is trained based on the training sample set.
  • the paired differential twin network model is trained, which is specifically implemented by the following technical solutions.
• the mask generation model in the paired differential twin network model is initialized, including initializing a loss function involving the input samples, input sample features, classification probabilities, and mask generation model parameters; the following processing is performed during each iteration of training the paired differential twin network model: the object image sample pairs included in the training sample set are used as the input samples, and the paired differential twin network model extracts features from the input samples to obtain the input sample features; the occluded object image samples are classified and recognized through the object recognition model to obtain the classification probability; the input samples, input sample features, and classification probability are substituted into the loss function to determine the mask generation model parameters at which the loss function attains its minimum value; and the paired differential twin network model is updated according to the determined mask generation model parameters.
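A toy sketch of this training loop, under loud assumptions: the mask generation model is reduced to an elementwise affine map plus sigmoid (standing in for its actual normalization/convolution layers), and only the contrastive term of the loss is used (the classification term, omitted here for brevity, is what prevents the degenerate all-zero mask in the full scheme). The trunk features stay fixed; only the MG parameters are updated:

```python
import numpy as np

rng = np.random.default_rng(4)
f_clean = rng.normal(size=32)              # fixed trunk feature of the clean sample
f_occ = f_clean.copy()
f_occ[:8] += 2.0                           # occlusion corrupts the first 8 elements
d = np.abs(f_occ - f_clean)                # attention input to the mask generator

w, b = 0.0, 0.0                            # toy MG parameters
lr = 0.5
for step in range(200):
    m = 1.0 / (1.0 + np.exp(-(w * d + b)))         # mask in (0,1)
    # contrastive term: || m*f_occ - m*f_clean ||_1 = sum(m * d)
    s = m * (1.0 - m)                              # sigmoid derivative
    w -= lr * np.sum(d * s * d) / d.size           # gradient step on MG params
    b -= lr * np.sum(d * s) / d.size               # (trunk parameters stay fixed)
mask = 1.0 / (1.0 + np.exp(-(w * d + b)))
```

After training, the occlusion-corrupted elements receive lower mask values than the untouched ones, which is the behavior the iteration above is meant to produce.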
  • the training sample set includes object image sample pairs with different position numbers
• the paired differential twin network model is trained on the object image sample pairs with a given position number, and the mask generation model obtained after training is the mask generation model for that position number; the goal is to find out, through the mask generation model, which elements of the convolutional features of the object image are affected by occlusion and should be suppressed when the image block of each position number is occluded.
  • the mask generation model is initialized, and input samples in the loss function, output results, and parameters of the mask generation model are initialized, where the output results include input sample characteristics and classification probabilities.
• the input sample here is the object image sample pair included in the training sample set.
• the sample pairs for the corresponding position number are used for training, and the paired differential twin network model performs feature extraction on the input samples to obtain the input sample features, where the input sample features are obtained after processing by the mask generation model.
• the object image sample pairs included in the training sample set are used as the input samples, and feature extraction is performed on the input samples through the paired differential twin network model to obtain the input sample features. Specifically, this can be achieved by the following technical solution: the object image sample pairs numbered for the same position in the training sample set are taken as the input samples, and features are extracted from the input samples through the convolutional layers in the paired differential twin network model to obtain the first feature corresponding to the object image sample and the second feature corresponding to the object image sample that has undergone occlusion processing; the mask generation model in the paired differential twin network model processes the absolute value of the difference between the first feature and the second feature to obtain a mask for the position number; and the first feature and the second feature are respectively multiplied by the mask to obtain the input sample features.
• the mask generation model here is composed of common neural network units, including normalization layers, convolutional layers, and so on; the features obtained by its convolutional layers are mapped to the range [0,1], and the mask obtained by the mask generation model is multiplied element-wise with the corresponding convolutional features, yielding new convolutional features that serve as the input sample features.
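The differential forward pass just described can be illustrated with a toy mask generation model; the elementwise affine-plus-sigmoid stand-in and its parameters `w` and `b` are assumptions replacing the MG's actual normalization and convolution layers:

```python
import numpy as np

def mask_generator(diff, w=-3.0, b=2.0):
    """Toy MG: maps the absolute feature difference to (0,1) via an
    elementwise affine map and sigmoid, so larger differences (elements
    changed by occlusion) receive mask values closer to 0."""
    return 1.0 / (1.0 + np.exp(-(w * diff + b)))

rng = np.random.default_rng(1)
first = rng.normal(size=(4, 3, 3))      # first feature: clean sample (C, H, W)
second = first.copy()
second[:, 0, :] += 2.0                  # occlusion changes part of the feature

mask = mask_generator(np.abs(first - second))   # attention on changed elements
first_new, second_new = first * mask, second * mask   # input sample features
```

The absolute difference acts as the attention signal: where the two features agree the mask stays near 1, and where occlusion changed the feature the mask drops toward 0.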
  • the object image samples subjected to the occlusion processing are classified and identified through the object recognition model to obtain the classification probability.
  • the classification probability here may be the probability of correct classification.
• the classification probability and the input sample features are then used to revise and update the mask generation model; that is, the input samples, input sample features, and classification probability are substituted into the loss function to determine the mask generation model parameters at which the loss function attains its minimum value, and the paired differential twin network model is updated according to the determined mask generation model parameters.
• a binary mask dictionary is constructed based on the trained paired differential twin network model, where the index of the binary mask dictionary is the occluded binary image block and the index item of the binary mask dictionary is the binary mask.
  • a binary mask dictionary is constructed. Specifically, it can be realized by the following technical solutions.
• the paired differential twin network model performs mask extraction on the object image sample pairs with the same position number to obtain the mask set corresponding to that position number; each mask in the mask set is normalized, and the average is calculated based on the normalization results to determine the average mask corresponding to the position number; the occluded binary image block corresponding to the position number is used as the index of the binary mask dictionary, and the average mask is binarized so that the generated binary mask serves as the index item of the binary mask dictionary.
• the trained paired differential twin network is used to extract the mask set for the object image sample pairs with a given position number in the training sample set.
• the mask set includes N masks; each of the N masks is normalized, and the average is calculated based on the normalization result of each mask to determine the average mask corresponding to the position number.
  • the occlusion binary image block corresponding to the position number is used as the index of the binary mask dictionary, and the average mask is binarized to use the generated binary mask as the index item of the binary mask dictionary.
• the smallest θ*K mask values in the average mask are set to 0 and the remaining values are set to 1, where θ is a binarization ratio in the range [0,1] and K represents the total number of mask values.
• a basic object recognition model for acquiring the features of the pre-stored object image and the features of the image to be recognized can be trained; based on the training sample set, an object recognition model used to determine the matching relationship between the image to be recognized and the pre-stored object image is trained, where the object recognition model includes the basic object recognition model and a binary mask processing module.
• training the object recognition model used to determine the matching relationship between the image to be recognized and the pre-stored object image can be implemented by the following technical solution: the fully connected layer of the object recognition model is initialized, along with a loss function involving the input samples, the classification and recognition results, and the parameters of the fully connected layer; the following processing is performed during each iteration of training the object recognition model: the occluded object image samples included in the training sample set and the corresponding binary masks in the binary mask dictionary are used as the input samples, the input samples are classified and recognized through the object recognition model to obtain the classification and recognition results of the corresponding input samples, and the input samples and classification and recognition results are substituted into the loss function to determine the parameters of the fully connected layer at which the loss function attains its minimum value.
• FIG. 5A is an optional flowchart of an artificial intelligence-based object recognition method provided by an embodiment of the present application, which will be described in conjunction with steps 201-205 shown in FIG. 5A; the steps of the following method can be implemented on any type of electronic device described above (such as a terminal or a server).
  • step 201 the potential occlusion area of the object to be recognized in the image to be recognized is detected to obtain a binary image that characterizes the occlusion area and the unoccluded area of the object to be recognized.
• the potential occlusion area of the object to be recognized is an area in which the object to be recognized may or may not be occluded.
  • 0 represents a non-occluded pixel
  • 1 represents an occluded pixel.
• a fully convolutional neural network structure is used to perform occlusion detection on the object to be recognized in the image to be recognized.
• the fully convolutional network structure here is obtained through training based on artificially synthesized occlusion data and self-labeled real occlusion data.
• step 202 an occluded binary image block representing the occluded area is obtained from the binary image.
  • obtaining the occluded binary image block representing the occlusion area from the binary image in step 202 can be implemented through the following steps 2021-2023.
  • step 2021 the binary image is divided into a plurality of binary image blocks.
  • step 2022 the ratio of the number of occluded pixels in each binary image block obtained by segmentation is determined.
  • step 2023 when the ratio of the number of occluded pixels exceeds the number ratio threshold, the corresponding binary image block is determined as the occluded binary image block that represents the occluded area.
  • the binary image is uniformly divided to obtain multiple binary image blocks, for example, the binary image is divided into 25 binary image blocks, each row has 5 image blocks, and each column also has 5 image blocks.
  • the size of each image block is the same, and each binary image block has its own position number.
• for example, the image block in the second position in the first row can be numbered 12, and the image block in the fourth position in the third row can be numbered 34.
• occlusion judgment is performed on each binary image block to determine all the occluded binary image blocks that represent the occluded area of the binary image.
• some binary image blocks contain occluded pixels, but if the number of occluded pixels in a binary image block is relatively small, that block is not judged to be an occluded binary image block; a binary image block in which the proportion of occluded pixels exceeds the number ratio threshold is judged to be an occluded binary image block. That is, the ratio of the number of occluded pixels in each binary image block obtained by segmentation is first determined, and when the ratio exceeds the number ratio threshold, the corresponding binary image block is determined to be an occluded binary image block representing the occluded area.
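Steps 2021-2023 can be sketched as follows; the 5*5 grid and the one-half threshold follow the values mentioned in this text, and the position-numbering scheme (row 1 block 2 → 12, row 3 block 4 → 34) follows the example above:

```python
import numpy as np

def occluded_blocks(binary_map, grid=(5, 5), ratio_threshold=0.5):
    """Position numbers of the blocks whose fraction of occluded
    (value-1) pixels exceeds the threshold, row-major numbering."""
    rows, cols = grid
    h, w = binary_map.shape[0] // rows, binary_map.shape[1] // cols
    blocks = []
    for r in range(rows):
        for c in range(cols):
            patch = binary_map[r * h:(r + 1) * h, c * w:(c + 1) * w]
            if patch.mean() > ratio_threshold:          # step 2022/2023
                blocks.append((r + 1) * 10 + (c + 1))   # e.g. row 1, col 2 -> 12
    return blocks

occ_map = np.zeros((25, 25), dtype=int)   # toy occlusion-detection output
occ_map[0:5, 5:20] = 1                    # occlusion covering blocks 12, 13, 14
```

Calling `occluded_blocks(occ_map)` returns the occluded block numbers; note the two-digit numbering only works for grids up to 9x9, matching the 5*5 example here.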
  • step 203 based on the obtained occlusion binary image block, query the mapping relationship between the occlusion binary image block and the binary mask included in the binary mask dictionary to obtain the binary mask corresponding to the occlusion binary image block .
  • querying the mapping relationship between the occluded binary image block and the binary mask included in the binary mask dictionary in step 203 can be implemented through the following steps 2031-2032.
  • step 2031 the position number of the corresponding occluded binary image block is obtained.
  • step 2032 based on the position number of the corresponding occluded binary image block, query the mapping relationship between the position number of the occluded binary image block and the binary mask in the binary mask dictionary.
  • the position number here is the position number described above, and the mapping relationship between each occluded binary image block and the binary mask M is recorded in the binary mask dictionary, because the occluded binary image Blocks have a one-to-one correspondence with their respective position numbers, so by querying the mapping relationship between the position numbers and the binary masks of each occluded binary image block with the position numbers, the binary mask corresponding to the occluded binary image block can be obtained.
• the binary mask can characterize the convolutional feature elements affected by the corresponding occluded binary image block: the feature elements that are more affected are suppressed by the 0 values in the binary mask, and the feature elements that are less affected are retained by the 1 values in the binary mask.
  • step 204 the binary masks queried based on each occluded binary image block are synthesized to obtain a binary mask corresponding to the binary image.
• the binary masks queried for each occluded binary image block are synthesized, where the synthesis may be a logical OR operation; for example, for the occluded binary image blocks corresponding to numbers 12, 13, and 14, the queried binary masks are M 12 , M 13 and M 14 respectively.
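The text calls the synthesis a logical OR; read with the 0-suppress/1-keep convention stated above, this amounts to OR-ing together the zero (suppressed) regions of every queried mask, so an element is suppressed if any occluded block damages it. A sketch with hypothetical 6-element masks:

```python
import numpy as np

# hypothetical dictionary entries: position number -> binary mask over the
# top-level convolutional feature (1 = keep the element, 0 = suppress it)
mask_dict = {
    12: np.array([1, 0, 1, 1, 1, 1]),
    13: np.array([1, 1, 0, 1, 1, 1]),
    14: np.array([1, 1, 1, 0, 1, 1]),
}

def synthesize(block_ids, mask_dict):
    """OR the suppressed (0-valued) regions of every queried mask; an
    element survives only if no queried mask suppresses it."""
    suppressed = np.zeros_like(next(iter(mask_dict.values())))
    for b in block_ids:
        suppressed = suppressed | (mask_dict[b] == 0)
    return 1 - suppressed

M = synthesize([12, 13, 14], mask_dict)
```

This is equivalent to an elementwise AND of the keep-masks, which is the reading consistent with suppressing every damaged feature element.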
  • step 205 based on the binary mask of the corresponding binary image, the characteristics of the pre-stored object image, and the characteristics of the image to be recognized, the matching relationship between the image to be recognized and the pre-stored object image is determined.
  • step 205 based on the binary mask of the corresponding binary image, the characteristics of the pre-stored object image, and the characteristics of the image to be recognized, the matching relationship between the image to be recognized and the pre-stored object image can be determined by the following Steps 2051-2053 are implemented.
  • step 2051 the characteristics of the pre-stored object image and the characteristics of the image to be recognized are determined.
  • step 2052 the binary mask is respectively multiplied with the feature of the pre-stored object image and the feature of the image to be recognized to obtain the pre-stored feature corresponding to the pre-stored object image and the feature to be recognized corresponding to the image to be recognized.
  • step 2053 the similarity between the pre-stored feature and the feature to be identified is determined.
• when the similarity exceeds the similarity threshold, it is determined that the object included in the image to be recognized and the object included in the pre-stored object image belong to the same category.
• the features of the pre-stored object image and the feature of the image to be recognized are determined, and the binary mask is multiplied with the feature of the pre-stored object image and the feature of the image to be recognized, respectively, to obtain the pre-stored feature corresponding to the pre-stored object image and the to-be-recognized feature corresponding to the image to be recognized.
  • the similarity between the pre-stored feature and the to-be-recognized feature is determined, and when the similarity exceeds the similarity threshold, it is determined that the object included in the to-be-recognized image and the object included in the pre-stored object image belong to the same category.
• feature extraction is performed on the pre-stored object image and the image to be recognized through the basic object recognition model, and the binary mask processing module in the object recognition model multiplies the binary mask with the feature of the pre-stored object image and the feature of the image to be recognized, respectively, to obtain the pre-stored feature corresponding to the pre-stored object image and the to-be-recognized feature corresponding to the image to be recognized.
• the cosine similarity between the pre-stored feature and the to-be-recognized feature is calculated. Because, in the feature extraction stage, the feature extracted from the pre-stored clean, unoccluded object image is also multiplied by the binary mask, it can be ensured that the similarity is calculated based on the unoccluded part of the object in the image to be recognized and the corresponding part of the clean, unoccluded object image. For example, for a face whose eye region is occluded, the similarity is calculated based on the parts other than the eyes; even for the pre-stored clean face image, the final extracted features correspond to the parts other than the eyes. This ensures that the image to be recognized and the pre-stored object image retain similar information. When the similarity exceeds the similarity threshold, it is determined that the object included in the image to be recognized and the object included in the pre-stored object image belong to the same category.
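The masked matching step can be sketched as follows; the feature dimension, the corruption pattern, and the threshold value are illustrative assumptions:

```python
import numpy as np

def masked_cosine(feat_probe, feat_gallery, mask):
    """Multiply BOTH features by the binary mask, so similarity is
    computed only over the elements kept for the occluded probe."""
    a, b = feat_probe * mask, feat_gallery * mask
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
gallery = rng.normal(size=16)       # feature of the pre-stored clean image
probe = gallery.copy()
probe[:4] += 3.0                    # occlusion-corrupted feature elements
mask = np.ones(16)
mask[:4] = 0                        # dictionary mask suppresses them

sim_masked = masked_cosine(probe, gallery, mask)
sim_plain = float(probe @ gallery / (np.linalg.norm(probe) * np.linalg.norm(gallery)))
THRESHOLD = 0.9                     # assumed similarity threshold
same_person = sim_masked > THRESHOLD
```

Because the corrupted elements are zeroed out of both features, the masked similarity recovers the match that the plain cosine similarity underestimates.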
• before step 201, the following technical solution may be performed: the key points of the object to be recognized in the image to be recognized are detected, and the coordinate positions of the key points are determined; according to the coordinate positions of the key points, affine transformation is performed on the object to be recognized so that the key points are aligned to a standard template position consistent with the pre-stored object image; aligning the key points of the object to be recognized to the standard template position through affine transformation reduces the recognition error caused by differences in the position and posture of the object to be recognized.
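A minimal sketch of this alignment, assuming a two-point similarity transform (rotation, uniform scale, translation) from the eye centres only; the template coordinates are invented for illustration, and the full pipeline described later uses five key points:

```python
import numpy as np

# assumed standard template: eye-centre coordinates in the aligned crop
TEMPLATE = np.array([[38.0, 52.0], [74.0, 52.0]])

def align_by_eyes(eyes):
    """Similarity transform mapping detected eye key points onto the
    template positions: returns (R, t) such that p_aligned = R @ p + t."""
    src, dst = np.asarray(eyes, dtype=float), TEMPLATE
    v_src, v_dst = src[1] - src[0], dst[1] - dst[0]
    scale = np.linalg.norm(v_dst) / np.linalg.norm(v_src)
    angle = np.arctan2(v_dst[1], v_dst[0]) - np.arctan2(v_src[1], v_src[0])
    c, s = scale * np.cos(angle), scale * np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = dst[0] - R @ src[0]
    return R, t

R, t = align_by_eyes([[100.0, 120.0], [160.0, 120.0]])
aligned_right_eye = R @ np.array([160.0, 120.0]) + t   # lands on the template
```

Applying `R` and `t` to every pixel coordinate (or using them as the affine warp parameters) produces the aligned image at the standard template position.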
  • the object recognition method in this application can be applied to any face recognition scenarios, such as attendance systems, monitoring and tracing systems, security inspection systems, mobile phone computer unlocking, and so on.
  • the user only needs to register a frontal unobstructed face image in the system when the system is initialized and store it in the system database as a pre-stored face image, and only need to obtain the user's image to be recognized when performing recognition.
  • FIG. 6 is a schematic diagram of a process of performing object recognition in an artificial intelligence-based object recognition system provided by an embodiment of the present application.
  • the preprocessing module performs face detection and alignment processing.
  • the input face image 601 to be recognized is preprocessed through the preprocessing module.
• in the preprocessing process, first, the face in the input face image to be recognized is detected, and the coordinate positions of the left eye, right eye, nose, left mouth corner, and right mouth corner are located; then, according to the coordinate positions of these five key points, the face in the input face image to be recognized is affine-transformed to align with the uniform template position and is cropped to a fixed size, thereby obtaining the aligned face image 602 to be recognized.
  • the occlusion detection module detects the occlusion area.
• the occlusion detection module performs occlusion detection on the face image to be recognized, detects the partially occluded area of the face image, and outputs a binary image 603 of the same size as the face image to be recognized, in which 0 represents non-occluded pixels and 1 represents occluded pixels.
  • the face key point detection used here is based on a multi-task convolutional neural network.
• the occlusion detection used here is based on a fully convolutional network structure.
  • the training samples include artificially synthesized occlusion data and self-labeled real occlusion data.
• the mask generation module generates a binary mask M, where the mask generation module is the top-level convolutional feature mask generation module; it receives the detection results of the occlusion detection performed on the face image to be recognized and then synthesizes the binary mask M of the face image to be recognized from the binary mask dictionary.
  • the recognition module extracts features and performs face identification or authentication.
• the basic convolutional neural network and the binary mask of the image to be recognized are used to extract the features of the aligned face image to be recognized and of the pre-stored face images in the system database; the classification module in the recognition module then recognizes the face image to be recognized according to the acquired features of the face image to be recognized and the features of the pre-stored face images. In the application scenario of face authentication, the output result represents whether the face image to be recognized and the pre-stored face image in the system database are the same person; in the application scenario of face identification, the output result represents the category of the face image in the system database to which the face image to be recognized belongs, that is, whether the face to be recognized and the pre-stored face belong to the same category.
  • the index in the dictionary is the occlusion block of the face image, and the index item is the binary mask.
• the dictionary is generated for a basic face recognition model, such as the trunk convolutional neural network (Trunk CNN); this Trunk CNN is also the model used by the recognition module.
• the construction of the binary mask dictionary is divided into two steps: the training of the mask generation model, and the establishment of the binary mask dictionary based on the trained mask generation model.
  • FIG. 7 is a schematic diagram of the segmentation of a face image in an artificial intelligence-based object recognition method provided by an embodiment of the present application.
• a face image is divided into 5*5 blocks, and a mask generation model (MG, Mask Generator) is trained for each face image block, where the mask generation model is the aforementioned mask generation model.
• the purpose of each MG is to find out, when a certain block b j of the face is occluded, which elements among the top-level convolutional features of the face image are greatly affected by the occlusion and should have their response values reduced.
• the embodiment of the present application provides a Pairwise Differential Siamese Network (PDSN) structure to learn each MG.
  • FIG 8 is a schematic diagram of the structure of the paired differential twin network in the artificial intelligence-based object recognition method provided by the embodiment of the present application.
  • the paired differential twin network is composed of two identical Trunk CNNs.
• the overall input of the PDSN network is paired face images (x i , x i j ), where x i represents a clean, unoccluded face, x i j represents an occluded face, and N represents the number of face pairs; x i j is a face image belonging to the same category as x i , the only difference being that block b j on the face is occluded.
  • Paired face images extract their top-level convolutional features separately through shared Trunk CNN
  • the absolute value of the difference between the top-level convolution features of the two plays the role of the attention mechanism, making the MG pay attention to the characteristic elements that have been changed by the occlusion.
  • the core module in the paired differential twin network is MG.
• MG is composed of common neural network units, including batch normalization (BN), convolutional layers, etc., and finally passes through a logistic regression (sigmoid) activation function so that the output value of the MG is mapped to the range [0,1].
• the MG outputs a mask of the same size as the top-level convolution feature, i.e., a three-dimensional tensor with the same size as the top-level convolution feature.
  • the convolution feature f( ⁇ ) here refers to the output of the convolutional layer of the convolutional neural network, which is usually a three-dimensional tensor with C channels, height H, and width W, that is, f( ⁇ ) ⁇ R C* H*W
  • the convolution feature element here refers to the tensor element with coordinates (c, h, w)
• the feature elements at the same spatial position of the convolution feature here refer to the C channel elements that share the same h-dimension and w-dimension coordinates.
• the loss function in the training process is composed of two kinds of loss functions: the classification loss function l cls and the contrastive loss function l diff .
• the purpose of the classification loss function is to make the new feature, obtained by multiplying the top-level convolutional feature of the occluded face by the mask, improve the recognition rate of the Trunk CNN classifier, so that the MG assigns lower mask values to the feature elements that hinder recognition;
• the purpose of the contrastive loss function is to make the new feature of the occluded face as close as possible to the convolutional feature of the corresponding clean face, so that the MG assigns lower mask values to the feature elements with large differences between the two.
• the combined effect of the two loss functions encourages the MG to assign low mask values to the elements that differ greatly between the convolutional features of the occluded face and those of the clean face and that affect recognition; these are the elements corrupted by occlusion. Therefore, the loss function is constructed as:
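The printed formula appears to have been lost in extraction. Based on the stated roles of l cls and l diff, a plausible reconstruction (λ is an assumed weighting hyperparameter not stated in this text, y_i is the identity label, and ⊙ denotes element-wise multiplication) is:

```latex
L \;=\; \frac{1}{N}\sum_{i=1}^{N}
  \Big[\, l_{cls}\big(F\big(f(x_i^{\,j}) \odot M_i^{\,j}\big),\, y_i\big)
  \;+\; \lambda \,\big\| f(x_i^{\,j}) \odot M_i^{\,j} \;-\; f(x_i) \odot M_i^{\,j} \big\|_1 \Big],
\qquad
M_i^{\,j} \;=\; \mathrm{MG}\big(\lvert f(x_i^{\,j}) - f(x_i)\rvert\big)
```

Here the first term is the classification loss on the masked occluded-face feature and the second is the contrastive term pulling the masked occluded and clean features together, matching the two purposes described above.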
• F represents the fully connected layer or the average pooling layer behind the top convolutional layer of the Trunk CNN, and the classification term indicates the probability that the Trunk CNN classifies the sample correctly.
  • the face area is divided into B*B non-overlapping areas, so a total of B*B MGs need to be trained.
• the Trunk CNN parts of these PDSNs are all identical with fixed parameters, and their training data comes from the same database.
• the output of the MG captures, when each block of the face image is occluded, which elements of the face image's top-level convolutional features are weakened; the weakened elements correspond to the lower values in the MG output.
  • the index of the binary mask dictionary is the face block b j
  • the index item is a binary mask M j .
• the mask M j has the same size as the top-level convolutional feature of the Trunk CNN, and a value of 0 in M j represents a feature element that should be removed from recognition when the face block b j is occluded.
  • FIG. 9 is a schematic diagram of the calculation process of each index item M j in the binary mask dictionary of the artificial intelligence-based object recognition method provided by an embodiment of the present application.
• in step 901, multiple face image sample pairs are input into the above-mentioned trained PDSN to obtain a series of mask sets output by the MG.
• this mask set can be denoted as { M i j }, where N represents the number of sample pairs and j represents the position number of the MG that output the masks.
  • the face image sample pairs here can be the same as the training samples used in the above-mentioned training MG process.
• in step 902, each mask in the mask set generated in step 901 is normalized; the corresponding normalization formula maps each mask value m to (m - min())/(max() - min()).
  • max() is the maximum value of the sample data
  • min() is the minimum value of the sample data
• in step 903, the average of these normalized masks is calculated to obtain the average mask corresponding to the j-th MG.
  • step 904 Binarize the average mask to obtain a binary dictionary index item, where the binary dictionary index item is the binary mask M j .
• θ is a real number in the range [0,1], which can be set to 0.25; the smallest θ*K values in the average mask are set to 0 and the remaining values are set to 1.
• a corresponding binary mask is generated for each MG, thereby constructing a binary mask dictionary that maps occluded face image blocks to binary masks.
  • the dictionary here is the occlusion block-mask dictionary.
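The dictionary-entry computation of steps 901-904 can be sketched as follows; the mask shape and sample count are illustrative, the θ = 0.25 binarization ratio follows the value given above, and the random masks stand in for actual MG outputs:

```python
import numpy as np

def build_index_item(mask_set, theta=0.25):
    """One dictionary index item: normalize each mask to [0,1] (step 902),
    average them (step 903), then set the smallest theta-fraction of
    values to 0 and the rest to 1 (step 904)."""
    norm = [(m - m.min()) / (m.max() - m.min()) for m in mask_set]
    avg = np.mean(norm, axis=0)                 # average mask for this MG
    k = int(theta * avg.size)                   # number of elements to zero out
    binary = np.ones_like(avg)
    binary.flat[np.argsort(avg, axis=None)[:k]] = 0
    return binary

rng = np.random.default_rng(3)
masks = [rng.random((2, 4)) for _ in range(5)]  # MG outputs for N=5 sample pairs
M_j = build_index_item(masks)                   # one binary dictionary entry
```

Running this per MG (one per face block) fills the occlusion block-mask dictionary.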
  • FIG. 10 is a schematic flowchart of synthesizing the binary mask M of the face image to be recognized in the artificial intelligence-based object recognition method provided by an embodiment of the present application.
• in step 1001, the occluded face image blocks are determined.
• the occlusion detection result is a binary image of the same size as the face image to be recognized, in which 0 represents non-occluded pixels and 1 represents occluded pixels; when the number of pixels with value 1 within the range of a face image block in the occlusion detection result is greater than half of the total number of pixels in that range, the face image block is determined to be an occluded face image block.
• in step 1002, the index items of the occluded face image blocks are queried from the binary mask dictionary, and the binary mask M of the face to be recognized is synthesized, where the index items are M j . Taking the face image shown in Figure 6 as an example, the occluded face blocks determined in step 1001 are b 12 , b 13 , b 14 ; according to the binary mask dictionary established in the training phase, the binary mask corresponding to the face image to be recognized is M = M 12 ∨ M 13 ∨ M 14 , where ∨ represents the logical OR operation.
  • FIG. 11 is a schematic diagram of feature extraction in an artificial intelligence-based object recognition method provided by an embodiment of the present application.
• the Trunk CNN used in the feature extraction stage has exactly the same parameters as in the dictionary construction stage.
• there is an additional branch inputting the binary mask M in the structure; that is, a branch inputting the binary mask M is added on top of the basic object recognition model.
• the parameters of the fully connected layer are fine-tuned using arbitrarily occluded face samples and their binary masks, while all parameters before the fully connected layer remain unchanged; this fine-tuning stage uses a very small learning rate of 1e -4 and can be completed in 6 passes of training, and the loss function is the same classification loss function used when training the Trunk CNN.
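A toy sketch of this fine-tuning stage, with the fully connected layer reduced to a single softmax-regression matrix; the feature dimension, class count, and data are placeholders, and the frozen "top-level convolutional features" are just fixed arrays:

```python
import numpy as np

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))      # frozen top-level conv features (flattened)
masks = (rng.random((8, 16)) > 0.3).astype(float)   # their binary masks M
labels = rng.integers(0, 3, size=8)   # 3 hypothetical identity classes
onehot = np.eye(3)[labels]

W = rng.normal(scale=0.1, size=(16, 3))   # fully connected layer: only trainable part
lr = 1e-4                                  # the very small fine-tuning learning rate

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def ce_loss(W):
    p = softmax((feats * masks) @ W)       # classification loss on masked features
    return float(-np.log(p[np.arange(len(labels)), labels]).mean())

loss_before = ce_loss(W)
for epoch in range(6):                     # the 6 passes of fine-tuning
    x = feats * masks                      # masked features enter the FC layer
    p = softmax(x @ W)
    W -= lr * (x.T @ (p - onehot)) / len(x)    # update only the FC parameters
loss_after = ce_loss(W)
```

Everything before the fully connected layer stays fixed; only `W` moves, and with such a small learning rate the classification loss decreases gently rather than retraining the network.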
  • the top-level convolution feature of the face image can be directly stored in the database.
• the mask M is respectively multiplied with the top-level convolutional feature of the face image to be recognized and the top-level convolutional features in the database, and the final feature vectors used for classification are then obtained through the fine-tuned fully connected layer or average pooling layer of the Trunk CNN.
• s(p, g i ) is the cosine similarity between the feature vector f p of the face image to be recognized and the feature vector of each face image g i in the database.
• the features of the clean, unoccluded faces in the database are also multiplied by the mask M, which ensures that the similarity is calculated based on the unoccluded part of the face image to be recognized, that is, that the face image to be recognized and the face image features in the database retain similar information.
• the nearest neighbor classifier can be used, that is, the category of the face image in the database with the highest similarity to the test face is taken as the category of the face to be recognized.
  • other commonly used classifiers can also be used.
• threshold judgment can be used, that is, when the similarity between the two is higher than a certain threshold, they are considered to be the same person, and otherwise not.
  • a classifier dedicated to face authentication can also be trained on the feature vectors.
  • FIG. 12 is a schematic diagram of model construction of an artificial intelligence-based object recognition method provided by an embodiment of the present application.
  • the database is not limited: it can be a common public face database or the user's own private database, as long as the preprocessing of the training data is the same as the foregoing preprocessing.
  • the model training process of the object recognition method provided by the embodiments of this application is as follows: in step 1201, a basic object recognition model is trained with a face database; in step 1202, the parameters of the basic object recognition model are fixed, and (clean, occluded) face sample pairs are used to train B*B pairwise differential siamese network models, establishing the binary occlusion block-mask dictionary; in step 1203, the parameters before the Trunk CNN's fully connected layer are fixed, and arbitrarily occluded faces with their corresponding masks are used to fine-tune the fully connected layer parameters of the Trunk CNN.
  • the artificial-intelligence-based object recognition device 255 may include: an occlusion detection module 2551, configured to detect potential occluded regions of the object to be recognized in the image to be recognized, to obtain a binary image characterizing the occluded and unoccluded regions of the object to be recognized;
  • an occluded binary image block acquisition module 2552, configured to obtain, from the binary image, the occluded binary image blocks characterizing the occluded regions;
  • a binary mask query module 2553, configured to query, based on the occluded binary image blocks, the mapping between occluded binary image blocks and binary masks included in the binary mask dictionary, to obtain the binary mask corresponding to each occluded binary image block;
  • a binary mask synthesis module 2554, configured to synthesize the binary masks queried for each occluded binary image block, to obtain the binary mask corresponding to the binary image.
  • the occluded binary image block acquisition module 2552 is further configured to: divide the binary image into multiple binary image blocks; determine the ratio of occluded pixels in each binary image block obtained by the division; and when the ratio of occluded pixels exceeds a ratio threshold, determine the binary image block as an occluded binary image block characterizing the occluded region.
  • the binary mask query module 2553 is further configured to: obtain the position number of each occluded binary image block; and, based on that position number, query in the binary mask dictionary the mapping between the position numbers of occluded binary image blocks and binary masks.
  • the matching relationship determination module 2555 is further configured to: determine the features of the pre-stored object image and the features of the image to be recognized; multiply the binary mask with the features of the pre-stored object image and with the features of the image to be recognized, respectively, to obtain pre-stored features corresponding to the pre-stored object image and to-be-recognized features corresponding to the image to be recognized; and determine the similarity between the pre-stored features and the to-be-recognized features, and when the similarity exceeds a similarity threshold, determine that the object in the image to be recognized and the object in the pre-stored object image belong to the same category.
  • the artificial-intelligence-based object recognition device 255 further includes: a binary mask dictionary construction module 2556, configured to: construct, based on an object image database, a training sample set composed of object image sample pairs for different position numbers, where an object image sample pair includes an object image sample and an occlusion-processed object image sample; construct a pairwise differential siamese network model based on the basic object recognition model and the mask generation model; train the pairwise differential siamese network model based on the training sample set; and construct a binary mask dictionary based on the trained pairwise differential siamese network model, where the indexes of the binary mask dictionary are occluded binary image blocks and the index items are binary masks.
  • the binary mask dictionary construction module 2556 is further configured to: obtain object image samples from the object image database and uniformly segment them to obtain position numbers corresponding to different object image sample blocks; apply occlusion processing to the object image sample block corresponding to each position number; construct the object image sample and the occlusion-processed object image sample into an object image sample pair for that position number; and form the training sample set from the object image sample pairs of different position numbers.
  • the binary mask dictionary construction module 2556 is further configured to: initialize the mask generation model in the pairwise differential siamese network model, and initialize a loss function involving the input samples, input sample features, classification probability, and mask generation model parameters; and, in each training iteration of the pairwise differential siamese network model, perform the following: take the object image sample pairs in the training sample set as input samples, and extract their features through the pairwise differential siamese network model to obtain the input sample features; classify the occlusion-processed object image samples through the object recognition model to obtain the classification probability; substitute the input samples, input sample features, and classification probability into the loss function to determine the pairwise differential siamese network model parameters at which the loss function attains its minimum; and update the pairwise differential siamese network model according to the determined mask generation model parameters.
  • the binary mask dictionary construction module 2556 is further configured to: take the object image sample pairs for the same position number in the training sample set as input samples, and extract features from them through the convolutional layers of the pairwise differential siamese network model, to obtain a first feature and a second feature corresponding to the object image sample and the occlusion-processed object image sample, respectively; perform mask generation on the absolute value of the difference between the first feature and the second feature through the mask generation model in the pairwise differential siamese network model, to obtain a mask for that position number; and multiply the first feature and the second feature by the mask, respectively, to obtain the input sample features.
  • the binary mask dictionary construction module 2556 is further configured to: extract masks for the object image sample pairs with the same position number through the pairwise differential siamese network model, to obtain a mask set corresponding to that position number; normalize each mask in the mask set and determine the average mask corresponding to the position number; and use the occluded binary image block corresponding to the position number as an index of the binary mask dictionary, and binarize the average mask so that the generated binary mask serves as the corresponding index item of the binary mask dictionary.
  • the artificial-intelligence-based object recognition device 255 further includes an object recognition model training module 2557 configured to: train, based on a training sample set built from the object image database, a basic object recognition model used to obtain the features of pre-stored object images and of images to be recognized; and train, based on the training sample set, an object recognition model used to determine the matching relationship between the image to be recognized and the pre-stored object image, where the object recognition model includes the basic object recognition model and a binary mask processing module.
  • the object recognition model training module 2557 is further configured to: initialize the fully connected layer of the object recognition model, and initialize a loss function involving the input samples, classification results, and the fully connected layer parameters of the object recognition model;
  • in each training iteration, the object recognition model performs the following: the occlusion-processed object image samples in the training sample set and their corresponding binary masks in the binary mask dictionary are taken as input samples, and the object recognition model classifies the input samples to obtain the classification results of the corresponding input samples; the input samples and classification results are substituted into the loss function to determine the fully connected layer parameters at which the loss function attains its minimum; and the object recognition model is updated according to the determined fully connected layer parameters.
  • the artificial-intelligence-based object recognition device 255 further includes an affine transformation module 2558 configured to: detect the key points of the object to be recognized in the image to be recognized and determine their coordinate positions; and, according to the coordinate positions of the key points, apply an affine transformation to the object to be recognized, to align the key points to a standard template position consistent with the pre-stored object images.
  • an embodiment of this application provides a computer-readable storage medium storing executable instructions;
  • when executed by a processor, the executable instructions cause the processor to execute the artificial-intelligence-based object recognition method provided by the embodiments of this application, for example the method shown in Fig. 4 and Figs. 5A-5D.
  • the computer-readable storage medium may be FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, an optical disc, or CD-ROM, etc.; it may also be any device including one of the foregoing memories or any combination thereof.
  • the executable instructions may take the form of programs, software, software modules, scripts, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as an independent program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • executable instructions may, but do not necessarily, correspond to files in a file system, and may be stored as part of a file holding other programs or data, for example in one or more scripts in a HyperText Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (for example, files storing one or more modules, subroutines, or code sections).
  • executable instructions can be deployed to be executed on one computing device, on multiple computing devices at one location, or on multiple computing devices distributed across multiple locations and interconnected by a communication network.
  • the performance of recognizing unoccluded objects can be maintained.
  • the influence of the occluded region on the feature elements of the object to be recognized is suppressed, so the recognition accuracy for occluded objects is greatly improved; its test performance on both real-occlusion and synthetic-occlusion databases is higher than that of the related technology.
  • the electronic device distinguishes the occluded and unoccluded regions in the image to be recognized and obtains the binary mask of the occluded region, and then performs image recognition based on the binary mask, the image to be recognized, and the pre-stored image; as a result, when the object to be recognized is occluded, the influence of the occluded region on the object's feature elements is suppressed, and the accuracy of recognizing occluded objects is greatly improved.

Abstract

This application provides an artificial-intelligence-based object recognition method, apparatus, device, and storage medium, relating to artificial intelligence technology. The method includes: detecting potential occluded regions of an object to be recognized in an image to be recognized, and obtaining a binary image characterizing the occluded and unoccluded regions of the object; obtaining, from the binary image, occluded binary image blocks characterizing the occluded regions; based on the occluded binary image blocks, querying the mapping between occluded binary image blocks and binary masks included in a binary mask dictionary, to obtain the binary mask corresponding to each occluded binary image block; synthesizing the binary masks queried for each occluded binary image block, to obtain the binary mask corresponding to the binary image; and determining the matching relationship between the image to be recognized and a pre-stored object image based on the binary mask corresponding to the binary image, the features of the pre-stored object image, and the features of the image to be recognized.

Description

Object recognition method, apparatus, electronic device, and readable storage medium
Cross-reference to related applications
This application is based on and claims priority to Chinese patent application No. 201911013447.1, filed on October 23, 2019, the entire content of which is incorporated herein by reference.
Technical field
This application relates to artificial intelligence technology, and in particular to an artificial-intelligence-based object recognition method, apparatus, electronic device, and computer-readable storage medium.
Background
Artificial intelligence (AI) comprises the theories, methods, techniques, and application systems that use digital computers, or machines controlled by digital computers, to simulate, extend, and expand human intelligence, perceive the environment, acquire knowledge, and use knowledge to obtain optimal results.
Deep learning (DL) is a multi-disciplinary field involving probability theory, statistics, approximation theory, convex analysis, algorithmic complexity theory, and other subjects. It studies how computers can simulate or implement human learning behavior to acquire new knowledge or skills and to reorganize existing knowledge structures so as to continuously improve their own performance. Deep learning typically includes techniques such as artificial neural networks, belief networks, reinforcement learning, transfer learning, and inductive learning.
With the development of artificial intelligence in recent years, deep learning has come to dominate the object recognition field, but with current deep learning algorithms, recognition performance degrades severely when the object is partially occluded.
Summary
The embodiments of this application provide an artificial-intelligence-based object recognition method, apparatus, and computer-readable storage medium that can maintain the recognition accuracy for unoccluded objects while improving the recognition accuracy for partially occluded objects.
The technical solutions of the embodiments of this application are implemented as follows.
An embodiment of this application provides an artificial-intelligence-based object recognition method, executed by an electronic device, the method including:
detecting potential occluded regions of an object to be recognized in an image to be recognized, to obtain a binary image characterizing the occluded and unoccluded regions of the object to be recognized;
obtaining, from the binary image, occluded binary image blocks characterizing the occluded regions;
based on the occluded binary image blocks, querying the mapping between occluded binary image blocks and binary masks included in a binary mask dictionary, to obtain the binary masks corresponding to the occluded binary image blocks;
synthesizing the binary masks queried for each occluded binary image block, to obtain the binary mask corresponding to the binary image; and
determining the matching relationship between the image to be recognized and a pre-stored object image based on the binary mask corresponding to the binary image, the features of the pre-stored object image, and the features of the image to be recognized.
An embodiment of this application provides an artificial-intelligence-based object recognition apparatus, including:
an occlusion detection module, configured to detect potential occluded regions of an object to be recognized in an image to be recognized, to obtain a binary image characterizing the occluded and unoccluded regions of the object to be recognized;
an occluded binary image block acquisition module, configured to obtain, from the binary image, occluded binary image blocks characterizing the occluded regions;
a binary mask query module, configured to query, based on the occluded binary image blocks, the mapping between occluded binary image blocks and binary masks included in the binary mask dictionary, to obtain the binary masks corresponding to the occluded binary image blocks;
a binary mask synthesis module, configured to synthesize the binary masks queried for each occluded binary image block, to obtain the binary mask corresponding to the binary image; and
a matching relationship determination module, configured to determine the matching relationship between the image to be recognized and a pre-stored object image based on the binary mask corresponding to the binary image, the features of the pre-stored object image, and the features of the image to be recognized.
An embodiment of this application provides an electronic device, including:
a memory, configured to store executable instructions implementing the artificial-intelligence-based object recognition method provided by the embodiments of this application; and
a processor, configured to execute the executable instructions stored in the memory, to implement the artificial-intelligence-based object recognition method provided by the embodiments of this application.
An embodiment of this application provides a computer storage medium storing executable instructions which, when executed by a processor, implement the artificial-intelligence-based object recognition method provided by the embodiments of this application.
The embodiments of this application have the following beneficial effects:
The artificial-intelligence-based object recognition method provided by the embodiments of this application distinguishes the occluded and unoccluded regions in the image to be recognized and obtains the binary mask of the occluded region, and then performs image recognition based on the binary mask, the image to be recognized, and the pre-stored image; thus, when the object to be recognized is occluded, the influence of the occluded region on the object's feature elements is suppressed, and the accuracy of recognizing occluded objects is greatly improved.
Brief description of the drawings
Fig. 1 is a schematic diagram of occlusion recognition via a mask network in the related technology;
Fig. 2 is a schematic diagram of an application scenario of the artificial-intelligence-based object recognition system provided by an embodiment of this application;
Fig. 3 is a schematic structural diagram of the electronic device provided by an embodiment of this application;
Fig. 4 is a schematic flowchart of the artificial-intelligence-based object recognition method provided by an embodiment of this application;
Figs. 5A-5D are schematic flowcharts of the artificial-intelligence-based object recognition method provided by an embodiment of this application;
Fig. 6 is a schematic flowchart of object recognition performed by the artificial-intelligence-based object recognition system provided by an embodiment of this application;
Fig. 7 is a schematic diagram of face image segmentation in the artificial-intelligence-based object recognition method provided by an embodiment of this application;
Fig. 8 is a schematic structural diagram of the pairwise differential siamese network in the artificial-intelligence-based object recognition method provided by an embodiment of this application;
Fig. 9 is a schematic flowchart of computing each index item M_j of the binary mask dictionary in the artificial-intelligence-based object recognition method provided by an embodiment of this application;
Fig. 10 is a schematic flowchart of synthesizing the binary mask M of the face image to be recognized in the artificial-intelligence-based object recognition method provided by an embodiment of this application;
Fig. 11 is a schematic diagram of feature extraction in the artificial-intelligence-based object recognition method provided by an embodiment of this application;
Fig. 12 is a schematic diagram of model construction in the artificial-intelligence-based object recognition method provided by an embodiment of this application.
Detailed description
To make the objectives, technical solutions, and advantages of this application clearer, this application is described in further detail below with reference to the drawings. The described embodiments should not be regarded as limiting this application; all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the scope of protection of this application.
In the following description, "some embodiments" describes subsets of all possible embodiments; it should be understood that "some embodiments" may be the same subset or different subsets of all possible embodiments, and the subsets can be combined with each other where there is no conflict.
In the following description, the terms "first/second" merely distinguish similar objects and do not imply any particular ordering; where permitted, "first/second" may be interchanged in a specific order or sequence, so that the embodiments of this application described here can be implemented in orders other than those illustrated or described.
Unless otherwise defined, all technical and scientific terms used here have the same meanings as commonly understood by those skilled in the technical field to which this application belongs. The terms used here are only for describing the embodiments of this application and are not intended to limit this application.
Before further describing the embodiments of this application in detail, the nouns and terms involved are explained; the following interpretations apply.
1) Convolutional feature f(·): in this document, the output of a convolutional layer of a convolutional neural network, usually a three-dimensional tensor with C channels, height H, and width W, i.e., f(·) ∈ R^{C×H×W}.
2) Convolutional feature element: the tensor element with coordinates (c, h, w).
3) Feature elements at the same spatial position of a convolutional feature: the elements of the C channels with the same h- and w-coordinates.
4) Mask: a three-dimensional tensor with the same size as the top-level convolutional feature.
Deep learning dominates the object recognition field; however, deep learning algorithms of the related technology suffer severe performance degradation under partial occlusion. Referring to Fig. 1, Fig. 1 is a schematic diagram of occlusion recognition via a mask network in the related technology. In the related technology, a mask network module is embedded in the middle layers of a basic convolutional neural network to form a recognition network. This module uses two convolutional layers to learn a set of weights M(i,j) directly from the input object image: the input image is processed by a convolutional layer for feature extraction, then by a max pooling layer, then again by a convolutional layer and a max pooling layer, yielding a set of weights M(i,j). Each weight is multiplied with the feature at the corresponding spatial position of the middle-layer convolutional feature of the basic convolutional network. Through end-to-end training, the module learns to output high weights for useful features and low weights for features corrupted by occlusion, thereby weakening the influence of occlusion.
However, the mask network branch of this scheme outputs the same weight for the feature elements of all channels at the same spatial position of the convolutional feature; that is, it assumes that the feature elements of every channel are affected by occlusion in the same way. As shown in Fig. 1, from the original feature U to the weighted feature V, the feature elements are not weighted differently along the channel dimension. Analysis and experiments in this application show that, even at the same spatial position of the convolutional feature, the way the feature element values of the different channels change under occlusion differs considerably; the related-technology scheme is therefore flawed in principle. Moreover, in the application scenario of an object recognition system, recognition usually proceeds by computing the similarity between the features of one object to be recognized and the features of each object in the database. The idea of the scheme in Fig. 1 is only to reduce the influence of the occluded part on the features of the occluded object to be recognized; it does not resolve the information inconsistency that arises when the features of the object to be recognized are compared with the object features in the database. For example, for an object to be recognized wearing sunglasses, the scheme only makes the sunglasses affect the test object's features as little as possible, whereas an unoccluded object in the database still retains, under this network structure, the features of the original region covered by the sunglasses; when computing the similarity, that original region still causes strong inconsistency, so the influence of the occlusion in fact remains.
Therefore, the problem this application addresses is: based on a deep convolutional network that performs well in ordinary recognition scenarios (no or little occlusion), to build an occlusion-robust object recognition system that, following human visual experience, explicitly finds the convolutional feature elements corrupted under arbitrary occlusion and, when recognizing the object to be recognized, removes the interference information carried by these feature elements from the similarity computation, ensuring that recognition is performed on the unoccluded part of the object to be recognized, consistent with human visual experience.
The embodiments of this application propose a pairwise differential siamese network structure to explicitly learn the mapping between occluded regions and the feature elements corrupted by occlusion. Based on this mapping, a binary mask dictionary is built, where each index item indicates the feature elements strongly affected when a certain region is occluded. From this dictionary, the feature elements that should be removed under arbitrary occlusion can be obtained, and their response values are suppressed during recognition, thereby achieving robustness to occlusion.
The embodiments of this application provide an artificial-intelligence-based object recognition method, apparatus, electronic device, and computer-readable storage medium that, when the object to be recognized is occluded, suppress the influence of the occluded region on the object's feature elements, greatly improving the accuracy of recognizing occluded objects. Exemplary applications of the electronic device provided by the embodiments of this application are described below; the electronic device can be implemented as various types of user terminals such as laptop computers, tablet computers, desktop computers, set-top boxes, and mobile devices (e.g., mobile phones, portable music players, personal digital assistants, dedicated messaging devices, portable game devices), or as a server. An exemplary application with the device implemented as a server is described below.
In some embodiments, the server 200 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms. The terminal 400 may be, but is not limited to, a smartphone, tablet computer, laptop computer, desktop computer, smart speaker, or smart watch. The terminal and the server may be connected directly or indirectly in a wired or wireless manner, which is not limited in the embodiments of this application.
Referring to Fig. 2, Fig. 2 is a schematic diagram of an application scenario of the artificial-intelligence-based object recognition system provided by an embodiment of this application. The object recognition system 100 further includes a terminal 400, a network 300, a server 200, and a database 500. The terminal 400 connects to the server 200 through the network 300, which may be a wide area network, a local area network, or a combination of the two. The camera of the terminal 400 captures the image to be recognized; in response to receiving an object recognition request from the terminal 400, the server 200 reads the pre-stored object images in the database 500 and determines the matching relationship between the image to be recognized and the pre-stored object images. The server 200 returns the determined matching relationship to the display interface of the terminal 400 as the object recognition result for display.
Referring to Fig. 3, Fig. 3 is a schematic structural diagram of the electronic device provided by an embodiment of this application. The server 200 shown in Fig. 3 includes at least one processor 210, a memory 250, at least one network interface 220, and a user interface 230. The components of the server 200 are coupled together by a bus system 240. It is understood that the bus system 240 implements connection and communication among these components; in addition to a data bus, it includes a power bus, a control bus, and a status signal bus. For clarity, however, the various buses are all labeled as bus system 240 in Fig. 3.
The processor 210 may be an integrated circuit chip with signal processing capability, such as a general-purpose processor, a digital signal processor (DSP), another programmable logic device, a discrete gate or transistor logic device, or discrete hardware components; the general-purpose processor may be a microprocessor or any conventional processor.
The user interface 230 includes one or more output devices 231 enabling presentation of media content, including one or more speakers and/or one or more visual displays. The user interface 230 also includes one or more input devices 232, including user interface components that facilitate user input, such as a keyboard, mouse, microphone, touch screen display, camera, and other input buttons and controls.
The memory 250 may be removable, non-removable, or a combination thereof. Exemplary hardware devices include solid-state memory, hard disk drives, optical disc drives, and the like. The memory 250 may include one or more storage devices physically remote from the processor 210.
The memory 250 includes volatile memory or non-volatile memory, and may include both. The non-volatile memory may be read-only memory (ROM) and the volatile memory may be random access memory (RAM). The memory 250 described in the embodiments of this application is intended to include any suitable type of memory.
In some embodiments, the memory 250 can store data to support various operations; examples of the data include programs, modules, and data structures, or subsets or supersets thereof, as illustrated below.
Operating system 251, including system programs for handling various basic system services and performing hardware-related tasks, such as a framework layer, core library layer, and driver layer, used to implement various basic services and handle hardware-based tasks;
network communication module 252, used to reach other computing devices via one or more (wired or wireless) network interfaces 220; exemplary network interfaces 220 include Bluetooth, Wireless Fidelity (WiFi), Universal Serial Bus (USB), etc.;
presentation module 253, used to enable presentation of information via one or more output devices 231 (e.g., display screens, speakers) associated with the user interface 230 (e.g., a user interface for operating peripherals and displaying content and information);
input processing module 254, used to detect one or more user inputs or interactions from one of the one or more input devices 232 and to translate the detected inputs or interactions.
In some embodiments, the apparatus provided by the embodiments of this application can be implemented in software. Fig. 3 shows an artificial-intelligence-based object recognition apparatus 255 stored in the memory 250, which may be software in the form of programs and plug-ins, including the following software modules: an occlusion detection module 2551, an occluded binary image block acquisition module 2552, a binary mask query module 2553, a binary mask synthesis module 2554, a matching relationship determination module 2555, a binary mask dictionary construction module 2556, an object recognition model training module 2557, and an affine transformation module 2558. These modules are logical, and can be arbitrarily combined or further split depending on the functions implemented; the functions of each module are described below.
In other embodiments, the artificial-intelligence-based object recognition apparatus provided by the embodiments of this application can be implemented in hardware. As an example, it may be a processor in the form of a hardware decoding processor programmed to execute the artificial-intelligence-based object recognition method provided by the embodiments of this application; for example, the processor in the form of a hardware decoding processor may use one or more application-specific integrated circuits (ASIC), DSPs, programmable logic devices (PLD), complex programmable logic devices (CPLD), field-programmable gate arrays (FPGA), or other electronic components.
The artificial-intelligence-based object recognition method provided by the embodiments of this application is described below in conjunction with exemplary applications and implementations of the server provided by the embodiments of this application.
The method is described in two stages: the first part is the model training stage, and the second part is the recognition stage that uses the model.
The model training stage is described first. Referring to Fig. 4, Fig. 4 is a schematic flowchart of the artificial-intelligence-based object recognition method provided by an embodiment of this application, described with reference to steps 101-104 shown in Fig. 4; the steps of the following method can be implemented on any of the aforementioned types of electronic devices (e.g., a terminal or a server).
In step 101, a training sample set composed of object image sample pairs for different position numbers is constructed based on an object image database, where an object image sample pair includes an object image sample and an occlusion-processed object image sample.
The object here may be a person, an animal, or an item. For people, occlusion recognition can be based on an object recognition model for face recognition; for animals, it can be based on an object recognition model for animal face recognition, identifying the breed of an animal or the categories of different animals; for items, it can be based on an object recognition model dedicated to recognizing a certain class of items.
In some embodiments, before constructing the binary mask dictionary, the training sample set can be constructed first, on the basis of the object image database. Constructing the training sample set of object image sample pairs for different position numbers in step 101 can be implemented as follows: obtain object image samples from the object image database, and uniformly segment each object image sample to obtain the position numbers corresponding to the different object image sample blocks; apply occlusion processing to the object image sample block corresponding to a position number; construct the object image sample and the occlusion-processed object image sample into an object image sample pair for that position number; and form the training sample set from the object image sample pairs of different position numbers.
In some embodiments, the object image sample is uniformly segmented, for example into 12 object image sample blocks, and the 12 blocks are given corresponding position numbers, one per block. Occlusion processing is applied to the block corresponding to a given position number; for example, for position number 11, occlusion is applied on the object image sample block corresponding to position number 11, yielding an object image sample pair that includes the original, unoccluded object image sample and the object image sample after occlusion of the corresponding block. For the same position number, multiple object image sample pairs can be constructed; although the objects in different sample pairs differ, they are all occluded at the same position.
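The sample-pair construction described above can be sketched in plain Python; the function names (`occlude_block`, `make_pair`), the grid size, and the fill value are illustrative assumptions, not part of the patent:

```python
def occlude_block(image, block_id, grid=5, fill=0):
    """Return a copy of `image` (an H x W list of lists) with the grid cell
    numbered `block_id` (row-major, 0-based) overwritten by `fill`,
    simulating occlusion of that block."""
    h, w = len(image), len(image[0])
    bh, bw = h // grid, w // grid
    row, col = divmod(block_id, grid)
    out = [list(r) for r in image]
    for y in range(row * bh, (row + 1) * bh):
        for x in range(col * bw, (col + 1) * bw):
            out[y][x] = fill
    return out

def make_pair(image, block_id):
    # (clean, occluded) object image sample pair for one position number
    return image, occlude_block(image, block_id)
```

Repeating `make_pair` over many identities for the same `block_id` yields the sample pairs for one position number.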
In step 102, a pairwise differential siamese network model is constructed based on the basic object recognition model and the mask generation model.
In some embodiments, a pairwise differential siamese network model is constructed based on the basic object recognition model and the mask generation model; the pairwise differential siamese network model is then trained on the training sample set, and a binary mask dictionary is constructed based on the trained model, where the indexes of the dictionary are occluded binary image blocks and the index items are binary masks. Here, the pairwise differential siamese network model consists of two identical basic object recognition models whose structural basis is a convolutional neural network. The absolute value of the difference between the features extracted by the two basic object recognition models serves as an attention mechanism: the mask generation model processes this absolute difference and in essence attends to the feature elements affected by the occlusion. The mask generation model is composed of common neural network units, including batch normalization layers, convolutional layers, and so on. Training the pairwise differential siamese network is in effect training the mask generation model: the basic object recognition model within it is a trained general model capable of object recognition, its parameters are fixed during training, and only the parameters of the mask generation model are updated.
In step 103, the pairwise differential siamese network model is trained based on the training sample set.
In some embodiments, training the pairwise differential siamese network model on the training sample set in step 103 is implemented as follows: initialize the mask generation model in the pairwise differential siamese network model, and initialize a loss function involving the input samples, input sample features, classification probability, and mask generation model parameters; in each training iteration of the pairwise differential siamese network model, perform the following: take the object image sample pairs in the training sample set as input samples, and extract their features through the pairwise differential siamese network model to obtain the input sample features; classify the occlusion-processed object image samples through the object recognition model to obtain the classification probability; substitute the input samples, input sample features, and classification probability into the loss function to determine the pairwise differential siamese network model parameters at which the loss function attains its minimum; and update the pairwise differential siamese network model according to the determined mask generation model parameters.
In some embodiments, the training sample set includes object image sample pairs for different position numbers. The pairwise differential siamese network model is trained with the sample pairs for a given position number, and the mask generation model obtained after training is the mask generation model for that position number; the mask generation model thereby finds, when the image block of a given position number is occluded, the elements of the object image's convolutional features that are strongly affected by the occlusion and should therefore be suppressed.
In some embodiments, the mask generation model is initialized, along with the input samples, the output results, and the mask generation model parameters in the loss function, where the output results include the input sample features and the classification probability. The input samples are the object image sample pairs in the training sample set; when training the mask generation model for one position number, the sample pairs for that position number are used. Features are extracted from the input samples through the pairwise differential siamese network model to obtain the input sample features, which here are the features obtained after processing by the mask generation model.
In some embodiments, taking the object image sample pairs in the training sample set as input samples and extracting their features through the pairwise differential siamese network model to obtain the input sample features can be implemented as follows: take the object image sample pairs for the same position number as input samples, and extract features from them through the convolutional layers of the pairwise differential siamese network model, to obtain a first feature and a second feature corresponding to the object image sample and the occlusion-processed object image sample, respectively; perform mask generation on the absolute value of the difference between the first feature and the second feature through the mask generation model, to obtain a mask for that position number; and multiply the first feature and the second feature by the mask, respectively, to obtain the input sample features.
The mask generation model here is composed of common neural network units, including normalization layers, convolutional layers, and so on, mapping the features obtained by the convolutional layers into the range [0, 1]. The mask generation model produces a mask with the same size as the convolutional feature; that is, the mask for a position number is a three-dimensional tensor of the same size as the first and second features. Each element of the mask is multiplied with the corresponding element of the first and second features to obtain new convolutional features as the input sample features.
In some embodiments, the occlusion-processed object image samples are classified by the object recognition model to obtain the classification probability, which here can be the probability of correct classification. The probability of correct classification and the input sample features are used to correct and update the mask generation model; that is, the input samples, input sample features, and classification probability are substituted into the loss function to determine the pairwise differential siamese network model parameters at which the loss function attains its minimum, and the pairwise differential siamese network model is updated according to the determined mask generation model parameters.
In step 104, a binary mask dictionary is constructed based on the trained pairwise differential siamese network model, where the indexes of the binary mask dictionary are occluded binary image blocks and the index items are binary masks.
In some embodiments, constructing the binary mask dictionary from the trained pairwise differential siamese network model in step 104 can be implemented as follows: extract masks for the object image sample pairs with the same position number through the pairwise differential siamese network model, to obtain a mask set for that position number; normalize each mask in the mask set and compute the mean of the normalized results, to determine the average mask for the position number; and use the occluded binary image block corresponding to the position number as an index of the binary mask dictionary, and binarize the average mask so that the generated binary mask serves as the corresponding index item.
In some embodiments, the trained pairwise differential siamese network extracts the mask set for one position number from the object image sample pairs in the training sample set; when the training sample set contains N sample pairs, the mask set includes N masks. Each of the N masks is normalized, and the mean of the normalized results determines the average mask for the position number; the occluded binary image block corresponding to the position number is used as the dictionary index, and the average mask is binarized so that the generated binary mask serves as the index item.
As an example, in the average mask, a smaller mask value means the corresponding convolutional feature element is suppressed more. For any position number, the convolutional feature elements corresponding to the smallest τ*K mask values of the average mask are regarded as the part corrupted by occlusion; τ is a real number in the range [0, 1] and can be 0.25; K is the total number of elements of the average mask for any position number, which is also the total number of elements of the top-level convolutional feature, K = C*H*W, where C is the number of channels, H the height, and W the width. Since analysis and experiments show that, even at the same spatial position of the convolutional feature, the feature element values of the different channels change quite differently under occlusion, the process of obtaining the dictionary index item M_j from the average mask $\bar{M}_j$ is performed over the K elements of the top-level convolutional feature (the element of each channel at each spatial position). The binarization is:

$$M_j[k]=\begin{cases}0, & \bar{M}_j[k]\in \min_{\tau K}(\bar{M}_j)\\ 1, & \text{else}\end{cases}\qquad(1)$$

where k denotes the k-th mask value and $\min_{\tau K}(\bar{M}_j)$ denotes the smallest τ*K mask values of the average mask.
In some embodiments, before performing object recognition, the following can also be performed: train, based on a training sample set built from the object image database, a basic object recognition model used to obtain the features of pre-stored object images and of images to be recognized; and train, based on the training sample set, an object recognition model used to determine the matching relationship between the image to be recognized and the pre-stored object image, where the object recognition model includes the basic object recognition model and a binary mask processing module.
In some embodiments, training the object recognition model used to determine the matching relationship between the image to be recognized and the pre-stored object image can be implemented as follows: initialize the fully connected layer of the object recognition model, and initialize a loss function involving the input samples, classification results, and the fully connected layer parameters of the object recognition model; in each training iteration of the object recognition model, perform the following: take the occlusion-processed object image samples in the training sample set and their corresponding binary masks in the binary mask dictionary as input samples, and classify the input samples through the object recognition model to obtain the classification results of the corresponding input samples; substitute the input samples and classification results into the loss function to determine the fully connected layer parameters at which the loss function attains its minimum; and update the object recognition model according to the determined fully connected layer parameters.
The recognition stage of the artificial-intelligence-based object recognition method provided by the embodiments of this application is described below.
Referring to Fig. 5A, Fig. 5A is an optional schematic flowchart of the artificial-intelligence-based object recognition method provided by an embodiment of this application, described with reference to steps 201-205 shown in Fig. 5A; the steps of the following method can be implemented on any of the aforementioned types of electronic devices (e.g., a terminal or a server).
In step 201, potential occluded regions of the object to be recognized in the image to be recognized are detected, to obtain a binary image characterizing the occluded and unoccluded regions of the object to be recognized.
Here, a potential occluded region means the object to be recognized may or may not be occluded. In the obtained binary image, 0 denotes an unoccluded pixel and 1 denotes an occluded pixel. Occlusion detection on the object to be recognized is performed through a fully convolutional neural network structure, trained on artificially synthesized occlusion data and self-annotated real occlusion data.
In step 202, occluded binary image blocks characterizing the occluded regions are obtained from the binary image.
Referring to Fig. 5B, based on Fig. 5A, obtaining the occluded binary image blocks characterizing the occluded regions from the binary image in step 202 can be implemented through the following steps 2021-2023.
In step 2021, the binary image is divided into multiple binary image blocks.
In step 2022, the ratio of occluded pixels in each binary image block obtained by the division is determined.
In step 2023, when the ratio of occluded pixels exceeds a ratio threshold, the corresponding binary image block is determined as an occluded binary image block characterizing the occluded region.
In some embodiments, the binary image is uniformly divided into multiple binary image blocks, for example into 25 blocks with 5 blocks per row and 5 per column, all of the same size; each binary image block carries its own position number, for example the block at the second position of the first row can be numbered 12, and the block at the fourth position of the third row can be numbered 34.
In some embodiments, occlusion judgment is performed on every binary image block to determine which of all the binary image blocks are occluded binary image blocks characterizing the occluded regions. Some blocks contain a few occluded pixels, but if the proportion of occluded pixels in a block is small, the block is not judged to be an occluded binary image block; blocks in which the proportion of occluded pixels exceeds the ratio threshold are judged to be occluded binary image blocks. That is, the ratio of occluded pixels in each block obtained by the division is determined first, and when the ratio exceeds the ratio threshold, the corresponding block is determined as an occluded binary image block characterizing the occluded region.
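The block-wise occlusion judgment of steps 2021-2023 can be sketched in plain Python; the function name, the 5x5 grid size, and the 0.5 ratio threshold are illustrative assumptions:

```python
def occluded_blocks(binary_image, grid=5, ratio_threshold=0.5):
    """Given a binary occlusion map (1 = occluded pixel), split it into a
    grid x grid layout of equal blocks and return the position numbers
    (row-major, 0-based) of blocks whose occluded-pixel ratio exceeds
    ratio_threshold."""
    h, w = len(binary_image), len(binary_image[0])
    bh, bw = h // grid, w // grid
    result = []
    for row in range(grid):
        for col in range(grid):
            block = [binary_image[y][x]
                     for y in range(row * bh, (row + 1) * bh)
                     for x in range(col * bw, (col + 1) * bw)]
            if sum(block) / len(block) > ratio_threshold:
                result.append(row * grid + col)
    return result
```

A block with only a few stray occluded pixels stays below the threshold and is not reported.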
In step 203, based on the obtained occluded binary image blocks, the mapping between occluded binary image blocks and binary masks included in the binary mask dictionary is queried, to obtain the binary mask corresponding to each occluded binary image block.
Referring to Fig. 5C, based on Fig. 5A, querying the mapping between occluded binary image blocks and binary masks included in the binary mask dictionary in step 203 can be implemented through the following steps 2031-2032.
In step 2031, the position number corresponding to each occluded binary image block is obtained.
In step 2032, based on the position number corresponding to each occluded binary image block, the mapping between the position numbers of occluded binary image blocks and binary masks is queried in the binary mask dictionary.
In some embodiments, the position number here is the position number described above. The binary mask dictionary records the mapping between each occluded binary image block and the binary mask M; since occluded binary image blocks correspond one-to-one with their position numbers, the binary mask corresponding to an occluded binary image block can be obtained by querying the mapping between the position numbers of the occluded binary image blocks and the binary masks. The binary mask characterizes the convolutional feature elements affected by the corresponding occluded binary image block: the strongly affected convolutional feature elements are suppressed by the 0 values in the binary mask, while the less affected ones are retained by the 1 values.
In step 204, the binary masks queried for each occluded binary image block are synthesized, to obtain the binary mask corresponding to the binary image.
In some embodiments, the binary masks queried for each occluded binary image block are synthesized; the synthesis here can be a logical OR operation. For example, for the occluded binary image blocks numbered 12, 13, and 14, the queried binary masks are M_12, M_13, and M_14 respectively, and the queried binary masks are combined as follows:

$$M = M_{12} \vee M_{13} \vee M_{14}$$

where $\vee$ denotes the element-wise logical OR operation.
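A minimal sketch of the lookup-and-synthesis step, with masks flattened to 1-D lists for simplicity; the function name is an illustrative assumption, and the element-wise logical OR follows the formula as written above (depending on the polarity convention — whether 0 or 1 marks the suppressed elements — an element-wise AND may be intended instead):

```python
def synthesize_mask(mask_dict, block_ids):
    """Look up the binary mask for each occluded block's position number in
    mask_dict, then combine them element-wise with logical OR, per
    M = M_12 v M_13 v M_14."""
    masks = [mask_dict[b] for b in block_ids]
    return [int(any(vals)) for vals in zip(*masks)]
```

For a single occluded block, the synthesized mask is just that block's dictionary item.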
In step 205, the matching relationship between the image to be recognized and the pre-stored object image is determined based on the binary mask corresponding to the binary image, the features of the pre-stored object image, and the features of the image to be recognized.
Referring to Fig. 5D, based on Fig. 5A, determining the matching relationship in step 205 can be implemented through the following steps 2051-2053.
In step 2051, the features of the pre-stored object image and the features of the image to be recognized are determined.
In step 2052, the binary mask is multiplied with the features of the pre-stored object image and with the features of the image to be recognized, respectively, to obtain pre-stored features corresponding to the pre-stored object image and to-be-recognized features corresponding to the image to be recognized.
In step 2053, the similarity between the pre-stored features and the to-be-recognized features is determined, and when the similarity exceeds a similarity threshold, it is determined that the object in the image to be recognized and the object in the pre-stored object image belong to the same category.
In some embodiments, the basic object recognition model extracts features from the pre-stored object image and the image to be recognized, respectively, determining the pre-stored features of the pre-stored object image and the to-be-recognized features of the image to be recognized; the binary mask processing module in the object recognition model multiplies the binary mask with the pre-stored features and the to-be-recognized features, respectively, to obtain the masked pre-stored features of the pre-stored object image and the masked to-be-recognized features of the image to be recognized.
In some embodiments, the cosine similarity between the pre-stored features and the to-be-recognized features is computed. Since, in the feature extraction stage, the features extracted from the pre-stored clean, unoccluded object image are also multiplied by the binary mask, the similarity computation is guaranteed to be based on the unoccluded part of the object in the image to be recognized and on the corresponding part of the clean, unoccluded object image. For example, for a face whose eye region is occluded, the similarity is computed entirely from the parts other than the eyes; even for the pre-stored clean face image, the final extracted features still correspond to the parts other than the eyes. This ensures that the image to be recognized and the pre-stored object image retain a comparable amount of information. When the similarity exceeds the similarity threshold, it is determined that the object in the image to be recognized and the object in the pre-stored object image belong to the same category.
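The masked similarity computation of steps 2051-2053 can be sketched with flattened feature vectors; the function names and the 0.5 threshold are illustrative assumptions:

```python
import math

def masked_cosine(mask, feat_probe, feat_gallery):
    """Multiply the binary mask into both the probe (image to be recognized)
    and gallery (pre-stored) feature vectors, then return their cosine
    similarity, so only unoccluded feature elements contribute."""
    p = [m * f for m, f in zip(mask, feat_probe)]
    g = [m * f for m, f in zip(mask, feat_gallery)]
    dot = sum(a * b for a, b in zip(p, g))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in g))
    return dot / norm if norm else 0.0

def same_category(mask, feat_probe, feat_gallery, threshold=0.5):
    # same category when the masked similarity exceeds the threshold
    return masked_cosine(mask, feat_probe, feat_gallery) > threshold
```

Note how a dimension zeroed by the mask no longer influences the decision, even when the two raw features disagree strongly there.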
In some embodiments, before step 201 is performed, the following can also be performed: detect the key points of the object to be recognized in the image to be recognized and determine their coordinate positions; according to the coordinate positions of the key points, apply an affine transformation to the object to be recognized, to align the key points to a standard template position consistent with the pre-stored object images. In the affine transformation, the key points of the object to be recognized are transformed to the standard template position, reducing recognition errors caused by differences in the object's position and pose.
An exemplary application of the embodiments of this application in a practical application scenario is described below.
The object recognition method in this application can be applied in any face recognition scenario, such as attendance systems, surveillance person-finding systems, security check systems, and unlocking of mobile phones and computers. The user only needs to register one frontal, unoccluded face image in the system database at system initialization as the pre-stored face image; at recognition time, only the user's image to be recognized needs to be captured.
Referring to Fig. 6, Fig. 6 is a schematic flowchart of object recognition performed by the artificial-intelligence-based object recognition system provided by an embodiment of this application.
The preprocessing module performs face detection and alignment: the input face image to be recognized 601 is preprocessed. During preprocessing, the face in the input image is first detected, and the coordinate positions of the left eye, right eye, nose, left mouth corner, and right mouth corner are located; then, according to the coordinate positions of the five key points, the face in the input image is aligned to a unified template position by an affine transformation and cropped to a fixed size, yielding the aligned face image to be recognized 602.
The occlusion detection module detects occluded regions: occlusion detection is performed on the face image to be recognized, detecting the partially occluded regions on it and outputting a binary image 603 of the same size as the face image, where 0 denotes an unoccluded pixel and 1 an occluded pixel.
The face key point detection used here is implemented with a multi-task convolutional neural network; the occlusion detection is implemented with a fully convolutional network structure, and the training samples include artificially synthesized occlusion data and self-annotated real occlusion data.
The mask generation module generates the binary mask M: this mask generation module is the top-level convolutional feature mask generation module; it receives the occlusion detection result of the face image to be recognized and synthesizes the binary mask M of the face image from the binary mask dictionary.
The recognition module extracts features and performs face identification or authentication: the feature extraction module in the recognition module uses the basic convolutional neural network and the binary mask of the image to be recognized to extract the features of the aligned face image to be recognized and of the pre-stored face images in the system database, respectively; the classification module in the recognition module recognizes the face image to be recognized according to the obtained features of the face image to be recognized and of the pre-stored face images. In the face authentication scenario, the output indicates whether the face image to be recognized and the pre-stored face image in the system database are the same person; in the face identification scenario, the output indicates the category of the face image in the system database to which the face image to be recognized belongs, i.e., whether the category of the face to be recognized and a pre-stored face belong to the same category.
The process of building the aforementioned binary mask dictionary is described in detail below. The indexes in the dictionary are the occlusion blocks of the face image, and the index items are binary masks. The dictionary is generated for one basic face recognition model, for example a trunk convolutional neural network (Trunk CNN); this Trunk CNN is also the model used by the recognition module.
Building the binary mask dictionary takes two steps: training the mask generation models, and building a binarized mask dictionary (the binary mask dictionary) based on the trained mask generation models.
In the training of the mask generation models, first, according to the face alignment template, the face region is divided into B*B non-overlapping regions, denoted $\{b_j\}_{j=1}^{B\times B}$. Referring to Fig. 7, Fig. 7 is a schematic diagram of face image segmentation in the artificial-intelligence-based object recognition method provided by an embodiment of this application; for example, the face image is divided into 5*5 blocks, and one mask generator (MG) is trained for each face image block. The MG here is the aforementioned mask generation model; the purpose of each MG is to find, when a certain block b_j of the face is occluded, the elements of the top-level convolutional feature of that face image that are strongly affected by the occlusion and whose response values should therefore be weakened. The embodiments of this application provide a pairwise differential siamese network (PDSN) structure to learn each MG.
Referring to Fig. 8, Fig. 8 is a schematic structural diagram of the pairwise differential siamese network in the artificial-intelligence-based object recognition method provided by an embodiment of this application. The pairwise differential siamese network is composed of two identical Trunk CNNs. When training the j-th MG, the overall input to the PDSN is the set of paired face images $\{(x_i, x_i^j)\}_{i=1}^{N}$, where $x_i$ denotes a clean, unoccluded face, $x_i^j$ denotes an occluded face, and N denotes the number of face pairs. $x_i^j$ belongs to the same face category as $x_i$; the only difference is that block $b_j$ on the face of $x_i^j$ is occluded. The paired face images pass through the shared Trunk CNN to extract their respective top-level convolutional features $f(x_i)$ and $f(x_i^j)$, and the absolute value of the difference of the two top-level convolutional features, $|f(x_i) - f(x_i^j)|$, serves as the input to the MG; this differential input acts as an attention mechanism, making the MG focus on the feature elements changed by the occlusion.
The core module of the pairwise differential siamese network is the MG, which is composed of common neural network units, including batch normalization (BN) layers, convolutional layers, etc.; finally, a logistic (sigmoid) activation function maps the MG's output values into the range [0, 1]. The MG outputs a mask $M_i^j$ of the same size as the top-level convolutional feature, i.e., a three-dimensional tensor with the same dimensions as the top-level convolutional feature; each element of the mask is multiplied with the corresponding element of the original top-level convolutional feature, yielding the new convolutional features $f(x_i)\odot M_i^j$ and $f(x_i^j)\odot M_i^j$.
The convolutional feature f(·) here refers to the output of a convolutional layer of a convolutional neural network, usually a three-dimensional tensor with C channels, height H, and width W, i.e., f(·) ∈ R^{C×H×W}; a convolutional feature element here refers to the tensor element with coordinates (c, h, w); and the feature elements at the same spatial position of a convolutional feature refer to the elements of the C channels with the same h- and w-coordinates.
The loss function during training is a joint combination of two losses: the classification loss l_cls and the contrastive loss l_diff. The purpose of the classification loss is that the new feature obtained by multiplying the top-level convolutional feature of the occluded face by the mask, $f(x_i^j)\odot M_i^j$, should raise the recognition rate of the Trunk CNN classifier, making the MG assign lower mask values to the feature elements that hinder recognition. The purpose of the contrastive loss is to bring the new feature of the occluded face, $f(x_i^j)\odot M_i^j$, as close as possible to the corresponding convolutional feature of the clean face, $f(x_i)\odot M_i^j$, making the MG assign lower mask values to the feature elements where the two differ greatly. Acting together, the two losses drive the MG to assign low mask values to the elements that differ greatly between the occluded-face and clean-face convolutional features and that affect recognition; these elements are exactly the occlusion-corrupted elements this scheme cares about. The loss function is therefore constructed as:

$$L=\frac{1}{N}\sum_{i=1}^{N}\Big[l_{cls}\big(F(f(x_i^j)\odot M_i^j),\,y_i\big)+\lambda\, l_{diff}\big(f(x_i^j)\odot M_i^j,\ f(x_i)\odot M_i^j\big)\Big]\qquad(2)$$

where $M_i^j$ denotes the output of the MG, F denotes the fully connected layer or average pooling layer after the top convolutional layer of the Trunk CNN, $l_{cls}$ is computed from the probability $p_{y_i}$ that the Trunk CNN classifies correctly, and λ balances the two losses.
In the embodiments of this application, the face region is divided into B*B non-overlapping regions, so a total of B*B MGs need to be trained. The Trunk CNN part of these MGs is identical with fixed parameters, and their training data come from the same database.
In building the binarized mask dictionary from the trained mask generation models, after the training stage of each MG is completed, the MG outputs identify, for each occluded block of a face image, the elements of the face image's top-level convolutional feature that are corrupted by the occlusion and whose response values should therefore be weakened; the weakened elements correspond to lower values in the MG output. The indexes of the binary mask dictionary are the face blocks b_j, and the index items are binary masks M_j; the mask M_j has the same size as the Trunk CNN's top-level convolutional feature, and the 0 values in M_j represent the feature elements that should be removed from recognition when face block b_j is occluded.
Referring to Fig. 9, Fig. 9 is a schematic flowchart of computing each index item M_j of the binary mask dictionary in the artificial-intelligence-based object recognition method provided by an embodiment of this application.
In step 901: multiple face image sample pairs are input into the trained PDSN described above, yielding a set of MG output masks $\{M_i^j\}_{i=1}^{N}$, where N denotes the number of sample pairs and j denotes the mask output by the MG for position number j; the face image sample pairs here can be the same as the training samples used when training the MG.
In step 902: each mask in the mask set generated in step 901 is normalized; for $M_i^j$, the corresponding normalization formula is:

$$\tilde{M}_i^j=\frac{M_i^j-\min(M_i^j)}{\max(M_i^j)-\min(M_i^j)}\qquad(3)$$

where max(·) is the maximum of the sample data and min(·) is the minimum of the sample data.
In step 903: the mean of these normalized masks is computed, yielding the average mask corresponding to the j-th MG:

$$\bar{M}_j=\frac{1}{N}\sum_{i=1}^{N}\tilde{M}_i^j\qquad(4)$$

In step 904: the average mask is binarized to obtain the binary dictionary index item; the binary dictionary index item here is the binary mask M_j.
In the average mask, a smaller mask value means the corresponding convolutional feature element is suppressed more. Accordingly, in the embodiments of this application the convolutional feature elements corresponding to the smallest τ*K mask values of the average mask are regarded as the part corrupted by occlusion (τ is a real number in the range [0, 1] and can be 0.25; K is the total number of mask elements, which is also the total number of elements of the top-level convolutional feature, K = C*H*W). The index item M_j of the binary mask dictionary is then obtained from the average mask $\bar{M}_j$ as:

$$M_j[k]=\begin{cases}0, & \bar{M}_j[k]\in \min_{\tau K}(\bar{M}_j)\\ 1, & \text{else}\end{cases}\qquad(5)$$

where k denotes the k-th mask value and $\min_{\tau K}(\bar{M}_j)$ denotes the smallest τ*K mask values of the average mask.
Following the flow of Fig. 9, a corresponding binary mask is generated for every MG, thereby building the binary mask dictionary mapping occluded face image blocks to binary masks:

$$D=\{b_j : M_j\}_{j=1}^{B\times B}\qquad(6)$$

The dictionary here is the occlusion block-mask dictionary.
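Steps 901-904 can be sketched for flattened masks as follows; the function name is an illustrative assumption, and equations (3)-(5) are applied element-wise:

```python
def build_dictionary_entry(masks, tau=0.25):
    """Min-max normalize each real-valued MG output mask, average the
    normalized masks, then binarize: the tau*K elements with the smallest
    average value get 0 (suppressed), all others get 1."""
    def norm(m):  # equation (3)
        lo, hi = min(m), max(m)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in m]
    k = len(masks[0])
    # equation (4): element-wise mean over the normalized masks
    avg = [sum(vals) / len(masks) for vals in zip(*(norm(m) for m in masks))]
    # equation (5): zero out the indices of the tau*K smallest average values
    smallest = sorted(range(k), key=lambda i: avg[i])[: int(tau * k)]
    entry = [1] * k
    for i in smallest:
        entry[i] = 0
    return entry
```

Running this once per face block b_j yields the index items M_j of the occlusion block-mask dictionary.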
The process of synthesizing the binary mask M of the face image to be recognized, based on the binary mask dictionary, is described below. Referring to Fig. 10, Fig. 10 is a schematic flowchart of synthesizing the binary mask M of the face image to be recognized in the artificial-intelligence-based object recognition method provided by an embodiment of this application.
In step 1001: the occluded face image blocks are determined from the occlusion detection result of the input face image to be recognized. The occlusion detection result is a binary image of the same size as the face image to be recognized, where 0 denotes an unoccluded pixel and 1 an occluded pixel; when the number of pixels with value 1 within a face image block exceeds half of the total number of pixels in that block, the block is determined to be an occluded face image block.
In step 1002: the index items of the occluded face image blocks are queried from the binary mask dictionary, and the binary mask M of the face to be recognized is synthesized; the index item here is M_j. Taking the face image shown in Fig. 6 as an example, when the face image is divided into 5*5 blocks and the occluded face blocks determined in step 1001 are b_12, b_13, b_14, then according to the binary mask dictionary built in the training stage, the binary mask corresponding to the face image to be recognized is:

$$M = M_{12} \vee M_{13} \vee M_{14}$$

where $\vee$ denotes the logical OR operation.
参见图11,图11是本申请实施例提供的基于人工智能的对象识别方法中特征提取的示意图。
特征提取阶段所使用的Trunk CNN与构建字典阶段的参数完全相同,结构上多了一个输入二值掩码M的分支,即在基础对象识别模型上多了一个输入二值掩码M的分支,为了使Trunk CNN顶层卷积层之后的全连接层适应二值化的掩码,通过任意遮挡的人脸样本及其二值掩码微调全连接层的参数,全连接层以前的所有参数保持不变,此微调阶段采用很小的学习率1e -4,可以完成6次训练,损失函数采用与训练Trunk CNN时相同的分类损失函数。
In practical applications, the database may directly store the top-level convolutional features of the face images. When recognizing a face image, the mask M is multiplied with the top-level convolutional feature of the image to be recognized and with the top-level convolutional features in the database, respectively, and the final feature vectors used for classification are then obtained through the fine-tuned fully connected layer or an average pooling layer of the Trunk CNN.
After the feature vectors are extracted, the cosine similarity between the feature vector $f_p$ of the face image to be recognized (the test face in FIG. 11) and the feature vector $f_{g_i}$ of each face image in the database (the database faces in FIG. 11) is computed:

$s(p, g_i) = \dfrac{f_p \cdot f_{g_i}}{\lVert f_p \rVert \, \lVert f_{g_i} \rVert}$

where $s(p, g_i)$ is the cosine similarity between $f_p$ and the feature vector $f_{g_i}$ of each face image in the database.
In the feature extraction stage, the features of the clean, unoccluded database faces are also multiplied by the mask M, which ensures that the similarity is computed from the unoccluded part of the face image to be recognized; that is, the features of the image to be recognized and of the database face images retain a similar amount of information.
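Applying the same mask M to both the probe and the gallery features before the cosine similarity can be sketched as below. This is a minimal NumPy version operating directly on flattened features; in the full pipeline described above, the masked top-level convolutional features would first pass through the fine-tuned fully connected layer:

```python
import numpy as np

def masked_cosine_similarity(M, probe_feat, gallery_feat):
    """Cosine similarity s(p, g) between mask-filtered feature vectors.

    M, probe_feat, gallery_feat: arrays of identical shape.
    Zeros in M remove occlusion-corrupted elements from BOTH features,
    so the score depends only on the unoccluded part of the probe.
    """
    f_p = (M * probe_feat).ravel()
    f_g = (M * gallery_feat).ravel()
    return float(f_p @ f_g / (np.linalg.norm(f_p) * np.linalg.norm(f_g)))
```

Masking both sides is what keeps the two feature vectors comparable: an element suppressed in the probe contributes nothing from the gallery side either.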
For face identification scenarios, where the face category in the database to which the image to be recognized belongs needs to be determined, a nearest-neighbor classifier may be used: the category of the database face image with the highest similarity to the test face is the category of the face to be recognized. Other common classifiers may also be used.
For face verification scenarios, where it needs to be determined whether the face image to be recognized and a database face image belong to the same person, threshold-based judgment may be used: when the similarity between the two exceeds a certain threshold, they are considered the same person, and otherwise not. A dedicated classifier for face verification may also be trained on the feature vectors.
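Both decision rules described above, nearest-neighbor identification and threshold-based verification, are thin wrappers around the similarity scores; a sketch, where the threshold value is an arbitrary illustration and would in practice be tuned on a development set:

```python
def identify(similarities):
    """Face identification: return the gallery index with the highest
    cosine similarity (nearest-neighbor classifier)."""
    return max(range(len(similarities)), key=lambda i: similarities[i])

def verify(similarity, threshold=0.5):
    """Face verification: same person iff the similarity exceeds the
    threshold. The threshold here is illustrative, not from the patent."""
    return similarity > threshold
```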
Referring to FIG. 12, FIG. 12 is a schematic diagram of model construction for the artificial-intelligence-based object recognition method provided by the embodiments of this application.
In the system architecture provided by the embodiments of this application, in addition to fine-tuning the fully connected layer parameters of the feature extraction module and building the binary mask dictionary, a basic face recognition model can also be trained. The source database of the training samples is not restricted: a common public face database or the user's own private database may be used, as long as the preprocessing of the training data is the same as the preprocessing described above. The model training process of the object recognition method provided by the embodiments of this application is as follows. In step 1201, a basic object recognition model is trained with a face database. In step 1202, with the parameters of the basic object recognition model fixed, B*B pairwise differential siamese network models are trained with (clean, occluded) face sample pairs, and the binarized occluded-block-to-mask dictionary is built. In step 1203, with the parameters before the fully connected layer of the Trunk CNN fixed, the fully connected layer parameters of the Trunk CNN are fine-tuned with arbitrarily occluded faces and their corresponding masks.
The following continues to describe an exemplary structure in which the artificial-intelligence-based object recognition apparatus 255 provided by the embodiments of this application is implemented as software modules. In some embodiments, as shown in FIG. 3, the software modules stored in the artificial-intelligence-based object recognition apparatus 255 of the memory 250 may include: an occlusion detection module 2551, configured to detect potential occluded regions of an object to be recognized in an image to be recognized, to obtain a binary image characterizing the occluded regions and unoccluded regions of the object to be recognized; an occluded binary image block acquisition module 2552, configured to obtain, from the binary image, occluded binary image blocks characterizing the occluded regions; a binary mask query module 2553, configured to query, based on the occluded binary image blocks, the mapping relationship between occluded binary image blocks and binary masks included in the binary mask dictionary, to obtain the binary masks corresponding to the occluded binary image blocks; a binary mask synthesis module 2554, configured to synthesize the binary masks found for each occluded binary image block, to obtain the binary mask corresponding to the binary image; and a matching relationship determination module 2555, configured to determine the matching relationship between the image to be recognized and a pre-stored object image based on the binary mask corresponding to the binary image, the features of the pre-stored object image, and the features of the image to be recognized.
In some embodiments, the occluded binary image block acquisition module 2552 is further configured to: divide the binary image into a plurality of binary image blocks; determine the proportion of occluded pixels in each binary image block obtained by the division; and when the proportion of occluded pixels exceeds a proportion threshold, determine the binary image block as an occluded binary image block characterizing the occluded region.
In some embodiments, the binary mask query module 2553 is further configured to: obtain the position numbers corresponding to the occluded binary image blocks; and query, in the binary mask dictionary based on the position numbers corresponding to the occluded binary image blocks, the mapping relationship between the position numbers of occluded binary image blocks and binary masks.
In some embodiments, the matching relationship determination module 2555 is further configured to: determine the features of the pre-stored object image and the features of the image to be recognized; multiply the binary mask with the features of the pre-stored object image and with the features of the image to be recognized, respectively, to obtain pre-stored features corresponding to the pre-stored object image and features to be recognized corresponding to the image to be recognized; and determine the similarity between the pre-stored features and the features to be recognized, and when the similarity exceeds a similarity threshold, determine that the object included in the image to be recognized and the object included in the pre-stored object image belong to the same category.
In some embodiments, the artificial-intelligence-based object recognition apparatus 255 further includes a binary mask dictionary construction module 2556, configured to: build, based on an object image database, a training sample set composed of object image sample pairs for different position numbers, where an object image sample pair includes an object image sample and an occlusion-processed object image sample; construct a pairwise differential siamese network model based on the basic object recognition model and a mask generation model; train the pairwise differential siamese network model based on the training sample set; and build the binary mask dictionary based on the trained pairwise differential siamese network model, where the index of the binary mask dictionary is an occluded binary image block and the index entry of the binary mask dictionary is a binary mask.
In some embodiments, the binary mask dictionary construction module 2556 is further configured to: obtain object image samples in the object image database, and divide the object image samples uniformly to obtain the position numbers corresponding to different object image sample blocks; perform, for a position number, occlusion processing on the corresponding object image sample block of the object image sample; construct the object image sample and the occlusion-processed object image sample as an object image sample pair for the position number; and form the training sample set based on the object image sample pairs for different position numbers.
In some embodiments, the binary mask dictionary construction module 2556 is further configured to: initialize the mask generation model in the pairwise differential siamese network model, and initialize a loss function including the input samples, input sample features, classification probabilities, and mask generation model parameters; and perform the following processing in each training iteration of the pairwise differential siamese network model: take the object image sample pairs included in the training sample set as input samples, and perform feature extraction on the input samples through the pairwise differential siamese network model to obtain the input sample features; classify the occlusion-processed object image samples through the object recognition model to obtain the classification probabilities; substitute the input samples, input sample features, and classification probabilities into the loss function, to determine the pairwise differential siamese network model parameters at which the loss function attains its minimum; and update the pairwise differential siamese network model according to the determined mask generation model parameters.
In some embodiments, the binary mask dictionary construction module 2556 is further configured to: take the object image sample pairs for the same position number in the training sample set as the input samples, and perform feature extraction on the input samples through the convolutional layers of the pairwise differential siamese network model, to obtain a first feature and a second feature corresponding to the object image sample and the occlusion-processed object image sample, respectively; perform mask generation processing on the absolute value of the difference between the first feature and the second feature through the mask generation model of the pairwise differential siamese network model, to obtain the mask for the position number; and multiply the mask with the first feature and the second feature, respectively, to obtain the input sample features.
In some embodiments, the binary mask dictionary construction module 2556 is further configured to: perform mask extraction on the object image sample pairs for the same position number through the pairwise differential siamese network model, to obtain a mask set corresponding to the position number; normalize each mask in the mask set, and determine the average mask corresponding to the position number; and take the occluded binary image block corresponding to the position number as the index of the binary mask dictionary, and binarize the average mask to generate the binary mask as the index entry of the binary mask dictionary.
In some embodiments, the artificial-intelligence-based object recognition apparatus 255 further includes an object recognition model training module 2557, configured to: train, based on a training sample set formed from the object image database, a basic object recognition model for obtaining the features of the pre-stored object image and the features of the image to be recognized; and train, based on the training sample set, an object recognition model for determining the matching relationship between the image to be recognized and the pre-stored object image, where the object recognition model includes the basic object recognition model and a binary mask processing module.
In some embodiments, the object recognition model training module 2557 is further configured to: initialize the fully connected layer of the object recognition model, and initialize a loss function including the input samples, classification results, and fully connected layer parameters of the object recognition model; and perform the following processing in each training iteration of the object recognition model: determine the occlusion-processed object image samples included in the training sample set and their corresponding binary masks in the binary mask dictionary as input samples, and classify the input samples through the object recognition model to obtain classification results corresponding to the input samples; substitute the input samples and classification results into the loss function, to determine the fully connected layer parameters of the object recognition model at which the loss function attains its minimum; and update the object recognition model according to the determined fully connected layer parameters.
In some embodiments, the artificial-intelligence-based object recognition apparatus 255 further includes an affine transformation module 2558, configured to: detect key points of the object to be recognized in the image to be recognized, and determine the coordinate positions of the key points; and perform an affine transformation on the object to be recognized according to the coordinate positions of the key points, so as to align the key points to standard template positions consistent with the pre-stored object image.
An embodiment of this application provides a computer-readable storage medium storing executable instructions which, when executed by a processor, cause the processor to perform the artificial-intelligence-based object recognition method provided by the embodiments of this application, for example, the artificial-intelligence-based object recognition methods shown in FIG. 4 and FIGS. 5A-5D.
In some embodiments, the computer-readable storage medium may be a memory such as FRAM, ROM, PROM, EPROM, EEPROM, flash memory, magnetic surface memory, optical disc, or CD-ROM; it may also be any of various devices including one of the above memories or any combination thereof.
In some embodiments, the executable instructions may take the form of a program, software, a software module, a script, or code, written in any form of programming language (including compiled or interpreted languages, or declarative or procedural languages), and may be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
As an example, the executable instructions may, but need not, correspond to a file in a file system, and may be stored as part of a file that holds other programs or data, for example, in one or more scripts in a Hypertext Markup Language (HTML) document, in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files storing one or more modules, subroutines, or code portions).
As an example, the executable instructions may be deployed to be executed on one computing device, on multiple computing devices located at one site, or on multiple computing devices distributed across multiple sites and interconnected by a communication network.
In summary, through the embodiments of this application, the performance of recognizing unoccluded objects is preserved when the object to be recognized is not occluded; meanwhile, when the object to be recognized is occluded, the influence of the occluded region on the feature elements of the object to be recognized is suppressed, so that the accuracy of recognizing occluded objects is greatly improved. Its test performance on both real-occlusion and synthetic-occlusion databases exceeds that of the schemes in the related art.
The foregoing descriptions are merely embodiments of this application and are not intended to limit the protection scope of this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and scope of this application shall fall within the protection scope of this application.
Industrial Applicability
In the embodiments of this application, an electronic device distinguishes occluded regions from unoccluded regions in an image to be recognized and obtains a binary mask of the occluded regions in the image, so that image recognition is performed based on the binary mask, the image to be recognized, and pre-stored images. When the object to be recognized is occluded, the influence of the occluded region on the feature elements of the object is thereby suppressed, achieving the technical effect of greatly improving the accuracy of recognizing occluded objects.

Claims (15)

  1. An artificial-intelligence-based object recognition method, the method being performed by an electronic device, the method comprising:
    detecting potential occluded regions of an object to be recognized in an image to be recognized, to obtain a binary image characterizing occluded regions and unoccluded regions of the object to be recognized;
    obtaining, from the binary image, occluded binary image blocks characterizing the occluded regions;
    querying, based on the occluded binary image blocks, a mapping relationship between occluded binary image blocks and binary masks comprised in a binary mask dictionary, to obtain binary masks corresponding to the occluded binary image blocks;
    synthesizing the binary masks found for each of the occluded binary image blocks, to obtain a binary mask corresponding to the binary image; and
    determining a matching relationship between the image to be recognized and a pre-stored object image based on the binary mask corresponding to the binary image, features of the pre-stored object image, and features of the image to be recognized.
  2. The method according to claim 1, wherein the obtaining, from the binary image, occluded binary image blocks characterizing the occluded regions comprises:
    dividing the binary image into a plurality of binary image blocks;
    determining a proportion of occluded pixels in each binary image block obtained by the division; and
    when the proportion of occluded pixels exceeds a proportion threshold, determining the corresponding binary image block as an occluded binary image block characterizing the occluded regions.
  3. The method according to claim 1, wherein the querying a mapping relationship between occluded binary image blocks and binary masks comprised in a binary mask dictionary comprises:
    obtaining position numbers corresponding to the occluded binary image blocks; and
    querying, in the binary mask dictionary based on the position numbers corresponding to the occluded binary image blocks, a mapping relationship between the position numbers of the occluded binary image blocks and binary masks.
  4. The method according to claim 1, wherein the determining a matching relationship between the image to be recognized and the pre-stored object image based on the binary mask corresponding to the binary image, features of the pre-stored object image, and features of the image to be recognized comprises:
    determining the features of the pre-stored object image and the features of the image to be recognized;
    multiplying the binary mask with the features of the pre-stored object image and with the features of the image to be recognized, respectively, to obtain pre-stored features corresponding to the pre-stored object image and features to be recognized corresponding to the image to be recognized; and
    determining a similarity between the pre-stored features and the features to be recognized, and when the similarity exceeds a similarity threshold, determining that the object comprised in the image to be recognized and the object comprised in the pre-stored object image belong to the same category.
  5. The method according to claim 1, wherein before the querying the binary mask dictionary, the method further comprises:
    building, based on an object image database, a training sample set composed of object image sample pairs for different position numbers,
    wherein an object image sample pair comprises an object image sample and an occlusion-processed object image sample;
    constructing a pairwise differential siamese network model based on a basic object recognition model and a mask generation model;
    training the pairwise differential siamese network model based on the training sample set; and
    building the binary mask dictionary based on the trained pairwise differential siamese network model,
    wherein an index of the binary mask dictionary is an occluded binary image block, and an index entry of the binary mask dictionary is a binary mask.
  6. The method according to claim 5, wherein the building, based on an object image database, a training sample set composed of object image sample pairs for different position numbers comprises:
    obtaining object image samples in the object image database, and dividing the object image samples uniformly to obtain position numbers corresponding to different object image sample blocks;
    performing occlusion processing on the object image sample block corresponding to a position number in an object image sample;
    constructing the object image sample and the occlusion-processed object image sample as an object image sample pair for the position number; and
    forming the training sample set based on object image sample pairs for different position numbers.
  7. The method according to claim 5, wherein the training the pairwise differential siamese network model based on the training sample set comprises:
    initializing the mask generation model in the pairwise differential siamese network model, and initializing a loss function comprising input samples, input sample features, classification probabilities, and mask generation model parameters; and
    performing the following processing in each training iteration of the pairwise differential siamese network model:
    taking the object image sample pairs comprised in the training sample set as input samples, and performing feature extraction on the input samples through the pairwise differential siamese network model, to obtain the input sample features;
    classifying the occlusion-processed object image samples through the object recognition model, to obtain the classification probabilities;
    substituting the input samples, the input sample features, and the classification probabilities into the loss function, to determine pairwise differential siamese network model parameters corresponding to a minimum of the loss function; and
    updating the pairwise differential siamese network model according to the determined mask generation model parameters.
  8. The method according to claim 7, wherein the taking the object image sample pairs comprised in the training sample set as input samples, and performing feature extraction on the input samples through the pairwise differential siamese network model, to obtain the input sample features comprises:
    taking object image sample pairs for a same position number in the training sample set as the input samples, and performing feature extraction on the input samples through convolutional layers of the pairwise differential siamese network model, to obtain a first feature and a second feature corresponding to the object image sample and the occlusion-processed object image sample, respectively;
    performing mask generation processing on an absolute value of a difference between the first feature and the second feature through the mask generation model of the pairwise differential siamese network model, to obtain a mask for the position number; and
    multiplying the mask with the first feature and the second feature, respectively, to obtain the input sample features.
  9. The method according to claim 5, wherein the building the binary mask dictionary based on the trained pairwise differential siamese network model comprises:
    performing mask extraction on object image sample pairs for a same position number through the pairwise differential siamese network model, to obtain a mask set corresponding to the position number;
    normalizing each mask in the mask set, and computing an average based on the normalization result of each mask, to determine an average mask corresponding to the position number; and
    taking the occluded binary image block corresponding to the position number as an index of the binary mask dictionary, and binarizing the average mask to take the generated binary mask as an index entry of the binary mask dictionary.
  10. The method according to claim 1, wherein the method further comprises:
    training, based on a training sample set formed from an object image database, a basic object recognition model for obtaining the features of the pre-stored object image and the features of the image to be recognized; and
    training, based on the training sample set, an object recognition model for determining the matching relationship between the image to be recognized and the pre-stored object image,
    wherein the object recognition model comprises the basic object recognition model and a binary mask processing module.
  11. The method according to claim 10, wherein the training, based on the training sample set, an object recognition model for determining the matching relationship between the image to be recognized and the pre-stored object image comprises:
    initializing a fully connected layer of the object recognition model, and initializing a loss function comprising input samples, classification results, and fully connected layer parameters of the object recognition model; and
    performing the following processing in each training iteration of the object recognition model:
    taking the occlusion-processed object image samples comprised in the training sample set and the corresponding binary masks in the binary mask dictionary as the input samples, and classifying the input samples through the object recognition model, to obtain classification results corresponding to the input samples;
    substituting the input samples and the classification results into the loss function, to determine fully connected layer parameters of the object recognition model corresponding to a minimum of the loss function; and
    updating the object recognition model according to the determined fully connected layer parameters.
  12. The method according to claim 1, wherein the method further comprises:
    detecting key points of the object to be recognized in the image to be recognized, and determining coordinate positions of the key points; and
    performing an affine transformation on the object to be recognized according to the coordinate positions of the key points, to align the key points to standard template positions consistent with the pre-stored object image.
  13. An artificial-intelligence-based object recognition apparatus, comprising:
    an occlusion detection module, configured to detect potential occluded regions of an object to be recognized in an image to be recognized, to obtain a binary image characterizing occluded regions and unoccluded regions of the object to be recognized;
    an occluded binary image block acquisition module, configured to obtain, from the binary image, occluded binary image blocks characterizing the occluded regions;
    a binary mask query module, configured to query, based on the occluded binary image blocks, a mapping relationship between occluded binary image blocks and binary masks comprised in a binary mask dictionary, to obtain binary masks corresponding to the occluded binary image blocks;
    a binary mask synthesis module, configured to synthesize the binary masks found for each of the occluded binary image blocks, to obtain a binary mask corresponding to the binary image; and
    a matching relationship determination module, configured to determine a matching relationship between the image to be recognized and a pre-stored object image based on the binary mask corresponding to the binary image, features of the pre-stored object image, and features of the image to be recognized.
  14. An electronic device, comprising:
    a memory, configured to store executable instructions; and
    a processor, configured to implement, when executing the executable instructions stored in the memory, the artificial-intelligence-based object recognition method according to any one of claims 1 to 12.
  15. A computer-readable storage medium, storing executable instructions for implementing, when executed by a processor, the artificial-intelligence-based object recognition method according to any one of claims 1 to 12.
PCT/CN2020/117764 2019-10-23 2020-09-25 Object recognition method and apparatus, electronic device, and readable storage medium WO2021077984A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/520,612 US20220058426A1 (en) 2019-10-23 2021-11-05 Object recognition method and apparatus, electronic device, and readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911013447.1 2019-10-23
CN201911013447.1A CN110728330A (zh) 2019-10-23 Artificial-intelligence-based object recognition method, apparatus, device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/520,612 Continuation US20220058426A1 (en) 2019-10-23 2021-11-05 Object recognition method and apparatus, electronic device, and readable storage medium

Publications (1)

Publication Number Publication Date
WO2021077984A1 true WO2021077984A1 (zh) 2021-04-29

Family

ID=69222904

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/117764 WO2021077984A1 (zh) 2019-10-23 2020-09-25 Object recognition method and apparatus, electronic device, and readable storage medium

Country Status (3)

Country Link
US (1) US20220058426A1 (zh)
CN (1) CN110728330A (zh)
WO (1) WO2021077984A1 (zh)



Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091163A (zh) * 2014-07-19 2014-10-08 福州大学 An LBP face recognition method that eliminates the influence of occlusion
CN104751108A (zh) * 2013-12-31 2015-07-01 汉王科技股份有限公司 Face image recognition apparatus and face image recognition method
US20160005189A1 (en) * 2012-09-21 2016-01-07 A9.Com, Inc. Providing overlays based on text in a live camera view
US20160127654A1 (en) * 2014-03-19 2016-05-05 A9.Com, Inc. Real-time visual effects for a live camera view
CN107292287A (zh) * 2017-07-14 2017-10-24 深圳云天励飞技术有限公司 Face recognition method and apparatus, electronic device, and storage medium
CN107679502A (zh) * 2017-10-12 2018-02-09 南京行者易智能交通科技有限公司 Crowd counting method based on deep-learning image semantic segmentation
CN107909005A (zh) * 2017-10-26 2018-04-13 西安电子科技大学 Deep-learning-based person pose recognition method in surveillance scenes
CN110728330A (zh) * 2019-10-23 2020-01-24 腾讯科技(深圳)有限公司 Artificial-intelligence-based object recognition method, apparatus, device, and storage medium



Also Published As

Publication number Publication date
CN110728330A (zh) 2020-01-24
US20220058426A1 (en) 2022-02-24


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20879641

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20879641

Country of ref document: EP

Kind code of ref document: A1