CN112784985A - Training method and device of neural network model, and image recognition method and device - Google Patents


Info

Publication number
CN112784985A
CN112784985A (application CN202110129925.6A)
Authority
CN
China
Prior art keywords
network
training
network module
neural network
super
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110129925.6A
Other languages
Chinese (zh)
Inventor
希滕
张刚
温圣召
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110129925.6A priority Critical patent/CN112784985A/en
Publication of CN112784985A publication Critical patent/CN112784985A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a training method and apparatus for a neural network model and an image recognition method and apparatus, and relates to the field of artificial intelligence, in particular to computer vision and deep learning. The implementation scheme is as follows: acquiring at least two training sets with different recognition difficulty levels; determining, for each network module, a corresponding training set according to the position of the network module in the neural network model, wherein, for any two of the network modules, the recognition difficulty level of the training set corresponding to the network module closer to the input side of the neural network model is not greater than that of the training set corresponding to the other network module; and adjusting the parameters of each of the plurality of network modules using its corresponding training set.

Description

Training method and device of neural network model, and image recognition method and device
Technical Field
The present disclosure relates to the field of artificial intelligence, and in particular, to the field of computer vision and deep learning, and more particularly, to a training method and apparatus for a neural network model, an image recognition method and apparatus, a computer device, a computer-readable storage medium, and a computer program product.
Background
Artificial intelligence is the discipline of making computers simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking, and planning), and involves technologies at both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, and big data processing; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, machine learning/deep learning, big data processing, and knowledge graph technologies.
The approaches described in this section are not necessarily approaches that have been previously conceived or pursued. Unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, unless otherwise indicated, the problems mentioned in this section should not be considered as having been acknowledged in any prior art.
Disclosure of Invention
The present disclosure provides a training method and apparatus of a neural network model, an image recognition method and apparatus, a computer device, a computer readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a training method of a neural network model, wherein the neural network model comprises a plurality of cascaded network modules, the training method comprising: acquiring at least two training sets with different recognition difficulty levels; determining, for each network module, a corresponding training set according to the position of the network module in the neural network model, wherein, for any two of the plurality of network modules, the recognition difficulty level of the training set corresponding to the network module closer to the input side of the neural network model is not greater than that of the training set corresponding to the other network module; and adjusting the parameters of each of the plurality of network modules using its corresponding training set.
According to another aspect of the present disclosure, there is provided an image recognition method comprising: acquiring a neural network model trained according to the above training method, wherein the neural network model comprises a plurality of cascaded network modules; inputting an image to be recognized into the neural network model; for any one of the plurality of cascaded network modules, in response to the network module receiving the image to be recognized or the feature map output by the previous network module, outputting, by the network module, a predicted classification and a confidence for the image to be recognized; and in response to the confidence being greater than a preset threshold, determining the predicted classification as the recognition result.
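The early-exit logic of this recognition method can be sketched in plain Python. The module interface below (each module returning a feature map, a predicted classification, and a confidence) is an assumption made for illustration, not the patent's concrete implementation:

```python
def recognize(modules, image, threshold=0.9):
    """Early-exit inference over cascaded network modules (sketch).

    Each element of `modules` is assumed to be a callable that takes the
    image (or the previous module's feature map) and returns a
    (feature_map, predicted_class, confidence) triple.  As soon as one
    module's confidence exceeds the preset threshold, its prediction is
    returned without running the deeper modules.
    """
    features = image
    prediction = None
    for module in modules:
        features, prediction, confidence = module(features)
        if confidence > threshold:
            return prediction  # confident enough: stop early
    return prediction  # otherwise fall back to the last module's output
```

For an easy image, the first (cheapest) module typically answers; a harder image falls through to the deeper modules, which is how recognition efficiency is gained without giving up accuracy.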
According to another aspect of the present disclosure, there is provided a training apparatus for a neural network model, wherein the neural network model includes a plurality of cascaded network modules, the training apparatus including: a first acquisition unit configured to acquire at least two training sets having different recognition difficulty levels; the first determining unit is configured to determine a training set corresponding to each of the plurality of network modules according to the position of the network module in the neural network model, wherein for any two network modules in the plurality of network modules, the recognition difficulty level of the training set corresponding to the network module close to the input side of the neural network model is not greater than the recognition difficulty level of the training set corresponding to another network module; and the adjusting unit is configured to adjust the parameters of the network module by using the training set corresponding to each of the plurality of network modules.
According to another aspect of the present disclosure, there is provided an image recognition apparatus including: an acquisition unit configured to acquire a neural network model trained according to the above training method, wherein the neural network model comprises a plurality of cascaded network modules; an input unit configured to input an image to be recognized into the neural network model; the neural network model being configured such that, in response to any one of the plurality of cascaded network modules receiving the image to be recognized or the feature map output by the previous network module, the network module outputs a predicted classification and a confidence for the image to be recognized; and a fourth determination unit configured to determine the predicted classification as the recognition result in response to the confidence being greater than a preset threshold.
According to another aspect of the present disclosure, there is provided a computer device including: a memory, a processor and a computer program stored on the memory, wherein the processor is configured to execute the computer program to implement the steps of the above method.
According to another aspect of the present disclosure, a non-transitory computer readable storage medium is provided, having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method described above.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program, wherein the computer program realizes the steps of the above-described method when executed by a processor.
According to one or more embodiments of the present disclosure, both the final output and the intermediate outputs can be attended to during the training of the neural network model, so that the different network modules in the neural network model are trained in a more targeted manner.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and, together with the description, serve to explain their exemplary implementations. The illustrated embodiments are for purposes of illustration only and do not limit the scope of the claims. Throughout the drawings, identical reference numbers designate similar, but not necessarily identical, elements.
FIG. 1 illustrates a schematic diagram of an exemplary system in which various methods described herein may be implemented, according to an embodiment of the present disclosure;
FIG. 2 shows a flow diagram of a method of training a neural network model in accordance with an embodiment of the present disclosure;
FIG. 3 shows a flow diagram of a method of obtaining a training set according to an embodiment of the present disclosure;
FIG. 4 shows a schematic structural diagram of a super network according to an embodiment of the present disclosure;
FIG. 5 shows a flow diagram of an image recognition method according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of a training apparatus for a neural network model, according to an embodiment of the present disclosure;
FIG. 7 shows a block diagram of the structure of an image recognition apparatus according to an embodiment of the present disclosure;
FIG. 8 illustrates a block diagram of an exemplary electronic device that can be used to implement embodiments of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In the present disclosure, unless otherwise specified, the use of the terms "first", "second", etc. to describe various elements is not intended to limit the positional relationship, the timing relationship, or the importance relationship of the elements, and such terms are used only to distinguish one element from another. In some examples, a first element and a second element may refer to the same instance of the element, and in some cases, based on the context, they may also refer to different instances.
The terminology used in the description of the various described examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically limited, the elements may be one or more. Furthermore, the term "and/or" as used in this disclosure is intended to encompass any and all possible combinations of the listed items.
As one aspect of the application of artificial intelligence techniques, a computer may be enabled to simulate a human mental process based on a trained neural network model to recognize input data.
In the related art, once the training of a neural network model is completed, the model is applied to data processing with a fixed structure. This limits the efficiency of data processing in applications.
In view of this, the present disclosure provides a training method and apparatus for a neural network model, an image recognition method and apparatus, a computer device, a computer-readable storage medium, and a computer program product, in which each of a plurality of cascaded network modules in the neural network model is trained with a training set of a different recognition difficulty level. Thus, during training, both the final output and the intermediate outputs of the neural network model can be attended to, and network modules at different positions are trained in a targeted manner. On this basis, a neural network model trained according to the present disclosure can adaptively select different model structures to process the input data. When such a model is applied to image recognition, it can adaptively select different network structures to process an image to be recognized according to the recognition difficulty of that image, improving recognition efficiency while maintaining recognition accuracy.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
Fig. 1 illustrates a schematic diagram of an exemplary system 100 in which various methods and apparatus described herein may be implemented in accordance with embodiments of the present disclosure. Referring to fig. 1, the system 100 includes one or more client devices 101, 102, 103, 104, 105, and 106, a server 120, and one or more communication networks 110 coupling the one or more client devices to the server 120. Client devices 101, 102, 103, 104, 105, and 106 may be configured to execute one or more applications.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the training method or the image recognition method of the neural network model to be performed.
In some embodiments, the server 120 may also provide other services or software applications that may include non-virtual environments and virtual environments. In certain embodiments, these services may be provided as web-based services or cloud services, for example, provided to users of client devices 101, 102, 103, 104, 105, and/or 106 under a software as a service (SaaS) model.
In the configuration shown in fig. 1, server 120 may include one or more components that implement the functions performed by server 120. These components may include software components, hardware components, or a combination thereof, which may be executed by one or more processors. A user operating a client device 101, 102, 103, 104, 105, and/or 106 may, in turn, utilize one or more client applications to interact with the server 120 to take advantage of the services provided by these components. It should be understood that a variety of different system configurations are possible, which may differ from system 100. Accordingly, fig. 1 is one example of a system for implementing the various methods described herein and is not intended to be limiting.
The user may use the client device 101, 102, 103, 104, 105, and/or 106 to obtain the image to be recognized and the results of the image recognition. The client device may provide an interface that enables a user of the client device to interact with the client device. The client device may also output information to the user via the interface. Although fig. 1 depicts only six client devices, those skilled in the art will appreciate that any number of client devices may be supported by the present disclosure.
Client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and so forth. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, UNIX-like operating systems, Linux, or Linux-like operating systems (e.g., Google Chrome OS); or include various Mobile operating systems, such as Microsoft Windows Mobile OS, iOS, Windows Phone, Android. Portable handheld devices may include cellular telephones, smart phones, tablets, Personal Digital Assistants (PDAs), and the like. Wearable devices may include head mounted displays and other devices. The gaming system may include a variety of handheld gaming devices, internet-enabled gaming devices, and the like. The client device is capable of executing a variety of different applications, such as various Internet-related applications, communication applications (e.g., email applications), Short Message Service (SMS) applications, and may use a variety of communication protocols.
Network 110 may be any type of network known to those skilled in the art that may support data communications using any of a variety of available protocols, including but not limited to TCP/IP, SNA, IPX, etc. By way of example only, one or more networks 110 may be a Local Area Network (LAN), an ethernet-based network, a token ring, a Wide Area Network (WAN), the internet, a virtual network, a Virtual Private Network (VPN), an intranet, an extranet, a Public Switched Telephone Network (PSTN), an infrared network, a wireless network (e.g., bluetooth, WIFI), and/or any combination of these and/or other networks.
The server 120 may include one or more general purpose computers, special purpose server computers (e.g., PC (personal computer) servers, UNIX servers, mid-end servers), blade servers, mainframe computers, server clusters, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architecture involving virtualization (e.g., one or more flexible pools of logical storage that may be virtualized to maintain virtual storage for the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functionality described below.
The computing units in server 120 may run one or more operating systems including any of the operating systems described above, as well as any commercially available server operating systems. The server 120 may also run any of a variety of additional server applications and/or middle tier applications, including HTTP servers, FTP servers, CGI servers, JAVA servers, database servers, and the like. Server 120 may also include one or more applications to display data feeds and/or real-time events via one or more display devices of client devices 101, 102, 103, 104, 105, and 106.
In some embodiments, the server 120 may include one or more applications, for example, applications providing services such as object detection and recognition or signal conversion based on data such as images, video, voice, text, and digital signals, to process deep learning task requests such as voice interaction, text classification, image recognition, or key point detection received from the client devices 101, 102, 103, 104, 105, and 106, and to accept various media data, such as image data, audio data, or text data, as training sample data for the deep learning task. The server can also train the neural network model with the training samples according to a specific deep learning task, test each sub-network in the super-network module, and determine the structure and parameters of the neural network model for executing the deep learning task according to the test results of the sub-networks. After the training of the neural network model is completed, the server 120 may also automatically search out an adaptive network structure within the model to perform the corresponding task.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or a smart cloud computing server or smart cloud host with artificial intelligence technology. A cloud server is a host product in a cloud computing service system, intended to overcome the drawbacks of difficult management and weak service scalability found in traditional physical hosts and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In some embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The data store 130 may reside in various locations. For example, the data store used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The data store 130 may be of different types. In certain embodiments, the data store used by the server 120 may be a database, such as a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to the command.
In some embodiments, one or more of the databases 130 may also be used by applications to store application data. The databases used by the application may be different types of databases, such as key-value stores, object stores, or regular stores supported by a file system.
The system 100 of fig. 1 may be configured and operated in various ways to enable application of the various methods and apparatus described in accordance with the present disclosure.
Fig. 2 is a flowchart illustrating a method of training a neural network model according to an exemplary embodiment of the present disclosure, wherein the neural network model comprises a plurality of cascaded network modules. As shown in fig. 2, the method may include: step S201, acquiring at least two training sets with different recognition difficulty levels; step S202, determining, for each network module, a corresponding training set according to the position of the network module in the neural network model, wherein, for any two of the plurality of network modules, the recognition difficulty level of the training set corresponding to the network module closer to the input side of the neural network model is not greater than that of the training set corresponding to the other network module; and step S203, adjusting the parameters of each of the plurality of network modules using its corresponding training set. In this way, both the final output and the intermediate outputs can be attended to during training, and network modules at different positions are trained in a targeted manner with training sets of differing recognition difficulty levels.
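The position-to-training-set mapping of step S202 can be sketched as follows. The list-of-pairs representation of the training sets and the reuse of the hardest set for any extra deep modules are illustrative assumptions, not details fixed by the method:

```python
def assign_training_sets(modules, training_sets):
    """Map each cascaded module to a training set by position (step S202, sketch).

    `training_sets` is assumed to be a list of (difficulty_level, samples)
    pairs.  Modules nearer the input side receive sets whose recognition
    difficulty level is no greater than those assigned to later modules.
    """
    ordered = sorted(training_sets, key=lambda pair: pair[0])
    if len(ordered) < len(modules):
        # Assumption: deeper modules beyond the available sets reuse the
        # hardest training set.
        ordered += [ordered[-1]] * (len(modules) - len(ordered))
    return {module: samples
            for module, (level, samples) in zip(modules, ordered)}
```

Sorting by difficulty level guarantees the monotonicity condition of step S202: no module closer to the input side ever receives a harder set than a deeper module.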
The neural network model obtained by the above training can be applied to a corresponding deep learning task that processes corresponding media data, such as an image recognition task or an image-based target detection task. According to some embodiments, the neural network model may be a multi-layer neural network model, in which each network module is a certain number of consecutive layers. For example, the neural network model may be a 10-layer neural network model in which the first 5 layers constitute one network module and the last 5 layers constitute another network module. The number of layers included in the neural network model and in its network modules is not limited herein.
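The 10-layer example above can be made concrete with a small sketch; the layer names are purely illustrative placeholders:

```python
# A hypothetical 10-layer model, with layers named layer_0 .. layer_9
# for illustration only.
layers = [f"layer_{i}" for i in range(10)]

# Partition the model into cascaded network modules of 5 consecutive
# layers each.
module_size = 5
modules = [layers[i:i + module_size]
           for i in range(0, len(layers), module_size)]
# modules[0] is the first network module (layers 0-4, near the input side);
# modules[1] is the second network module (layers 5-9).
```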
With respect to step S201, according to some embodiments, a training set of a particular recognition difficulty level includes training samples of that recognition difficulty level. The higher the recognition difficulty level of a training sample, the less easily the training sample is recognized.
According to some embodiments, the training sample may be a sample image.
For example, when the sample image contains only one complete cat, the recognition difficulty level of the sample image is low (i.e., the cat in the sample image is easily recognized); when the sample image contains other distracting objects or only part of a cat, its recognition difficulty level is high (i.e., it is difficult to recognize the cat in the sample image).
According to some embodiments, fig. 3 is a flowchart illustrating a method of acquiring at least two training sets with different recognition difficulty levels according to an exemplary embodiment of the present disclosure, which may include: step S301, acquiring a plurality of training samples; step S302, in response to each of the plurality of training samples being input into a trained grading model, outputting, by the grading model, a predicted classification and a confidence for the training sample; step S303, determining the recognition difficulty level of each training sample using at least one of its predicted classification and its confidence; and step S304, constructing a training set of the corresponding recognition difficulty level based on at least one training sample having the same recognition difficulty level. In this way, training sets with different recognition difficulty levels can be acquired quickly.
With respect to step S303, in an embodiment, each of the plurality of training samples may have a corresponding true classification, so the recognition difficulty level of each training sample may be determined from the consistency between its predicted classification and its true classification. Specifically, when the predicted classification is consistent with the true classification, the recognition difficulty level of the training sample is determined to be low; when they are inconsistent, the recognition difficulty level of the training sample is determined to be high.
In another embodiment, the recognition difficulty level of each of the plurality of training samples may be determined according to the confidence level of the training sample. Specifically, the confidence of a training sample is inversely related to the recognition difficulty level of the training sample.
In another embodiment, the recognition difficulty level of the training sample can be determined according to the consistency of the prediction classification and the real classification of the training sample and the confidence of the training sample.
According to some embodiments, the recognition difficulty level of each of the plurality of training samples can be determined in a manual labeling manner, and a required training set with a corresponding recognition difficulty level is constructed.
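Steps S302 through S304, with difficulty judged jointly from classification correctness and confidence, can be sketched as follows. The two-level ("easy"/"hard") split, the callable grading-model interface, and the confidence threshold are illustrative assumptions; the method itself permits any number of levels and any of the grading criteria described above:

```python
def build_difficulty_sets(samples, grading_model, conf_threshold=0.8):
    """Group training samples into difficulty-leveled training sets (sketch).

    `grading_model(data)` is assumed to return a (predicted_class,
    confidence) pair; each element of `samples` is a (data, true_class)
    pair.  A sample is graded "easy" when the grading model classifies
    it correctly with high confidence, "hard" otherwise.
    """
    training_sets = {"easy": [], "hard": []}
    for data, true_class in samples:
        predicted, confidence = grading_model(data)
        correct = predicted == true_class
        level = "easy" if correct and confidence >= conf_threshold else "hard"
        training_sets[level].append((data, true_class))
    return training_sets
```

The resulting "easy" set would then be assigned to modules near the input side and the "hard" set to deeper modules, per step S202.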
In step S202, for any two of the plurality of network modules, the recognition difficulty level of the training set corresponding to the network module closer to the input side of the neural network model is not greater than that of the training set corresponding to the other network module. That is, the closer a network module is to the input side of the neural network model, the lower the recognition difficulty level of its corresponding training set; the farther a network module is from the input side, the higher the level.
With respect to step S203, according to some embodiments, for any two of the plurality of network modules, the parameters of the network module closer to the input side of the neural network model are adjusted first. In this way, subsequent network modules can be trained on the basis of the already-trained network modules near the input side of the neural network model, which improves the training effect for the subsequent network modules.
Specifically, a first network module at the input end of the neural network model may be trained first. The training set corresponding to the first network module comprises a plurality of training samples and real classifications thereof. And inputting each training sample in a plurality of training samples of the training set into a neural network model to obtain the prediction classification of the training sample output by the first network module. The parameters of the first network module are adjusted based on the true classification and the predicted classification of each of the plurality of training samples. Thus, the training of the first network module in the neural network model can be realized. Because the first network module is trained by adopting the training set with smaller identification difficulty level, the first network module obtained by training can more easily extract the characteristic information from the image to be identified with smaller identification difficulty level, and the identification effect is improved.
After the parameters of the first network module are determined, the next network module in the neural network model cascaded with the first network module (i.e., the second network module) is trained. The training set corresponding to the second network module comprises a plurality of training samples and their true classifications. Each of the plurality of training samples in the training set is input into the neural network model to obtain the predicted classification of the training sample output by the second network module, and the parameters of the second network module are adjusted based on the true classification and the predicted classification of each training sample. During this adjustment, although the first network module still participates in the computation on the training samples, its parameters are no longer changed. The second network module in the neural network model is thereby trained. Because the second network module is trained with a training set of higher recognition difficulty (compared with the training set corresponding to the first network module), the trained second network module can more easily extract feature information from images to be recognized that have a higher recognition difficulty level, improving the recognition effect.
After the parameters of the second network module are determined, the next network module cascaded with the second network module (i.e., the third network module) is trained, and so on, so that each network module in the neural network model is trained in sequence.
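The progressive, module-by-module training described above can be sketched in pure Python. This is an illustrative outline only, not the patent's implementation: the `Module` class and the stand-in "training" step are assumptions, and real training would perform gradient updates on a neural network.

```python
# Illustrative sketch of module-by-module training (not the patent's actual
# implementation): modules are trained in input-to-output order, each with
# its own training set, and frozen once trained.

class Module:
    def __init__(self, name):
        self.name = name
        self.trained = False
        self.frozen = False

def train_cascade(modules, training_sets):
    """Train each cascaded module with its assigned training set.

    Earlier modules still run in the forward pass while later modules
    train, but their parameters are no longer updated once frozen.
    """
    order = []
    for module, train_set in zip(modules, training_sets):
        module.trained = True   # stand-in for gradient-based parameter updates
        module.frozen = True    # parameters fixed for all subsequent training
        order.append((module.name, train_set["difficulty"]))
    return order

modules = [Module("m1"), Module("m2"), Module("m3")]
training_sets = [{"difficulty": d} for d in ("easy", "medium", "hard")]
print(train_cascade(modules, training_sets))
# [('m1', 'easy'), ('m2', 'medium'), ('m3', 'hard')]
```

The essential point the sketch captures is the ordering constraint of step S203: a module is only trained after every module closer to the input side has been trained and frozen.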
According to some embodiments, the plurality of cascaded network modules includes at least one super network module, and each of the at least two training sets with different recognition difficulty levels includes a plurality of training samples corresponding to the recognition difficulty level of the training set, together with their true classifications. Adjusting the parameters of each of the plurality of network modules using its corresponding training set comprises: for each super network module of the at least one super network module and its corresponding training set, in response to each of the plurality of training samples in the training set being input into the neural network model, obtaining the predicted classification of the training sample output by the super network module and a first calculation duration; and adjusting the parameters of the super network module based on the true classification, the predicted classification, and the first calculation duration of each of the plurality of training samples in the training set. In this way, the trained super network module performs well in terms of both speed and accuracy.
A super network is a network structure that contains multiple sub-networks simultaneously. The super network may include multiple layers of search space, each layer including multiple substructures, where each substructure may serve as one layer of a sub-network. FIG. 4 illustrates a schematic diagram of a super network according to an exemplary embodiment. As shown in FIG. 4, the super network 400 includes three layers of search space, and each layer of the search space includes three substructures.
It can be understood that the search space in the super network is not limited to three layers, the number of substructures in each layer of the search space is not limited to three, and the numbers of substructures in different layers of the search space may be the same or different. The number of substructures per layer shown in FIG. 4 is merely an example and does not limit the present disclosure.
A substructure is determined from each layer of the search space of the super network, and the determined substructures, connected sequentially in the layer order of the search space, form a sub-network of the super network.
In one embodiment, taking the super network 400 shown in FIG. 4 as an example, a substructure 412 is selected from the first-layer search space, a substructure 423 is selected from the second-layer search space, and a substructure 431 is selected from the third-layer search space; the selected substructures are connected sequentially to obtain a sub-network. In another embodiment, different sub-networks may also be obtained by varying the number of layers.
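As an illustration of forming a sub-network from the super network 400 of FIG. 4, the sketch below picks one substructure per layer and connects them in layer order. The substructure labels mirror the figure's numbering (411–433); everything else is an assumption for illustration.

```python
# Illustrative sketch: a 3-layer search space with 3 substructures per layer,
# labeled to mirror FIG. 4 (labels are assumptions). A sub-network is formed
# by choosing one substructure per layer and connecting them in layer order.

search_space = [
    ["411", "412", "413"],  # first-layer substructures
    ["421", "422", "423"],  # second-layer substructures
    ["431", "432", "433"],  # third-layer substructures
]

def build_subnetwork(choices):
    """Select substructure `choices[k]` from layer k, connecting in layer order."""
    return [layer[i] for layer, i in zip(search_space, choices)]

# the example from the text: substructures 412, 423, and 431
print(build_subnetwork([1, 2, 0]))  # ['412', '423', '431']
```

With three choices per layer over three layers, this toy search space already contains 3³ = 27 distinct sub-networks, which is why the super network can be said to contain many sub-networks simultaneously.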
According to some embodiments, the first calculation duration is the time taken by the GPU to process the training task for each training sample; that is, in response to a training sample being input into the neural network model, the first calculation duration is the GPU processing time from the moment the training sample is input into the neural network model to the moment the super network module outputs the predicted classification of the training sample.
Since the super network may contain multiple sub-networks simultaneously, the first calculation durations corresponding to different sub-networks may differ. Therefore, in the process of training the super network module, the parameters of the super network module are adjusted based on the true classification, the predicted classification, and the first calculation duration of each of the plurality of training samples in the training set. This makes it possible to jointly consider the calculation accuracy of each sub-network in the super network module (determined from the true and predicted classifications of the training samples) and its calculation speed (determined from the first calculation duration), adjusting the parameters of the super network module until it converges.
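One way to combine accuracy and speed into a single training signal is a weighted sum of classification error and the first calculation duration. The sketch below is a hedged illustration only: the weighted-sum form and the factor `alpha` are assumptions, since the source only states that the true classification, the predicted classification, and the first calculation duration all enter the parameter adjustment.

```python
# Hedged sketch of a joint accuracy/speed objective (the weighted-sum form
# and the weight `alpha` are assumptions, not specified in the source).

def combined_loss(true_label, predicted_label, calc_duration, alpha=0.1):
    """0/1 classification loss plus a penalty on the first calculation duration."""
    classification_loss = 0.0 if predicted_label == true_label else 1.0
    latency_penalty = alpha * calc_duration
    return classification_loss + latency_penalty

# a correct but slow sub-network still incurs a latency cost
print(combined_loss("cat", "cat", 2.0))  # 0.2
print(combined_loss("cat", "dog", 2.0))  # 1.2
```

Minimizing such an objective pushes the super network module toward parameter settings that are both accurate and fast, which is the stated goal of including the first calculation duration in training.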
According to some embodiments, the super network module comprises at least two sub-networks, and the training method further comprises: for each super network module of the at least one super network module, determining one of the at least two sub-networks of the super network module as a selected sub-network according to a preset test mode, wherein, for any two super network modules of the at least one super network module, the selected sub-network of the super network module closer to the input side of the neural network model is determined first; and determining the trained neural network model based on the selected sub-network of each of the at least one super network module. In this way, the optimal sub-network of each super network module can be determined, improving the application effect of the trained neural network model.
For any two super network modules of the at least one super network module, the selected sub-network of the super network module closer to the input side of the neural network model is determined first. Thus, the selected sub-network of a subsequent super network module can be determined on the basis of the already-determined selected sub-network of the super network module closer to the input side, which improves the accuracy of the determined selected sub-network.
According to some embodiments, the preset test mode comprises: obtaining a test set, wherein the test set comprises a plurality of test samples and their true classifications; and, for each of the at least one super network module, in response to each of the plurality of test samples in the test set being input into the neural network model, obtaining the predicted classification of the test sample output by each of the at least two sub-networks of the super network module and a second calculation duration corresponding to that sub-network, and determining one of the at least two sub-networks of the super network module as the selected sub-network based on the true classification, the predicted classification, and the second calculation duration of each of the plurality of test samples. Thus, the optimal sub-network can be determined from the super network module according to the requirements on recognition accuracy and recognition speed.
According to some embodiments, the test sample may be a test image.
According to some embodiments, the test set may be selected adaptively based on the actual application scenario. For example, the recognition difficulty levels of the test samples contained in the test set, or the proportions of test samples with different recognition difficulty levels in the test set, may be determined according to the actual application scenario.
In the process of testing the super network module, one of the at least two sub-networks of the super network module is determined as the selected sub-network based on the true classification, the predicted classification, and the second calculation duration of each of the plurality of test samples. The sub-network with the best overall effect can thus be selected by jointly considering the calculation accuracy of each sub-network in the super network module (determined from the true and predicted classifications of the test samples) and its calculation speed (determined from the second calculation duration).
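The selection step can be illustrated as scoring each candidate sub-network by its test-set accuracy minus a penalty proportional to its second calculation duration. The scoring rule and the `speed_weight` factor below are assumptions for illustration; the source only requires that accuracy and the second calculation duration both influence the choice.

```python
# Illustrative selection of the "selected sub-network": each candidate is
# scored by test-set accuracy minus a speed penalty on its second calculation
# duration. The scoring rule and `speed_weight` are assumptions.

def select_subnetwork(candidates, speed_weight=0.05):
    """candidates: list of (name, accuracy, seconds_per_sample) tuples."""
    def score(candidate):
        name, accuracy, seconds = candidate
        return accuracy - speed_weight * seconds
    return max(candidates, key=score)[0]

candidates = [
    ("sub_a", 0.95, 4.0),  # most accurate but slow: score 0.95 - 0.20 = 0.75
    ("sub_b", 0.93, 1.0),  # slightly less accurate, fast: score 0.93 - 0.05 = 0.88
]
print(select_subnetwork(candidates))  # sub_b
```

Raising or lowering `speed_weight` shifts the choice between the accurate-but-slow and the fast-but-less-accurate sub-network, matching the statement that selection follows the requirements on recognition accuracy and recognition speed.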
Fig. 5 is a flowchart illustrating an image recognition method according to an exemplary embodiment of the present disclosure. As shown in Fig. 5, the method may include: step S501, obtaining a neural network model trained according to the above training method, wherein the neural network model comprises a plurality of cascaded network modules; step S502, inputting an image to be recognized into the neural network model; step S503, for any one of the plurality of cascaded network modules, in response to the network module receiving the image to be recognized or the feature map output by the previous network module, the network module outputting a predicted classification of the image to be recognized and its confidence; and step S504, in response to the confidence being greater than a preset threshold, determining the predicted classification as the recognition result. A neural network model trained according to the above training method can adaptively use all or only part of the model to recognize a given image; that is, different network structures can be selected to process the input image according to its recognition difficulty. A simple image to be recognized can be processed quickly using only the part of the neural network model close to the input side, so image recognition efficiency is improved while recognition accuracy is guaranteed.
Meanwhile, when the trained neural network model is used to execute a deep learning task, for example an image recognition task, the hardware device can select a network structure matched with the difficulty level of the current input data to perform processing. Consequently, when processing large amounts of input data with different difficulty levels, the computing resources, memory resources, video memory resources, and the like of the hardware device can be effectively saved, the processing capacity and efficiency of the hardware device in executing the deep learning task are improved, and the configuration requirements of the deep learning task on the hardware device are reduced. For step S504, whether to terminate the image recognition process may be determined according to the confidence output by the network module.
Specifically, for any network module among the plurality of cascaded network modules, when the confidence output by the network module is greater than the preset threshold, the predicted classification output by that network module is determined as the recognition result, and the subsequent network modules of the neural network model no longer participate in the recognition processing of the image to be recognized.
For example, when the recognition difficulty level of the input image to be recognized is low, an ideal confidence can be obtained using only the partial network modules close to the input end of the neural network model, so that an ideal recognition result can be obtained quickly without losing calculation accuracy. Conversely, when the recognition difficulty level of the input image is high, computation can be carried out through all network modules of the neural network model to improve recognition accuracy.
According to some embodiments, for any network module of the plurality of cascaded network modules that is not located at the output end of the neural network model, in response to the confidence being less than the preset threshold, the feature map output by that network module is input into the next network module. Thus, when the confidence is not ideal, the subsequent network module further processes the image to be recognized, improving recognition accuracy.
According to some embodiments, for the network module located at the output end of the neural network model, the predicted classification is determined as the recognition result even in response to the confidence being less than the preset threshold.
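Steps S501–S504, together with the two embodiments above, amount to early-exit inference: each cascaded module emits a predicted classification and confidence, the first module whose confidence exceeds the threshold terminates recognition, and the module at the output end always returns its prediction. A minimal sketch follows, with hypothetical stand-in modules in place of real networks:

```python
# Minimal sketch of the early-exit inference in steps S501-S504. The lambda
# "modules" are hypothetical stand-ins: a real module would compute a feature
# map and output a (predicted classification, confidence) pair.

def recognize(image, modules, threshold=0.9):
    """Return (prediction, number_of_modules_used)."""
    for i, module in enumerate(modules):
        prediction, confidence = module(image)
        is_last = (i == len(modules) - 1)
        # S504: exit as soon as confidence clears the threshold; the module
        # at the output end returns its prediction unconditionally.
        if confidence > threshold or is_last:
            return prediction, i + 1

easy_modules = [
    lambda img: ("cat", 0.95),  # confident: later modules are skipped
    lambda img: ("cat", 0.99),
]
hard_modules = [
    lambda img: ("cat", 0.40),  # uncertain: feature map passes onward (S503)
    lambda img: ("dog", 0.70),  # output end: used despite low confidence
]
print(recognize("easy.png", easy_modules))  # ('cat', 1)
print(recognize("hard.png", hard_modules))  # ('dog', 2)
```

The returned module count makes the efficiency claim concrete: the easy image consumes one module's worth of computation, the hard image all of them.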
According to another aspect of the present disclosure, as shown in fig. 6, there is also provided a neural network model training apparatus 600, wherein the neural network model comprises a plurality of cascaded network modules. The training apparatus 600 comprises: a first obtaining unit 601 configured to obtain at least two training sets with different recognition difficulty levels; a first determining unit 602 configured to determine, according to the position of each of the plurality of network modules in the neural network model, the training set corresponding to that network module, wherein, for any two network modules among the plurality of network modules, the recognition difficulty level of the training set corresponding to the network module closer to the input side of the neural network model is not greater than that of the training set corresponding to the other network module; and an adjusting unit 603 configured to adjust the parameters of each of the plurality of network modules using its corresponding training set.
According to some embodiments, the adjusting unit is further configured to: for any two network modules among the plurality of network modules, adjust the parameters of the network module closer to the input side of the neural network model first.
According to some embodiments, the plurality of cascaded network modules includes at least one super network module, each of the at least two training sets with different recognition difficulty levels includes a plurality of training samples corresponding to the recognition difficulty level of the training set together with their true classifications, and the adjusting unit includes: a first obtaining subunit configured to, for each of the at least one super network module and its corresponding training set, in response to each of the plurality of training samples in the training set being input into the neural network model, obtain the predicted classification of the training sample output by the super network module and a first calculation duration; and an adjusting subunit configured to adjust the parameters of the super network module based on the true classification, the predicted classification, and the first calculation duration of each of the plurality of training samples in the training set.
According to some embodiments, the super network module comprises at least two sub-networks, and the training apparatus further comprises: a second determining unit configured to determine, for each of the at least one super network module and by means of a preset test unit, one of the at least two sub-networks of the super network module as a selected sub-network, wherein, for any two super network modules of the at least one super network module, the selected sub-network of the super network module closer to the input side of the neural network model is determined first; and a third determining unit configured to determine the trained neural network model based on the selected sub-network of each of the at least one super network module.
According to some embodiments, the preset test unit includes: a second obtaining subunit configured to obtain a test set, wherein the test set includes a plurality of test samples and their true classifications; and a first determining subunit configured to, for each of the at least one super network module, in response to each of the plurality of test samples in the test set being input into the neural network model, obtain the predicted classification of the test sample output by each of the at least two sub-networks of the super network module and a second calculation duration corresponding to that sub-network, and determine one of the at least two sub-networks of the super network module as the selected sub-network based on the true classification, the predicted classification, and the second calculation duration of each of the plurality of test samples.
According to some embodiments, the first obtaining unit comprises: a third obtaining subunit configured to obtain a plurality of training samples; a fourth obtaining subunit configured to, in response to each of the plurality of training samples being input into a trained hierarchical model, obtain the predicted classification of the training sample output by the hierarchical model and its confidence; a second determining subunit configured to determine the recognition difficulty level of each of the plurality of training samples using at least one of the predicted classification and the confidence of the training sample; and a construction subunit configured to construct a training set having the corresponding recognition difficulty level based on at least one training sample having the same recognition difficulty level.
According to another aspect of the present disclosure, as shown in fig. 7, there is also provided an image recognition apparatus 700, including: a neural network model 701 trained according to the above training method, wherein the neural network model 701 comprises a plurality of cascaded network modules 701-1 to 701-n; an input unit 702 configured to input an image to be recognized into the neural network model 701; the neural network model 701 being configured such that, for any network module of the plurality of cascaded network modules, in response to the network module receiving the image to be recognized or the feature map output by the previous network module, the network module outputs a predicted classification of the image to be recognized and its confidence; and a fourth determining unit 703 configured to determine the predicted classification as the recognition result in response to the confidence being greater than a preset threshold.
According to some embodiments, the fourth determining unit is further configured to: for any network module of the plurality of cascaded network modules not located at the output end of the neural network model, in response to the confidence being less than the preset threshold, input the feature map output by that network module into the next network module.
According to another aspect of the present disclosure, there is also provided a computer device comprising: a memory, a processor and a computer program stored on the memory, wherein the processor is configured to execute the computer program to implement the steps of the above method.
According to another aspect of the present disclosure, there is also provided a non-transitory computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the above-described method.
According to another aspect of the present disclosure, there is also provided a computer program product comprising a computer program, wherein the computer program realizes the steps of the above-mentioned method when executed by a processor.
According to an embodiment of the present disclosure, there is also provided an electronic device, a readable storage medium, and a computer program product.
Referring to fig. 8, a block diagram of an electronic device 800, which may be a server or a client of the present disclosure and which is an example of a hardware device applicable to aspects of the present disclosure, will now be described. The electronic device is intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other suitable computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the apparatus 800 includes a computing unit 801 that can perform various appropriate actions and processes according to a computer program stored in a Read Only Memory (ROM)802 or a computer program loaded from a storage unit 808 into a Random Access Memory (RAM) 803. In the RAM 803, various programs and data required for the operation of the device 800 can also be stored. The calculation unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to bus 804.
A number of components in the device 800 are connected to the I/O interface 805, including: an input unit 806, an output unit 807, a storage unit 808, and a communication unit 809. The input unit 806 may be any type of device capable of inputting information to the device 800; it may receive input numeric or character information and generate key signal inputs related to user settings and/or function controls of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a track ball, a joystick, a microphone, and/or a remote control. The output unit 807 can be any type of device capable of presenting information and can include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 808 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 809 allows the device 800 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunications networks, and may include, but is not limited to, modems, network cards, infrared communication devices, wireless communication transceivers and/or chipsets, such as Bluetooth(TM) devices, 802.11 devices, WiFi devices, WiMax devices, cellular communication devices, and/or the like.
Computing unit 801 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, and the like. The calculation unit 801 executes the respective methods and processes described above, such as a training method of a neural network model or an image recognition method. For example, in some embodiments, the training method or the image recognition method of the neural network model may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program can be loaded and/or installed onto device 800 via ROM 802 and/or communications unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the training method of the neural network model or the image recognition method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the training method of the neural network model or the image recognition method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it is to be understood that the above-described methods, systems, and apparatuses are merely exemplary embodiments or examples, and that the scope of the present disclosure is not limited by these embodiments or examples but only by the granted claims and their equivalents. Various elements in the embodiments or examples may be omitted or replaced with equivalents thereof. Further, the steps may be performed in an order different from that described in the present disclosure. Furthermore, various elements in the embodiments or examples may be combined in various ways. Importantly, as technology evolves, many of the elements described herein may be replaced with equivalent elements that appear after the present disclosure.

Claims (20)

1. A method of training a neural network model, wherein the neural network model comprises a plurality of cascaded network modules, the method comprising:
acquiring at least two training sets with different recognition difficulty levels;
determining a training set corresponding to each network module according to the position of the network module in the neural network model, wherein,
for any two network modules of the plurality of network modules, the recognition difficulty level of the training set corresponding to the network module close to the input side of the neural network model is not greater than the recognition difficulty level of the training set corresponding to the other network module; and
adjusting, for each network module of the plurality of network modules, the parameters of the network module by utilizing the training set corresponding to the network module.
2. The training method of claim 1, wherein the adjusting the parameters of each of the plurality of network modules using the training set corresponding to the network module comprises:
for any two network modules of the plurality of network modules, first adjusting the parameters of the network module close to the input side of the neural network model.
3. The training method of claim 1 or 2, wherein the plurality of cascaded network modules comprises at least one super-network module, and each of the at least two training sets having different recognition difficulty levels comprises a plurality of training samples corresponding to the recognition difficulty level of the training set and real classifications thereof,
wherein adjusting the parameters of each of the plurality of network modules by using the training set corresponding to the network module comprises:
for each of the at least one super-network module and the training set corresponding thereto, in response to each of a plurality of training samples in the training set being input into the neural network model, acquiring a predicted classification of the training sample output by the super-network module and a first computation duration; and
adjusting the parameters of the super-network module based on the real classification, the predicted classification and the first computation duration of each of the plurality of training samples in the training set.
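Claim 3 adjusts parameters from both the classification error and the computation duration. One plausible way to combine them is cross-entropy plus a weighted latency term; this objective, the weight, and the name `training_loss` are assumptions of this sketch, as the claims do not specify the exact loss.

```python
import math

# Hedged sketch of a claim-3-style objective: classification error plus a
# penalty proportional to the measured first computation duration.
def training_loss(true_class, predicted_probs, compute_seconds, latency_weight=0.01):
    """Cross-entropy on the predicted class distribution plus a latency penalty."""
    cross_entropy = -math.log(max(predicted_probs[true_class], 1e-12))
    return cross_entropy + latency_weight * compute_seconds

# A confident, fast prediction incurs a smaller loss than a slow, unsure one.
fast = training_loss("cat", {"cat": 0.9, "dog": 0.1}, compute_seconds=1.0)
slow = training_loss("cat", {"cat": 0.6, "dog": 0.4}, compute_seconds=10.0)
print(fast < slow)  # True
```

Gradient-based training would minimize such a loss per sample, nudging the super-network module toward sub-structures that are both accurate and fast.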
4. The training method of claim 3, wherein each super-network module comprises at least two sub-networks, the training method further comprising:
for each of the at least one super-network module, determining one of the at least two sub-networks of the super-network module as a selected sub-network according to a preset test mode, wherein, for any two super-network modules of the at least one super-network module, the selected sub-network of the super-network module closer to the input side of the neural network model is determined first; and
determining a trained neural network model based on the selected sub-network of each of the at least one super-network module.
5. The training method of claim 4, wherein the preset test mode comprises:
acquiring a test set, wherein the test set comprises a plurality of test samples and real classifications thereof; and
for each of the at least one super-network module,
in response to each of a plurality of test samples in the test set being input into the neural network model, acquiring a predicted classification of the test sample output by each of the at least two sub-networks of the super-network module and a second computation duration corresponding to the sub-network, and
determining one of the at least two sub-networks of the super-network module as the selected sub-network based on the real classification, the predicted classification and the second computation duration of each of the plurality of test samples.
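The selection step in claim 5 weighs each sub-network's test accuracy against its measured compute time. The scoring rule below (accuracy minus a weighted latency penalty), the weight, and the name `select_subnetwork` are assumptions for illustration; the claims only require that both quantities inform the choice.

```python
# Hedged sketch of claim-5-style sub-network selection within one
# super-network module.
def select_subnetwork(results, latency_weight=0.1):
    """results: list of (accuracy, compute_seconds) tuples, one per sub-network.

    Returns the index of the sub-network maximizing accuracy minus a
    weighted latency penalty.
    """
    scores = [acc - latency_weight * t for acc, t in results]
    return max(range(len(scores)), key=scores.__getitem__)

# Three candidate sub-networks: (test accuracy, second computation duration).
print(select_subnetwork([(0.90, 2.0), (0.88, 0.5), (0.92, 5.0)]))  # 1
```

Here the slightly less accurate but much faster middle sub-network wins; per claim 4, this selection would be repeated module by module from the input side toward the output side.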
6. The training method of claim 1, wherein acquiring the at least two training sets having different recognition difficulty levels comprises:
acquiring a plurality of training samples;
in response to each of the plurality of training samples being input into a trained grading model, acquiring a predicted classification of the training sample and a confidence thereof output by the grading model;
determining a recognition difficulty level of each of the plurality of training samples by using at least one of the predicted classification and the confidence of the training sample; and
constructing a training set having a corresponding recognition difficulty level based on at least one training sample having the same recognition difficulty level.
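Claim 6 grades samples by running them through an already-trained model and reading its confidence. The two-level split and the 0.9 threshold below are assumptions; the claims do not fix how many difficulty levels exist or how confidence maps to a level.

```python
from collections import defaultdict

# Illustrative sketch of claim 6: group samples into difficulty-leveled
# training sets using a trained grading model's confidence.
def build_training_sets(samples, predict):
    """samples: iterable of inputs; predict(x) -> (predicted_class, confidence).

    Samples the trained model classifies with high confidence are treated
    as "easy", the rest as "hard".
    """
    sets = defaultdict(list)
    for x in samples:
        _, confidence = predict(x)
        level = "easy" if confidence >= 0.9 else "hard"
        sets[level].append(x)
    return dict(sets)

# Stand-in for a trained grading model: confidence is simply looked up.
fake_model = {"a": ("cat", 0.95), "b": ("dog", 0.60), "c": ("cat", 0.99)}
print(build_training_sets(fake_model, fake_model.__getitem__))
# {'easy': ['a', 'c'], 'hard': ['b']}
```

The resulting "easy" set would then be assigned to input-side modules and the "hard" set to output-side modules, per claim 1.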
7. An image recognition method, comprising:
acquiring a neural network model trained according to the training method of any one of claims 1 to 6, wherein the neural network model comprises a plurality of cascaded network modules;
inputting an image to be recognized into the neural network model;
for any one of the plurality of cascaded network modules, in response to the network module receiving the image to be recognized or a feature map output by a previous network module, outputting, by the network module, a predicted classification of the image to be recognized and a confidence thereof; and
determining the predicted classification as a recognition result in response to the confidence being greater than a preset threshold.
8. The recognition method of claim 7, further comprising:
for any one of the plurality of cascaded network modules that is not located at the output end of the neural network model, inputting the feature map output by the network module into a next network module in response to the confidence being less than the preset threshold.
9. The recognition method of claim 8, further comprising:
for the network module located at the output end of the neural network model, determining the predicted classification as the recognition result in response to the confidence being less than the preset threshold.
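The early-exit inference of claims 7 to 9 can be sketched as a simple loop: each cascaded module emits a prediction, a confidence, and a feature map; inference stops at the first module whose confidence exceeds the threshold, and the output-end module's prediction is accepted even below it. The callables standing in for modules and the name `recognize` are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch of claims 7-9: cascaded inference with confidence-gated
# early exit.
def recognize(modules, image, threshold=0.8):
    features = image
    prediction = None
    for module in modules:
        # Each module consumes the image or the previous module's feature map.
        prediction, confidence, features = module(features)
        if confidence > threshold:
            return prediction  # claim 7: confident enough, skip later modules
    return prediction  # claim 9: output-end module accepted even below threshold

# Two toy modules: the first is unsure, the second is confident.
m1 = lambda f: ("cat", 0.5, f + "->m1")
m2 = lambda f: ("dog", 0.9, f + "->m2")
print(recognize([m1, m2], "img"))  # dog
```

Easy images exit at an early (cheap) module, while hard images flow through the whole cascade, which is what makes pairing early modules with easy training sets attractive.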
10. A training apparatus for a neural network model, wherein the neural network model comprises a plurality of cascaded network modules, the training apparatus comprising:
a first acquisition unit configured to acquire at least two training sets having different recognition difficulty levels;
a first determination unit configured to determine, for each of the plurality of network modules, a corresponding training set according to the position of the network module in the neural network model, wherein,
for any two network modules of the plurality of network modules, the recognition difficulty level of the training set corresponding to the network module closer to the input side of the neural network model is not greater than the recognition difficulty level of the training set corresponding to the other network module; and
an adjustment unit configured to adjust parameters of each of the plurality of network modules by using the training set corresponding to that network module.
11. The training apparatus of claim 10, wherein the adjustment unit is further configured to:
for any two network modules of the plurality of network modules, adjust the parameters of the network module closer to the input side of the neural network model first.
12. The training apparatus of claim 10 or 11, wherein the plurality of cascaded network modules comprises at least one super-network module, and each of the at least two training sets having different recognition difficulty levels comprises a plurality of training samples corresponding to the recognition difficulty level of the training set and real classifications thereof,
wherein the adjustment unit comprises:
a first acquisition subunit configured to, for each of the at least one super-network module and the training set corresponding thereto, in response to each of a plurality of training samples in the training set being input into the neural network model, acquire a predicted classification of the training sample output by the super-network module and a first computation duration; and
an adjustment subunit configured to adjust the parameters of the super-network module based on the real classification, the predicted classification and the first computation duration of each of the plurality of training samples in the training set.
13. The training apparatus of claim 12, wherein each super-network module comprises at least two sub-networks, the training apparatus further comprising:
a second determination unit configured to determine, for each of the at least one super-network module and by using a preset test unit, one of the at least two sub-networks of the super-network module as a selected sub-network, wherein, for any two super-network modules of the at least one super-network module, the selected sub-network of the super-network module closer to the input side of the neural network model is determined first; and
a third determination unit configured to determine a trained neural network model based on the selected sub-network of each of the at least one super-network module.
14. The training apparatus of claim 13, wherein the preset test unit comprises:
a second acquisition subunit configured to acquire a test set, wherein the test set comprises a plurality of test samples and real classifications thereof; and
a first determination subunit configured to, for each of the at least one super-network module,
in response to each of a plurality of test samples in the test set being input into the neural network model, acquire a predicted classification of the test sample output by each of the at least two sub-networks of the super-network module and a second computation duration corresponding to the sub-network, and
determine one of the at least two sub-networks of the super-network module as the selected sub-network based on the real classification, the predicted classification and the second computation duration of each of the plurality of test samples.
15. The training apparatus of claim 10, wherein the first acquisition unit comprises:
a third acquisition subunit configured to acquire a plurality of training samples;
a fourth acquisition subunit configured to, in response to each of the plurality of training samples being input into a trained grading model, acquire a predicted classification of the training sample and a confidence thereof output by the grading model;
a second determination subunit configured to determine a recognition difficulty level of each of the plurality of training samples by using at least one of the predicted classification and the confidence of the training sample; and
a construction subunit configured to construct a training set having a corresponding recognition difficulty level based on at least one training sample having the same recognition difficulty level.
16. An image recognition apparatus, comprising:
a neural network model trained according to the training method of any one of claims 1 to 6, wherein the neural network model comprises a plurality of cascaded network modules;
an input unit configured to input an image to be recognized into the neural network model,
wherein the neural network model is configured such that, in response to any one of the plurality of cascaded network modules receiving the image to be recognized or a feature map output by a previous network module, the network module outputs a predicted classification of the image to be recognized and a confidence thereof; and
a fourth determination unit configured to determine the predicted classification as a recognition result in response to the confidence being greater than a preset threshold.
17. The recognition apparatus of claim 16, wherein the fourth determination unit is further configured to:
for any one of the plurality of cascaded network modules that is not located at the output end of the neural network model, input the feature map output by the network module into a next network module in response to the confidence being less than the preset threshold.
18. A computer device, comprising:
a memory, a processor, and a computer program stored on the memory,
wherein the processor is configured to execute the computer program to implement the steps of the method of any one of claims 1 to 9.
19. A non-transitory computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 9.
20. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 9.
CN202110129925.6A 2021-01-29 2021-01-29 Training method and device of neural network model, and image recognition method and device Pending CN112784985A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110129925.6A CN112784985A (en) 2021-01-29 2021-01-29 Training method and device of neural network model, and image recognition method and device

Publications (1)

Publication Number Publication Date
CN112784985A 2021-05-11

Family

ID=75759985

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110129925.6A Pending CN112784985A (en) 2021-01-29 2021-01-29 Training method and device of neural network model, and image recognition method and device

Country Status (1)

Country Link
CN (1) CN112784985A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104077613A (en) * 2014-07-16 2014-10-01 电子科技大学 Crowd density estimation method based on cascaded multilevel convolution neural network
CN106096670A (en) * 2016-06-17 2016-11-09 北京市商汤科技开发有限公司 Concatenated convolutional neural metwork training and image detecting method, Apparatus and system
CN109583501A (en) * 2018-11-30 2019-04-05 广州市百果园信息技术有限公司 Picture classification, the generation method of Classification and Identification model, device, equipment and medium
WO2019085793A1 (en) * 2017-11-01 2019-05-09 腾讯科技(深圳)有限公司 Image classification method, computer device and computer readable storage medium
CN110288084A (en) * 2019-06-06 2019-09-27 北京小米智能科技有限公司 Super-network training method and device
CN110414570A (en) * 2019-07-04 2019-11-05 北京迈格威科技有限公司 Image classification model generating method, device, equipment and storage medium
CN111327608A (en) * 2020-02-14 2020-06-23 中南大学 Application layer malicious request detection method and system based on cascade deep neural network

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113642727A (en) * 2021-08-06 2021-11-12 北京百度网讯科技有限公司 Training method of neural network model and processing method and device of multimedia information
CN113642727B (en) * 2021-08-06 2024-05-28 北京百度网讯科技有限公司 Training method of neural network model and processing method and device of multimedia information
CN114067183A (en) * 2021-11-24 2022-02-18 北京百度网讯科技有限公司 Neural network model training method, image processing method, device and equipment
CN114067183B (en) * 2021-11-24 2022-10-28 北京百度网讯科技有限公司 Neural network model training method, image processing method, device and equipment
CN114637896A (en) * 2022-05-23 2022-06-17 杭州闪马智擎科技有限公司 Data auditing method and device, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination