CN109754089B - Model training system and method - Google Patents
- Publication number
- CN109754089B (application CN201811476117.1A)
- Authority
- CN
- China
- Prior art keywords
- model
- image
- initial
- recognition result
- training
- Prior art date
- Legal status
- Active
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a model training system and method, relating to machine learning technology in the field of artificial intelligence. A controller judges, for each first image, whether the difference between a first recognition result and a second recognition result of the first image meets a preset requirement, and if so, stores the first image into a training image set. When it is judged that the current model training time has arrived, the controller controls an initial strong model and a second initial weak model to obtain each second image in the training image set, receives a third recognition result output by the initial strong model for each second image in the training image set, trains the second initial weak model with the third recognition results as supervision information, and updates a first initial weak model. The embodiment of the invention improves the accuracy of the model through online learning. It also avoids the problems that the security of sample data transmitted over the network is threatened and that there is a risk of data leakage, and, because the model is updated without manual intervention, the model update efficiency is higher.
Description
Technical Field
The invention relates to the field of machine learning within artificial intelligence, and in particular to a model training system and method.
Background
With the rapid development of artificial intelligence technology in recent years, model training is applied to more and more fields, for example, in the field of image processing, a target classification model is trained in advance, and a target object in an image is classified by using the model.
In the prior art, there is a model training method that trains a model offline and then deploys the trained model to each front-end device in practical application. The problem with this method is that the generalization capability of the trained model is limited: when images that are not in the training set appear in a scene, an accurate result cannot be obtained.
In order to solve some of the problems of offline model training, the prior art also provides a combined offline-online model training method: samples whose posterior probability under the offline-trained initial model is smaller than a threshold are stored by category and transmitted to a remote server, where the received samples are manually labeled and fed back to the initial model; the model is trained again, and the classifier of the current device is then updated, thereby achieving continuous learning and updating of the model. This method has the following disadvantages: (1) Uncertain, difficult samples are selected according to the posterior probability predicted by the initial model; since the accuracy of the initial model is limited by the computing resources of the device, the posterior probability has low reliability, so the selected samples are not necessarily truly difficult samples. (2) The samples need to be transmitted to a server outside the system and labeled manually, so the security of the sample data transmitted over the network is threatened and there is a risk of data leakage. (3) Because the initial model can only be retrained after manual labeling, the update period of the initial model is limited by manual work, making the model update period long.
Disclosure of Invention
The embodiment of the invention provides a model training system and method, which are used to solve the problems of low model accuracy, risk of data leakage, and long model update period in the prior art.
The embodiment of the invention provides a model training system, which comprises: the system comprises front-end equipment, back-end equipment and storage equipment, wherein the front-end equipment comprises a first initial weak model, the back-end equipment comprises a second initial weak model, an initial strong model and a controller, and the storage equipment comprises a training image set;
the first initial weak model is used for outputting a first identification result corresponding to each received first image to the controller;
the initial strong model is used for outputting a second identification result corresponding to each received first image to the controller;
the controller is used for judging whether the difference between the first recognition result and the second recognition result of each first image meets a preset requirement or not, and if so, storing the first image into a training image set;
the controller is further configured to control the initial strong model and the second initial weak model to obtain each second image in the training image set when it is judged that the current model training time has arrived, and to train the second initial weak model by taking the third recognition result output by the initial strong model for each second image in the training image set as supervision information;
the controller is further configured to control the trained second initial weak model to be transmitted to the front-end device, and update the first initial weak model in the front-end device.
Further, the controller is specifically configured to determine that the model training time is currently reached when the number of second images in the training image set is recognized to reach a preset first number threshold.
Further, the initial strong model is further configured to output a fourth recognition result of each second image to the controller, and the second initial weak model outputs a fifth recognition result of each second image to the controller;
the controller is further configured to determine first similarities of fourth recognition results and fifth recognition results of each second image, sort the first similarities, select a preset second number of second images according to the first similarities from large to small, and remove the selected second images from the training image set.
Further, the storage device also comprises a test image set;
the controller is specifically configured to determine, for each first image, whether a difference between a first recognition result and a second recognition result of the first image meets a preset requirement, and if so, store the first image to a training image set with a preset first probability and store the first image to the test image set with a preset second probability.
Further, the controller is further configured to control the initial strong model and the second initial weak model to obtain each third image in the test image set;
the initial strong model is further configured to output a sixth recognition result of each third image to the controller, and the second initial weak model outputs a seventh recognition result of each third image to the controller;
the controller is further configured to determine the accuracy of the second initial weak model according to the sixth recognition result and the seventh recognition result of each third image, and when the accuracy reaches a preset accuracy threshold, it is determined that the training of the second initial weak model is completed.
Further, the controller is further configured to control the initial strong model and the second initial weak model to obtain each third image in the test image set when it is determined that the current model training time is reached;
the initial strong model is further configured to output an eighth recognition result of each third image to the controller, and the second initial weak model outputs a ninth recognition result of each third image to the controller;
the controller is further configured to determine second similarities of the eighth recognition result and the ninth recognition result of each third image, sort the second similarities, select a preset third number of third images in descending order of second similarity, and remove the selected third images from the test image set.
In another aspect, an embodiment of the present invention provides a model training method, where the method includes:
receiving a first recognition result corresponding to each first image output by a first initial weak model in front-end equipment and a second recognition result corresponding to each first image output by an initial strong model in back-end equipment;
for each first image, judging whether the difference between the first recognition result and the second recognition result of the first image meets the preset requirement, if so, saving the first image to a training image set;
when it is judged that the current model training time has arrived, controlling the initial strong model and a second initial weak model in back-end equipment to acquire each second image in the training image set, and training the second initial weak model by taking the third recognition result output by the initial strong model for each second image in the training image set as supervision information;
and controlling the trained second initial weak model to be transmitted to the front-end equipment, and updating the first initial weak model in the front-end equipment.
Further, the determining that the current model training time is reached includes:
and when the number of second images in the training image set reaches a preset first number threshold, determining that the current model training time has arrived.
Further, after it is judged that the current model training time has arrived and before the initial strong model and the second initial weak model in the back-end device are controlled to acquire each second image in the training image set, the method further includes:
controlling the initial strong model and a second initial weak model in back-end equipment, acquiring each second image in the training image set, and receiving a fourth recognition result of each second image output by the initial strong model and a fifth recognition result of each second image output by the second initial weak model;
and determining first similarity of the fourth recognition result and the fifth recognition result of each second image, sequencing each first similarity, selecting a preset second number of second images according to the first similarity from large to small, and removing the selected second images from the training image set.
Further, the saving the first image to the training image set comprises:
and storing the first image to a training image set according to a preset first probability and storing the first image to a test image set according to a preset second probability.
Further, the process of determining whether training of the second initial weak model is complete includes:
controlling the initial strong model and the second initial weak model to obtain each third image in the test image set;
receiving a sixth recognition result of each third image output by the initial strong model and a seventh recognition result of each third image output by the second initial weak model;
and determining the accuracy of the second initial weak model according to the sixth recognition result and the seventh recognition result of each third image, and determining that the training of the second initial weak model is finished when the accuracy reaches a preset accuracy threshold.
Further, after it is judged that the current model training time has arrived, the method further includes:
controlling the initial strong model and the second initial weak model to obtain each third image in the test image set;
receiving an eighth recognition result of each third image output by the initial strong model and a ninth recognition result of each third image output by the second initial weak model;
and determining second similarity of the eighth recognition result and the ninth recognition result of each third image, sequencing each second similarity, selecting a preset third number of third images according to the second similarities from large to small, and removing the selected third images from the test image set.
The embodiment of the invention provides a model training system and method, where the system includes front-end equipment, back-end equipment, and storage equipment; the front-end equipment includes a first initial weak model, the back-end equipment includes a second initial weak model, an initial strong model, and a controller, and the storage equipment includes a training image set. The first initial weak model is used to output a first recognition result corresponding to each received first image to the controller; the initial strong model is used to output a second recognition result corresponding to each received first image to the controller; the controller is used to judge, for each first image, whether the difference between the first recognition result and the second recognition result of the first image meets a preset requirement, and if so, to store the first image into the training image set. The controller is further configured to control the initial strong model and the second initial weak model to obtain each second image in the training image set when it is judged that the current model training time has arrived, and to train the second initial weak model by taking the third recognition result output by the initial strong model for each second image in the training image set as supervision information; the controller is further configured to control the trained second initial weak model to be transmitted to the front-end device, and to update the first initial weak model in the front-end device.
In the embodiment of the invention, an initial strong model with high accuracy is trained in an offline environment to serve as a teacher model, and a second initial weak model with a certain accuracy is trained in the offline environment to serve as a student model. In the online environment, the second initial weak model continues to learn from the teacher model based on images received online, and the trained second initial weak model is then used to update the first initial weak model in the front-end device, which solves the problem of the low accuracy of the first initial weak model on the front-end device and improves accuracy through online learning. Moreover, when the second initial weak model is trained, the results output by the initial strong model are used as supervision information, and images do not need to be transmitted outside the system for manual labeling, so the threat to the security of sample data transmitted over the network and the risk of data leakage are avoided; at the same time, the model is updated without manual intervention, so the model update efficiency is higher.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic structural diagram of a model training system provided in embodiment 1 of the present invention;
FIG. 2 is a schematic diagram of a model training system and process provided in embodiment 5 of the present invention;
fig. 3 is a schematic diagram of a model training process provided in embodiment 6 of the present invention.
Detailed Description
The present invention will be described in further detail below with reference to the accompanying drawings. It should be understood that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Example 1:
fig. 1 is a schematic structural diagram of a model training system provided in an embodiment of the present invention, where the system includes: a front-end device 11, a back-end device 12, and a storage device 13, where the front-end device 11 includes a first initial weak model 111, the back-end device 12 includes a second initial weak model 121, an initial strong model 122, and a controller 123, and the storage device 13 includes a training image set;
the first initial weak model 111 is configured to, for each received first image, output a first recognition result corresponding to the first image to the controller 123;
the initial strong model 122 is configured to, for each received first image, output a second recognition result corresponding to the first image to the controller 123;
the controller 123 is configured to determine, for each first image, whether a difference between a first recognition result and a second recognition result of the first image meets a preset requirement, and if so, store the first image into a training image set;
the controller 123 is further configured to control the initial strong model 122 and the second initial weak model 121 to obtain each second image in the training image set when it is judged that the current model training time has arrived, and to train the second initial weak model 121 by taking the third recognition result output by the initial strong model 122 for each second image in the training image set as supervision information;
the controller 123 is further configured to control the trained second initial weak model 121 to be transmitted to the front-end device 11, so as to update the first initial weak model 111 in the front-end device 11.
An initial model is trained in advance in the model training system. When the initial model is trained, a strong model and a weak model are trained using data from a variety of complex scenes. The strong model has higher computational complexity, occupies more storage space, and has higher accuracy; the weak model has lower computational complexity and occupies less storage space, but its accuracy is lower than that of the strong model. The strong model may be a directly trained single model, or a model group formed by combining a plurality of individually trained weak models. In the embodiment of the present invention, the initially trained weak model is configured on the front-end device as the first initial weak model 111 in the front-end device 11, and the initially trained weak model and strong model are configured on the back-end device 12 as the second initial weak model 121 and the initial strong model 122.
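As one illustration of the second option (a strong model assembled from several individually trained weak models), a minimal sketch follows; PyTorch, the class name EnsembleStrongModel, and probability averaging are assumptions chosen for illustration, not details fixed by the patent.

```python
# Minimal sketch: forming the "strong model" as an ensemble of individually
# trained weak models by averaging their class probabilities (an assumption
# for illustration only).
import torch
import torch.nn as nn

class EnsembleStrongModel(nn.Module):
    def __init__(self, weak_models):
        super().__init__()
        # Register the member weak models so they follow .to(device), .eval(), etc.
        self.weak_models = nn.ModuleList(weak_models)

    def forward(self, images):
        # Average the softmax outputs of all member weak models.
        probs = [m(images).softmax(dim=1) for m in self.weak_models]
        return torch.stack(probs, dim=0).mean(dim=0)
```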
The model training system also includes an image acquisition device for acquiring images. The image acquisition device may exist independently in the model training system, or may be integrated into the front-end device so that the front-end device itself can acquire images. In the embodiment of the invention, an image acquired by the front-end device is used as a first image. In addition, after the front-end device 11 acquires an image, it may preprocess the image, including but not limited to image decoding, image ISP processing, and the like.
The first image acquired by the front-end device 11 is input into the first initial weak model 111, and the first initial weak model 111 may output, for each received first image, a first recognition result corresponding to the first image, and then transmit the first recognition result corresponding to the first image to the controller 123. In addition, the first image acquired by the front-end device 11 is also input into the initial strong model 122, and the initial strong model 122 may output, for each received first image, a second recognition result corresponding to the first image, and then transmit the second recognition result corresponding to the first image to the controller 123.
The controller 123 determines, for each first image, whether the difference between the first recognition result and the second recognition result of the first image satisfies a preset requirement, where the preset requirement is that the difference between the first recognition result and the second recognition result is large. In the embodiment of the present invention, the recognition result may be, but is not limited to, a target object classification result, a target object detection result, or a target object segmentation result. If the recognition result is a target object classification result, the preset requirement may be that the classification categories of the first recognition result and the second recognition result are inconsistent; if the recognition result is a target object segmentation result, the preset requirement may be that the segmentation regions of the first recognition result and the second recognition result differ greatly, and so on.
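A minimal sketch of how such a difference check might be implemented for the two cases mentioned above, assuming classification labels and segmentation masks packaged in simple dictionaries; the dictionary layout and the IoU threshold are hypothetical.

```python
import numpy as np

def results_differ(first_result, second_result, iou_threshold=0.5):
    """Return True when the weak-model and strong-model outputs differ enough
    that the first image should be kept as a hard sample."""
    if first_result["task"] == "classification":
        # Inconsistent classification categories count as a large difference.
        return first_result["label"] != second_result["label"]
    if first_result["task"] == "segmentation":
        # A low overlap (IoU) between the segmentation masks counts as a large difference.
        a = np.asarray(first_result["mask"], dtype=bool)
        b = np.asarray(second_result["mask"], dtype=bool)
        union = (a | b).sum()
        iou = (a & b).sum() / union if union else 1.0
        return iou < iou_threshold
    raise ValueError(f"unsupported task type: {first_result['task']}")
```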
Moreover, in the embodiment of the present invention, the model training system may include at least two front-end devices 11, and the result output by the first initial weak model in each front-end device 11 may be different, for example, the model training system includes three front-end devices 11, and the first initial weak models in the three front-end devices 11 respectively output the target object classification result, the target object detection result, and the target object segmentation result.
The controller 123 stores each first image to the training image set when it is determined that a difference between the first recognition result and the second recognition result of the first image satisfies a predetermined requirement.
The timing of model training, for example 8 a.m. each day, may be preset in the controller 123. The controller 123 may be provided with a timer, and when it determines that the current time is the preset model training time, model training is started. Specifically, the initial strong model 122 and the second initial weak model 121 are first controlled to acquire each second image in the training image set; it should be noted that, in the embodiment of the present invention, the images in the training image set are referred to as second images. After acquiring each second image, the initial strong model 122 outputs a third recognition result of each second image to the controller 123, and the second initial weak model 121 likewise outputs its recognition result of each second image to the controller 123; the controller 123 trains the second initial weak model by taking the third recognition results output by the initial strong model 122 as supervision information. Specifically, the controller 123 may determine the accuracy of the second initial weak model 121 by comparing the third recognition result output by the initial strong model 122 for each second image with the recognition result output by the second initial weak model 121 for that image, and may change the accuracy of the second initial weak model 121 by adjusting its training parameters. When the accuracy of the second initial weak model 121 meets the requirement, for example reaching 90%, training of the second initial weak model 121 is determined to be complete.
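A minimal sketch of this teacher-supervised training step for a classification task, assuming PyTorch; the soft-label (knowledge-distillation) loss, the Adam optimizer, and the hyper-parameters are common choices used for illustration, not requirements of the patent.

```python
import torch
import torch.nn.functional as F

def train_second_weak_model(student, teacher, train_loader,
                            epochs=1, lr=1e-4, temperature=2.0):
    """Train the second initial weak model (student) using the initial strong
    model's (teacher's) outputs as supervision information."""
    teacher.eval()
    student.train()
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for images in train_loader:                  # second images from the training image set
            with torch.no_grad():
                teacher_logits = teacher(images)     # third recognition results (supervision)
            student_logits = student(images)
            # Match the student's softened distribution to the teacher's.
            loss = F.kl_div(
                F.log_softmax(student_logits / temperature, dim=1),
                F.softmax(teacher_logits / temperature, dim=1),
                reduction="batchmean",
            ) * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```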
After the training of the second initial weak model 121 is completed, the controller 123 controls the trained second initial weak model 121 to be transmitted to the front-end device 11, and updates the first initial weak model in the front-end device 11, so that the accuracy of the updated first initial weak model in the front-end device 11 is higher. In the subsequent application, the acquired image is directly input into the updated first initial weak model in the front-end device 11, so that an accurate recognition result can be output.
The embodiment of the invention discloses a model training system in Teacher-Student mode: an initial strong model with high accuracy is trained in an offline environment to serve as the Teacher model, and a second initial weak model with a certain accuracy is trained in the offline environment to serve as the Student model. In the online environment, the second initial weak (Student) model continues to learn from the Teacher model based on images received online, and the trained second initial weak model is then used to update the first initial weak model in the front-end device, which solves the problem of the low accuracy of the first initial weak model on the front-end device and improves accuracy through online learning. Moreover, when the second initial weak model is trained, the results output by the initial strong model are used as supervision information, and images do not need to be transmitted outside the system for manual labeling, so the threat to the security of sample data transmitted over the network and the risk of data leakage are avoided; at the same time, the model is updated without manual intervention, so the model update efficiency is higher.
In addition, in the embodiment of the invention, the initial strong model is trained based on complex and various scenes, and then the initial strong model teaches the initial weak model to learn online, so that the generalization capability of the initial weak model can be improved to a certain extent, and the overfitting problem generated after the initial weak model is continuously learned online can be prevented.
Example 2:
on the basis of the foregoing embodiment, in the embodiment of the present invention, the controller 123 is specifically configured to determine that the model training time is currently reached when the number of second images in the training image set is recognized to reach a preset first number threshold.
In order to ensure that the trained second initial weak model 121 has higher accuracy, in the embodiment of the present invention the controller 123 determines that the model training moment has arrived when the number of second images in the training image set reaches the preset first number threshold, and the second initial weak model 121 is then trained. The reason is as follows: the training image set initially contains the original training images, and the images that the controller 123 subsequently stores into the training image set are those for which the second recognition result output by the initial strong model 122 and the first recognition result differ greatly, that is, images that the second initial weak model 121 cannot recognize accurately. Only when the training image set contains a sufficiently large number of such images can it be guaranteed that training the second initial weak model 121 on the images in the training image set, with the third recognition results output by the initial strong model 122 as supervision information, is more accurate.
For example, if the number of original training images originally stored in the training image set is 1000, the preset first number threshold may be 1100, and when 1100 training images are stored in the training image set, it indicates that 100 training images in the training image set are difficult to be identified by the second initial weak model 121. The second initial weak model 121 is then trained.
In addition, the controller 123 may also determine that the model training moment has arrived when the proportion of newly added second images in the training image set reaches a preset ratio threshold.
For example, if the number of original training images originally stored in the training image set is 1000 and the preset ratio threshold is 10%, the model training time is reached when 1100 training images are stored in the training image set.
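Using the numbers from the two examples above, the check for the model training moment could look like the following sketch (the original count, first number threshold, and ratio threshold are the illustrative values from the text, not fixed by the patent).

```python
def training_moment_reached(train_set_size, original_count=1000,
                            first_number_threshold=1100, ratio_threshold=0.10):
    """Return True when either the absolute number of images in the training
    image set or the proportion of newly added images reaches its threshold."""
    added = train_set_size - original_count
    return (train_set_size >= first_number_threshold
            or added / original_count >= ratio_threshold)
```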
Example 3:
in order to avoid the problem of long training time and reduce the redundancy of the training image set, on the basis of the above embodiments, in the embodiment of the present invention, the initial strong model 122 is further configured to output the fourth recognition result of each second image to the controller 123, and the second initial weak model 121 outputs the fifth recognition result of each second image to the controller 123;
the controller 123 is further configured to determine first similarities of the fourth recognition result and the fifth recognition result of each second image, sort each first similarity, select a preset second number of second images according to the first similarities from large to small, and remove the selected second images from the training image set.
In the embodiment of the present invention, when the controller 123 determines that the current model training time has arrived, the initial strong model 122 and the second initial weak model 121 are controlled to acquire each second image in the training image set. After acquiring each second image, the initial strong model 122 may output the fourth recognition result of each second image to the controller 123, and the second initial weak model 121 may output the fifth recognition result of each second image to the controller 123.
The controller 123 determines the first similarity of the fourth recognition result and the fifth recognition result of each second image, and then ranks the first similarities, which may be sorted from large to small or from small to large. After sorting, a preset second number of second images are selected in descending order of first similarity, and the selected second images are removed from the training image set. The preset second number may be 50, 80, 100, etc.
In the embodiment of the present invention, the controller 123 selects the preset second number of second images in descending order of first similarity; the recognition results corresponding to the selected images are highly similar, so these images can be considered to contribute very little to training the second initial weak model, and as the number of images in the training image set grows the training time becomes too long. The selected second images are therefore removed from the training image set, which also reduces the redundancy of the training image set.
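A minimal sketch of this pruning step; the similarity function, the in-memory list representation of the training image set, and the default second number are assumptions for illustration.

```python
def prune_training_set(train_images, fourth_results, fifth_results,
                       similarity_fn, second_number=100):
    """Remove the preset second number of images whose strong-model (fourth)
    and weak-model (fifth) recognition results are most similar."""
    # First similarity of the two recognition results for every second image.
    similarities = [(similarity_fn(fourth_results[i], fifth_results[i]), i)
                    for i in range(len(train_images))]
    similarities.sort(key=lambda s: s[0], reverse=True)   # largest similarity first
    to_remove = {i for _, i in similarities[:second_number]}
    return [img for i, img in enumerate(train_images) if i not in to_remove]
```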
Example 4:
on the basis of the above embodiments, in the embodiment of the present invention, the storage device 13 further includes a test image set;
the controller 123 is specifically configured to determine, for each first image, whether a difference between a first recognition result and a second recognition result of the first image meets a preset requirement, and if so, save the first image to a training image set with a preset first probability, and save the first image to the test image set with a preset second probability.
The controller 123 is further configured to control the initial strong model 122 and the second initial weak model 121 to obtain each third image in the test image set;
the initial strong model 122 is further configured to output a sixth recognition result of each third image to the controller 123, and the second initial weak model 121 outputs a seventh recognition result of each third image to the controller 123;
the controller 123 is further configured to determine an accuracy of the second initial weak model according to the sixth recognition result and the seventh recognition result of each third image, and when the accuracy reaches a preset accuracy threshold, it is determined that the training of the second initial weak model 121 is completed.
The storage device 13 also includes a test image set, which is used to verify whether training of the second initial weak model is complete. In the embodiment of the present invention, when the controller 123 determines, for a first image, that the difference between the first recognition result and the second recognition result of the first image satisfies the preset requirement, the first image is saved to the training image set with a preset first probability P1 and to the test image set with a preset second probability P2. The preset first probability P1 may be 20%, and the preset second probability P2 may be 60%. Preferably, the sum of the preset first probability P1 and the preset second probability P2 is 1; for example, the preset first probability P1 may be 30% and the preset second probability P2 may be 70%.
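A minimal sketch of this routing decision for the preferred case in which P1 + P2 = 1 (using the 30%/70% values from the example); how the image sets are stored is an assumption for illustration.

```python
import random

def store_hard_sample(image, training_set, test_set, p_train=0.3, p_test=0.7):
    """Save a hard first image to the training set with probability P1 and to
    the test set with probability P2, assuming P1 + P2 = 1 as in the preferred
    configuration described in the text."""
    assert abs(p_train + p_test - 1.0) < 1e-9, "sketch assumes P1 + P2 = 1"
    if random.random() < p_train:
        training_set.append(image)
    else:
        test_set.append(image)
```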
The process of using the test image set to verify whether the second initial weak model is trained is described below.
When the accuracy of the second initial weak model on the second images in the training image set reaches the required level, for example 90%, it is then checked against the test image set whether training of the second initial weak model is complete.
Specifically, the controller 123 controls the initial strong model 122 and the second initial weak model 121 to acquire each third image in the test image set. It should be noted that, in the embodiment of the present invention, the images in the test image set are referred to as third images. After acquiring each third image, the initial strong model 122 outputs a sixth recognition result of each third image to the controller 123, and the second initial weak model 121 likewise outputs a seventh recognition result of each third image to the controller 123. The controller 123 may determine the accuracy of the second initial weak model 121 by comparing the sixth recognition result output by the initial strong model 122 for each third image with the seventh recognition result output by the second initial weak model 121 for that image. Specifically, the controller 123 may determine the similarity between the sixth recognition result and the seventh recognition result of each image; when the similarity reaches a certain threshold, the output of the second initial weak model 121 is considered correct, otherwise it is considered incorrect, and the accuracy of the second initial weak model 121 can thus be determined. When the accuracy of the second initial weak model 121 reaches a preset accuracy threshold, training of the second initial weak model 121 is determined to be complete. The preset accuracy threshold may be 95%, 98%, etc.
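A minimal sketch of the test-set check; the similarity function, the per-image similarity threshold, and the way model inference is invoked are assumptions, and the accuracy threshold uses one of the example values from the text.

```python
def training_complete(student_infer, teacher_infer, test_images, similarity_fn,
                      similarity_threshold=0.9, accuracy_threshold=0.95):
    """Compare the teacher's (sixth) and student's (seventh) recognition results
    on each third image; count the student as correct when they are similar
    enough, and report whether the accuracy reaches the preset threshold."""
    if not test_images:
        return False
    correct = sum(
        1 for image in test_images
        if similarity_fn(teacher_infer(image), student_infer(image)) >= similarity_threshold
    )
    return correct / len(test_images) >= accuracy_threshold
```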
Example 5:
in order to avoid the problem of long testing time and reduce the redundancy of the test image set, on the basis of the foregoing embodiments, in an embodiment of the present invention, the controller 123 is further configured to control the initial strong model 122 and the second initial weak model 121 to obtain each third image in the test image set when it is determined that the current model training time is reached;
the initial strong model 122 is further configured to output an eighth recognition result of each third image to the controller 123, and the second initial weak model 121 outputs a ninth recognition result of each third image to the controller 123;
the controller 123 is further configured to determine second similarities of the eighth recognition result and the ninth recognition result of each third image, sort the second similarities, select a preset third number of third images in descending order of second similarity, and remove the selected third images from the test image set.
In the embodiment of the present invention, when the controller 123 determines that the current model training time has arrived, the initial strong model 122 and the second initial weak model 121 are controlled to acquire each third image in the test image set. After acquiring each third image, the initial strong model 122 may output an eighth recognition result of each third image to the controller 123, and the second initial weak model 121 may output a ninth recognition result of each third image to the controller 123.
The controller 123 determines the second similarity of the eighth recognition result and the ninth recognition result of each third image, and then ranks the second similarities, which may be sorted from large to small or from small to large. After sorting, a preset third number of third images are selected in descending order of second similarity, and the selected third images are removed from the test image set. The preset third number may be the same as or different from the preset second number, and may be 60, 80, 100, etc.
In the embodiment of the present invention, the controller 123 selects the preset third number of third images in descending order of second similarity; the recognition results corresponding to the selected images are highly similar, so these images can be considered to contribute very little to testing the second initial weak model, and as the number of images in the test image set grows the test time becomes too long. The selected third images are therefore removed from the test image set, which also reduces the redundancy of the test image set.
Fig. 2 is a schematic diagram of a model training system and process according to an embodiment of the present invention. As shown in fig. 2, a first image acquired by the front-end device is input to the first initial weak model, and the first initial weak model outputs a first recognition result to the controller in the back-end device. The first image acquired by the front-end device is also input into the initial strong model in the back-end device, and the initial strong model outputs a second recognition result to the controller. The controller judges whether the difference between the first recognition result and the second recognition result of the first image meets the preset requirement; if so, the first image is stored in the test image set or the training image set, and if not, the first image is discarded, after which the training image set and the test image set are updated. The controller then judges whether model training should currently be executed; if so, the second images in the training image set are loaded and the second initial weak model is trained. The third images in the test image set are then loaded and the second initial weak model is checked. Whether the second initial weak model reaches the preset index is judged: if not, the second initial weak model is discarded; if so, the second initial weak model is transmitted to the front-end device, and the first initial weak model in the front-end device is updated online.
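Putting the pieces together, the controller flow of fig. 2 could be summarised as in the sketch below; every helper used here (results_differ, store_hard_sample, training_moment_reached, train_second_weak_model, training_complete) refers to the hypothetical sketches given in the embodiments above, and make_loader and front_end_device.update_model are likewise assumed interfaces, not APIs defined by the patent.

```python
def on_first_image(image, first_weak_model, initial_strong_model, training_set, test_set):
    """Per-image path of fig. 2: keep hard samples, discard the rest."""
    first_result = first_weak_model(image)        # from the front-end device
    second_result = initial_strong_model(image)   # from the back-end device
    if results_differ(first_result, second_result):
        store_hard_sample(image, training_set, test_set)
    # images whose results agree are discarded

def on_training_moment(second_weak_model, initial_strong_model, training_set, test_set,
                       front_end_device, make_loader, similarity_fn):
    """Training path of fig. 2: retrain, verify, then update the front end online."""
    if not training_moment_reached(len(training_set)):
        return
    train_second_weak_model(second_weak_model, initial_strong_model, make_loader(training_set))
    if training_complete(second_weak_model, initial_strong_model, test_set, similarity_fn):
        front_end_device.update_model(second_weak_model)  # replaces the first initial weak model
    # otherwise the retrained second initial weak model is discarded, per fig. 2
```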
Example 6:
fig. 3 is a schematic diagram of a model training process provided in the embodiment of the present invention, where the process includes the following steps:
s101: receiving a first recognition result corresponding to each first image output by a first initial weak model in the front-end equipment, and receiving a second recognition result corresponding to each first image output by an initial strong model in the back-end equipment.
S102: and judging whether the difference between the first recognition result and the second recognition result of each first image meets the preset requirement, and if so, saving the first images to a training image set.
S103: and when the current model training time is judged, controlling the initial strong model and a second initial weak model in the back-end equipment, acquiring each second image in the training image set, outputting a third recognition result of each second image in the training image set by using the initial strong model as supervision information, and training the second initial weak model.
S104: and controlling the trained second initial weak model to be transmitted to the front-end equipment, and updating the first initial weak model in the front-end equipment.
Before model training, an initial model is trained in advance: a strong model and a weak model are trained using data from a variety of complex scenes. The strong model has higher computational complexity, occupies more storage space, and has higher accuracy; the weak model has lower computational complexity and occupies less storage space, but its accuracy is lower than that of the strong model. The strong model may be a directly trained single model, or a model group formed by combining a plurality of individually trained weak models. In the embodiment of the invention, the initially trained weak model is configured on the front-end device as the first initial weak model, and the initially trained weak model and strong model are configured on the back-end device as the second initial weak model and the initial strong model.
The model training system also includes an image acquisition device for acquiring images; it may exist independently in the model training system, or may be integrated into the front-end device so that the front-end device itself can acquire images. In the embodiment of the invention, an image acquired by the front-end device is used as a first image. In addition, after the front-end device acquires an image, it may preprocess the image, including but not limited to image decoding, image ISP processing, and the like.
The first images acquired by the front-end equipment are input into a first initial weak model, the first initial weak model can output a first recognition result corresponding to each received first image, and then the first recognition result corresponding to each first image is transmitted to the controller. In addition, the first image acquired by the front-end device is also input into the initial strong model, and the initial strong model can output a second recognition result corresponding to each received first image, and then transmit the second recognition result corresponding to the first image to the controller.
The controller judges whether the difference between the first recognition result and the second recognition result of each first image meets a preset requirement, wherein the preset requirement needs to enable the difference between the first recognition result and the second recognition result to be large. Also, in the embodiment of the present invention, the recognition result may be, but is not limited to, a target object classification result, a target object detection result, and a target object segmentation result. If the recognition result is the target object classification result, the preset requirement may be that the classification categories of the first recognition result and the second recognition result are inconsistent. If the recognition result is the target object segmentation result, the preset requirement may be that the segmentation areas of the first recognition result and the second recognition result are different greatly, and the like.
Furthermore, in the embodiment of the present invention, the model training system may include at least two front-end devices, and the result output by the first initial weak model in each front-end device may be different, for example, the model training system includes three front-end devices, and the first initial weak models in the three front-end devices respectively output the target object classification result, the target object detection result, and the target object segmentation result.
The controller is used for saving each first image to a training image set when judging that the difference between the first recognition result and the second recognition result of the first image meets the preset requirement.
The timing of model training, for example 8 a.m. each day, may be preset in the controller. The controller may be provided with a timer, and when it determines that the current time is the preset model training time, model training is started. Specifically, the initial strong model and the second initial weak model are first controlled to acquire each second image in the training image set. After acquiring each second image, the initial strong model outputs a third recognition result of each second image to the controller, and the second initial weak model likewise outputs its recognition result of each second image to the controller; the controller trains the second initial weak model by taking the third recognition results output by the initial strong model as supervision information. Specifically, the controller may determine the accuracy of the second initial weak model by comparing the third recognition result output by the initial strong model for each second image with the recognition result output by the second initial weak model for that image, and may change the accuracy of the second initial weak model by adjusting its training parameters. When the accuracy of the second initial weak model meets the requirement, for example reaching 90%, training of the second initial weak model is determined to be complete.
After the second initial weak model training is completed, the controller controls the trained second initial weak model to be transmitted to the front-end equipment, and the first initial weak model in the front-end equipment is updated, so that the accuracy of the updated first initial weak model in the front-end equipment is higher. And when the image recognition method is applied subsequently, the acquired image is directly input into the updated first initial weak model in the front-end equipment, so that an accurate recognition result can be output.
The embodiment of the invention discloses a model training system in Teacher-Student mode: an initial strong model with high accuracy is trained in an offline environment to serve as the Teacher model, and a second initial weak model with a certain accuracy is trained in the offline environment to serve as the Student model. In the online environment, the second initial weak (Student) model continues to learn from the Teacher model based on images received online, and the trained second initial weak model is then used to update the first initial weak model in the front-end device, which solves the problem of the low accuracy of the first initial weak model on the front-end device and improves accuracy through online learning. Moreover, when the second initial weak model is trained, the results output by the initial strong model are used as supervision information, and images do not need to be transmitted outside the system for manual labeling, so the threat to the security of sample data transmitted over the network and the risk of data leakage are avoided; at the same time, the model is updated without manual intervention, so the model update efficiency is higher.
In addition, in the embodiment of the invention, the initial strong model is trained based on complex and various scenes, and then the initial strong model teaches the initial weak model to learn online, so that the generalization capability of the initial weak model can be improved to a certain extent, and the overfitting problem generated after the initial weak model is continuously learned online can be prevented.
Example 7:
on the basis of the foregoing embodiment, in the embodiment of the present invention, the determining that the current model training time arrives includes:
and when the number of second images in the training image set reaches a preset first number threshold, determining that the current model training time has arrived.
In order to ensure that the trained second initial weak model has higher accuracy, in the embodiment of the invention the controller determines that the model training moment has arrived when the number of second images in the training image set reaches the preset first number threshold, and the second initial weak model is then trained. The reason is as follows: the training image set initially contains the original training images, and the images that the controller subsequently stores into the training image set are those for which the second recognition result output by the initial strong model and the first recognition result differ greatly, that is, images that the second initial weak model cannot recognize accurately. Only when the training image set contains a sufficiently large number of such images can it be guaranteed that training the second initial weak model on the images in the training image set, with the third recognition results output by the initial strong model as supervision information, is more accurate.
For example, if the number of original training images originally stored in the training image set is 1000, the preset first number threshold may be 1100, and when 1100 training images are stored in the training image set, it indicates that 100 training images in the training image set are difficult to recognize by the second initial weak model. At this point, the second initial weak model is trained.
In addition, the controller may also determine that the model training moment has arrived when the proportion of newly added second images in the training image set reaches a preset ratio threshold.
For example, if the number of original training images originally stored in the training image set is 1000 and the preset ratio threshold is 10%, the model training time is reached when 1100 training images are stored in the training image set.
Example 8:
In order to avoid the problem of long training time and reduce the redundancy of the training image set, on the basis of the foregoing embodiments, in an embodiment of the present invention, after it is judged that the current model training time has arrived and before the initial strong model and the second initial weak model in the back-end device are controlled to acquire each second image in the training image set, the method further includes:
controlling the initial strong model and a second initial weak model in back-end equipment, acquiring each second image in the training image set, and receiving a fourth recognition result of each second image output by the initial strong model and a fifth recognition result of each second image output by the second initial weak model;
and determining first similarity of the fourth recognition result and the fifth recognition result of each second image, sequencing each first similarity, selecting a preset second number of second images according to the first similarity from large to small, and removing the selected second images from the training image set.
In the embodiment of the invention, when the controller determines that the current model training time has arrived, the initial strong model and the second initial weak model are controlled to acquire each second image in the training image set. After acquiring each second image, the initial strong model may output the fourth recognition result of each second image to the controller, and the second initial weak model may output the fifth recognition result of each second image to the controller.
The controller determines the first similarity of the fourth recognition result and the fifth recognition result of each second image, and then ranks the first similarities, which may be sorted from large to small or from small to large. After sorting, a preset second number of second images are selected in descending order of first similarity, and the selected second images are removed from the training image set. The preset second number may be 50, 80, 100, etc.
In the embodiment of the invention, the controller selects the preset second number of second images in descending order of first similarity; the recognition results corresponding to the selected images are highly similar, so these images can be considered to contribute very little to training the second initial weak model, and as the number of images in the training image set grows the training time becomes too long. The selected second images are therefore removed from the training image set, which also reduces the redundancy of the training image set.
Example 9:
on the basis of the above embodiments, in an embodiment of the present invention, the saving the first image to the training image set includes:
and storing the first image to a training image set according to a preset first probability and storing the first image to a test image set according to a preset second probability.
The process of judging whether the second initial weak model is trained comprises the following steps:
controlling the initial strong model and the second initial weak model to obtain each third image in the test image set;
receiving a sixth recognition result of each third image output by the initial strong model and a seventh recognition result of each third image output by the second initial weak model;
and determining the accuracy of the second initial weak model according to the sixth recognition result and the seventh recognition result of each third image, and determining that the training of the second initial weak model is finished when the accuracy reaches a preset accuracy threshold.
The storage device also includes a test image set, which is used to verify whether training of the second initial weak model is complete. In the embodiment of the present invention, when the controller determines, for a first image, that the difference between the first recognition result and the second recognition result of the first image satisfies the preset requirement, it saves the first image to the training image set with a preset first probability P1 and to the test image set with a preset second probability P2. The preset first probability P1 may be 20%, and the preset second probability P2 may be 60%. Preferably, the sum of the preset first probability P1 and the preset second probability P2 is 1; for example, the preset first probability P1 may be 30% and the preset second probability P2 may be 70%.
The process of using the test image set to verify whether the second initial weak model is trained is described below.
When the accuracy of the second initial weak model on the second images in the training image set reaches the required level, for example 90%, it is then checked against the test image set whether training of the second initial weak model is complete.
Specifically, the controller controls the initial strong model and the second initial weak model to acquire each third image in the test image set. It should be noted that, in the embodiment of the present invention, the images in the test image set are referred to as third images. The controller can determine the accuracy of the second initial weak model by comparing the sixth recognition result output by the initial strong model for each third image with the seventh recognition result output by the second initial weak model for that image. Specifically, the controller may determine the similarity between the sixth recognition result and the seventh recognition result of each image; when the similarity reaches a certain threshold, the output of the second initial weak model is considered correct, otherwise it is considered incorrect, and the accuracy of the second initial weak model can thus be determined. When the accuracy of the second initial weak model reaches a preset accuracy threshold, training of the second initial weak model is determined to be complete. The preset accuracy threshold may be 95%, 98%, etc.
Example 10:
in order to avoid excessively long testing time and reduce the redundancy of the test image set, on the basis of the foregoing embodiments, in an embodiment of the present invention, after it is determined that the current model training time is reached, the method further includes:
controlling the initial strong model and the second initial weak model to obtain each third image in the test image set;
receiving an eighth recognition result of each third image output by the initial strong model and a ninth recognition result of each third image output by the second initial weak model;
determining a second similarity between the eighth recognition result and the ninth recognition result of each third image, sorting the second similarities, selecting a preset third number of third images according to the second similarities from large to small, and removing the selected third images from the test image set.
In the embodiment of the invention, when the controller determines that the current model training time is reached, the initial strong model and the second initial weak model are controlled to obtain each third image in the test image set. After the initial strong model acquires each third image, it may output an eighth recognition result of each third image to the controller, and after the second initial weak model acquires each third image, it may output a ninth recognition result of each third image to the controller.
The controller determines the second similarity between the eighth recognition result and the ninth recognition result of each third image and then sorts the second similarities; the second similarities may be sorted from large to small or from small to large. After sorting, a preset third number of third images are selected according to the second similarities from large to small, and the selected third images are removed from the test image set. The preset third number may be the same as or different from the preset second number, and may be, for example, 60, 80, 100, and so on.
In the embodiment of the invention, the controller selects a preset third number of third images according to the second similarities from large to small. For the selected images, the corresponding recognition results are highly similar, so these images contribute little to testing the second initial weak model; moreover, as the number of images in the test image set grows, the testing time becomes too long. The selected third images are therefore removed from the test image set, which also reduces the redundancy of the test image set.
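A minimal sketch of this pruning step is given below, assuming the same cosine-style similarity as in the previous sketch and an illustrative preset third number of 80; neither choice is fixed by the embodiment.

```python
import numpy as np

def second_similarity(a, b):
    """Assumed similarity measure between the eighth and ninth recognition results."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def prune_test_set(strong_model, weak_model, test_images, preset_third_number=80):
    """Remove the preset_third_number third images whose recognition results
    from the two models agree most strongly, i.e. the least informative ones."""
    scored = []
    for index, third_image in enumerate(test_images):
        eighth = strong_model(third_image)
        ninth = weak_model(third_image)
        scored.append((second_similarity(eighth, ninth), index))
    scored.sort(key=lambda pair: pair[0], reverse=True)  # from large to small
    to_drop = {index for _, index in scored[:preset_third_number]}
    return [img for i, img in enumerate(test_images) if i not in to_drop]
```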
The embodiment of the invention provides a model training system and a method. The system comprises front-end equipment, back-end equipment and storage equipment, wherein the front-end equipment comprises a first initial weak model, the back-end equipment comprises a second initial weak model, an initial strong model and a controller, and the storage equipment comprises a training image set. The first initial weak model is used for outputting a first recognition result corresponding to each received first image to the controller; the initial strong model is used for outputting a second recognition result corresponding to each received first image to the controller; the controller is used for judging whether the difference between the first recognition result and the second recognition result of each first image meets a preset requirement, and if so, storing the first image into the training image set; the controller is further configured to control the initial strong model and the second initial weak model to obtain each second image in the training image set when the current model training time is judged to be reached, take the third recognition result of each second image in the training image set output by the initial strong model as supervision information, and train the second initial weak model; and the controller is further configured to control the trained second initial weak model to be transmitted to the front-end device, and update the first initial weak model in the front-end device.
In the embodiment of the invention, an initial strong model with high accuracy is trained in an offline environment to serve as a teacher model, and a second initial weak model with a certain accuracy is trained in the offline environment to serve as a student model. In the online environment, the second initial weak model continues to learn from the teacher model based on the images received online, and the trained second initial weak model is then used to update the first initial weak model in the front-end equipment, so that the problem of low accuracy of the first initial weak model in the front-end equipment is solved and the accuracy is improved in an online learning mode. When the second initial weak model is trained, the results output by the initial strong model are used as supervision information, and the images do not need to be transmitted outside the system for manual labeling, so that the problems that the security of sample data transmission in the network is threatened and that there is a risk of data leakage are avoided; meanwhile, the model is updated without depending on manual intervention, so that the model updating efficiency is higher.
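As a non-authoritative illustration of how the online teacher-student update could be carried out, the sketch below uses a soft-label (knowledge-distillation) loss in PyTorch; the framework, the KL-divergence loss, and the temperature value are assumptions, since the embodiment only requires that the initial strong model's recognition results serve as supervision information for training the second initial weak model.

```python
import torch
import torch.nn.functional as F

def online_distillation_step(weak_model, strong_model, image_batch, optimizer,
                             temperature=2.0):
    """One training step: the strong (teacher) model's outputs on a batch of
    second images supervise the second initial weak (student) model, so no
    manually labelled data has to leave the system."""
    strong_model.eval()
    weak_model.train()
    with torch.no_grad():
        teacher_logits = strong_model(image_batch)  # third recognition results
    student_logits = weak_model(image_batch)
    # Soft-label distillation loss; this particular loss is an assumption.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```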
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A model training system, the system comprising: the system comprises front-end equipment, back-end equipment and storage equipment, wherein the front-end equipment comprises a first initial weak model, the back-end equipment comprises a second initial weak model, an initial strong model and a controller, and the storage equipment comprises a training image set;
the first initial weak model is used for outputting a first recognition result corresponding to each received first image to the controller;
the initial strong model is used for outputting a second recognition result corresponding to each received first image to the controller;
the controller is used for judging whether the difference between the first recognition result and the second recognition result of each first image meets a preset requirement or not, and if so, storing the first image into a training image set; if the recognition result is the target object classification result, the preset requirement comprises that the classification categories of the first recognition result and the second recognition result are inconsistent;
the controller is further configured to control the initial strong model and the second initial weak model to obtain each second image in the training image set when the current model training time is judged to be reached, take the third recognition result of each second image in the training image set output by the initial strong model as supervision information, and train the second initial weak model;
the controller is further configured to control the trained second initial weak model to be transmitted to the front-end device, and update the first initial weak model in the front-end device;
the storage device also comprises a test image set;
the controller is specifically configured to determine, for each first image, whether a difference between a first recognition result and a second recognition result of the first image meets a preset requirement, and if so, store the first image to a training image set with a preset first probability and store the first image to the test image set with a preset second probability.
2. The system of claim 1, wherein the controller is specifically configured to determine that the model training time is currently reached when the number of second images in the training image set reaches a preset first number threshold.
3. The system of claim 1, wherein the initial strong model is further configured to output a fourth recognition result of each second image to the controller, and the second initial weak model outputs a fifth recognition result of each second image to the controller;
the controller is further configured to determine first similarities of fourth recognition results and fifth recognition results of each second image, sort the first similarities, select a preset second number of second images according to the first similarities from large to small, and remove the selected second images from the training image set.
4. The system of claim 1, wherein the controller is further configured to control the initial strong model and the second initial weak model to obtain each third image in the set of test images;
the initial strong model is further configured to output a sixth recognition result of each third image to the controller, and the second initial weak model outputs a seventh recognition result of each third image to the controller;
the controller is further configured to determine the accuracy of the second initial weak model according to the sixth recognition result and the seventh recognition result of each third image, and when the accuracy reaches a preset accuracy threshold, it is determined that the training of the second initial weak model is completed.
5. The system of claim 1, wherein the controller is further configured to control the initial strong model and the second initial weak model to obtain each third image in the set of test images when it is determined that a model training time is currently reached;
the initial strong model is further configured to output an eighth recognition result of each third image to the controller, and the second initial weak model outputs a ninth recognition result of each third image to the controller;
the controller is further configured to determine second similarities of the eighth recognition result and the ninth recognition result of each third image, sort the second similarities, select a preset third number of third images according to the second similarities from large to small, and remove the selected third images from the test image set.
6. A method of model training, the method comprising:
receiving a first recognition result corresponding to each first image output by a first initial weak model in front-end equipment and a second recognition result corresponding to each first image output by an initial strong model in back-end equipment;
for each first image, judging whether the difference between the first recognition result and the second recognition result of the first image meets the preset requirement, if so, saving the first image to a training image set; if the recognition result is the target object classification result, the preset requirement comprises that the classification categories of the first recognition result and the second recognition result are inconsistent;
when it is judged that the current model training time is reached, controlling the initial strong model and a second initial weak model in back-end equipment to acquire each second image in the training image set, taking the third recognition result of each second image in the training image set output by the initial strong model as supervision information, and training the second initial weak model;
controlling the trained second initial weak model to be transmitted to the front-end equipment, and updating the first initial weak model in the front-end equipment;
the saving the first image to the training image set comprises:
storing the first image to a training image set according to a preset first probability, and storing the first image to a test image set according to a preset second probability.
7. The method of claim 6, wherein the judging that the current model training time is reached comprises:
when the number of the second images in the training image set reaches a preset first number threshold, determining that the current model training time is reached.
8. The method of claim 6, wherein after determining that the current model training time is reached, controlling the initial strong model and a second initial weak model in a back-end device, and before acquiring each second image in the training image set, the method further comprises:
controlling the initial strong model and a second initial weak model in back-end equipment, acquiring each second image in the training image set, and receiving a fourth recognition result of each second image output by the initial strong model and a fifth recognition result of each second image output by the second initial weak model;
determining first similarities of the fourth recognition result and the fifth recognition result of each second image, sorting the first similarities, selecting a preset second number of second images according to the first similarities from large to small, and removing the selected second images from the training image set.
9. The method of claim 6, wherein determining whether the training of the second initial weak model is completed comprises:
controlling the initial strong model and the second initial weak model to obtain each third image in the test image set;
receiving a sixth recognition result of each third image output by the initial strong model and a seventh recognition result of each third image output by the second initial weak model;
determining the accuracy of the second initial weak model according to the sixth recognition result and the seventh recognition result of each third image, and determining that the training of the second initial weak model is finished when the accuracy reaches a preset accuracy threshold.
10. The method of claim 6, wherein after determining that the current model training time is reached, the method further comprises:
controlling the initial strong model and the second initial weak model to obtain each third image in the test image set;
receiving an eighth recognition result of each third image output by the initial strong model and a ninth recognition result of each third image output by the second initial weak model;
determining a second similarity between the eighth recognition result and the ninth recognition result of each third image, sorting the second similarities, selecting a preset third number of third images according to the second similarities from large to small, and removing the selected third images from the test image set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811476117.1A CN109754089B (en) | 2018-12-04 | 2018-12-04 | Model training system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109754089A CN109754089A (en) | 2019-05-14 |
CN109754089B true CN109754089B (en) | 2021-07-20 |
Family
ID=66402632
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811476117.1A Active CN109754089B (en) | 2018-12-04 | 2018-12-04 | Model training system and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109754089B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111368997B (en) * | 2020-03-04 | 2022-09-06 | 支付宝(杭州)信息技术有限公司 | Training method and device of neural network model |
CN113537483A (en) * | 2020-04-14 | 2021-10-22 | 杭州海康威视数字技术股份有限公司 | Domain adaptation method and device and electronic equipment |
US11847544B2 (en) | 2020-07-21 | 2023-12-19 | International Business Machines Corporation | Preventing data leakage in automated machine learning |
CN113115060B (en) * | 2021-04-07 | 2022-10-25 | 中国工商银行股份有限公司 | Video transmission method, device and system |
CN113595800B (en) * | 2021-08-03 | 2022-07-05 | 腾云悦智科技(深圳)有限责任公司 | Method for automatically discovering application connection relation and preserving CMDB information |
CN113723616B (en) * | 2021-08-17 | 2024-08-06 | 上海智能网联汽车技术中心有限公司 | Multi-sensor information semi-automatic labeling method, system and storage medium |
CN115393652B (en) * | 2022-09-20 | 2023-07-25 | 北京国电通网络技术有限公司 | Artificial intelligence model updating method, identification method and equipment based on countermeasure network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105404901A (en) * | 2015-12-24 | 2016-03-16 | 上海玮舟微电子科技有限公司 | Training method of classifier, image detection method and respective system |
CN106548190A (en) * | 2015-09-18 | 2017-03-29 | 三星电子株式会社 | Model training method and equipment and data identification method |
CN107305636A (en) * | 2016-04-22 | 2017-10-31 | 株式会社日立制作所 | Target identification method, Target Identification Unit, terminal device and target identification system |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9830526B1 (en) * | 2016-05-26 | 2017-11-28 | Adobe Systems Incorporated | Generating image features based on robust feature-learning |
CN108805185B (en) * | 2018-05-29 | 2023-06-30 | 腾讯科技(深圳)有限公司 | Face recognition method and device, storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||