CN114663437A - Deep learning model deployment method, equipment and medium - Google Patents

Deep learning model deployment method, equipment and medium

Info

Publication number
CN114663437A
Authority
CN
China
Prior art keywords
model
training
deep learning
description file
deployment
Prior art date
Legal status
Pending
Application number
CN202210572773.1A
Other languages
Chinese (zh)
Inventor
强伟
余章卫
Current Assignee
Suzhou Zhongke Xingzhi Intelligent Technology Co ltd
Original Assignee
Suzhou Zhongke Xingzhi Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Suzhou Zhongke Xingzhi Intelligent Technology Co ltd filed Critical Suzhou Zhongke Xingzhi Intelligent Technology Co ltd
Priority to CN202210572773.1A
Publication of CN114663437A
Legal status: Pending

Classifications

    • G06T 7/0004 Industrial image inspection
    • G06F 18/214 Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F 8/60 Software deployment
    • G06N 3/08 Neural networks; learning methods
    • G06N 3/10 Neural networks; interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • G06T 2207/20081 Training; learning
    • G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30108 Industrial image inspection


Abstract

The invention discloses a deep learning model deployment method, equipment and medium. The method comprises the following steps. S100, model training: preprocess defect pictures acquired from an industrial camera and train a model with the processed pictures. S200, model export: select a deployment framework type, then export the model and a model description file. S300, model deployment: import an industrial picture acquired from an industrial camera, load and read the model description file, automatically select the corresponding inference parameters according to the description file, and run inference through the model to output a detection result. The model description file is generated when the model is exported and is loaded directly at the deployment end, so the inference framework type is matched and selected automatically; no manual selection is needed and operation is simple.

Description

Deep learning model deployment method, equipment and medium
Technical Field
The invention relates to the technical field of industrial visual inspection, and in particular to a deep learning model deployment method, equipment and medium.
Background
In recent years, the advantages of deep learning in computer vision have become increasingly evident, and deep-learning-based industrial visual inspection has received growing attention; however, local deployment remains difficult, which hinders real-world adoption of deep learning.
Cloud deployment is a shortcut that can offer simple, convenient model deployment to small and medium-sized enterprises and users; however, it raises serious privacy concerns, and most enterprises are unwilling to share their own data.
Patent CN111488197A, for example, provides simple and convenient model deployment for small and medium-sized enterprises and users, but it relies too heavily on the cloud. The deep learning model deployment method provided by the present invention can meet the requirements of deep learning model and resource deployment under limited hardware resources, is simple to operate, efficient, and has good universality.
Disclosure of Invention
The invention aims to provide a deep learning model deployment method, equipment and medium, which are simple to operate, high in efficiency and good in universality.
In order to achieve the above purpose, the invention adopts the following technical scheme: a deep learning model deployment method comprising the following steps.
S100, model training: preprocess defect pictures acquired from an industrial camera and train a model with the processed pictures.
S200, model export: select a deployment framework type, then export the model and a model description file.
S300, model deployment: import an industrial picture acquired from an industrial camera, load and read the model description file, automatically select the corresponding inference parameters according to the description file, and run inference through the model to output a detection result.
Preferably, S100 and S200 are performed sequentially on the same software platform.
Preferably, the S100 includes the steps of:
S110, collecting defect pictures and labeling the defects to generate a training data set;
S130, dividing the training data set into training data and verification data;
S140, selecting a model, training it with the training data set, and adjusting the training parameters according to the training effect;
S150, selecting a test picture and testing whether the trained model meets the requirements; if not, continuing training, and if so, stopping training to finish model training.
Wherein, the training data set is randomly divided into training data and verification data at a ratio of (3-5):1; preferably, the ratio of training data to verification data is 4:1.
The training parameters comprise the model task type, number of epochs, batch size, split proportion, and learning rate.
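The (3-5):1 random split described above can be sketched in Python as follows (a sketch only; the function and variable names are illustrative, not taken from the original software):

```python
import random

def split_dataset(samples, ratio=4, seed=0):
    """Randomly split a labelled dataset into training and verification
    subsets at ratio:1 (4:1 by default, i.e. 80% / 20%)."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = len(shuffled) * ratio // (ratio + 1)
    return shuffled[:cut], shuffled[cut:]

# 100 samples at the preferred 4:1 ratio -> 80 training, 20 verification
train, val = split_dataset(range(100))
```

Changing `ratio` to 3 or 5 yields the other splits in the stated (3-5):1 range.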
Preferably, the S200 includes the steps of:
S210, selecting a deployment framework type;
S220, selecting a model format according to the selected deployment framework type and then performing the export operation;
S230, exporting the model and generating a corresponding model description file.
Wherein, the model is in one-to-one correspondence with the model description file.
Preferably, the model description file contains the model task type, framework type, inference model type, model input picture height and width, data arrangement format, and the like.
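A description file carrying the fields just listed might look as follows; this sketch writes and re-reads it with Python's json module. The exact key spellings are reconstructed from the example values given elsewhere in the text and should be treated as assumptions:

```python
import json

# Illustrative modelinfo.json content; key names are assumptions
# reconstructed from the fields named in the description.
model_info = {
    "task_type": "classification",
    "model_type": "precision first",
    "input_back_type": "TensorRT",     # deployment framework type
    "class_names": ["cat", "dog"],
    "input_batch_size(n)": 1,
    "input_img_channel(c)": 3,
    "input_img_height(h)": 224,
    "input_img_width(w)": 224,
    "input_data_arr": "NCHW",          # loading data arrangement format
}

text = json.dumps(model_info, indent=2, ensure_ascii=False)
loaded = json.loads(text)              # the deployment end reads this back
```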
In practical use, the model description file is in JSON format (e.g., modelinfo.json) and records the model information of the corresponding model, including the following fields: task type (e.g., "task_type": "classification"), model type (e.g., "model_type": "precision first"), framework type (e.g., "input_back_type": "TensorRT"), class information (e.g., "class_names": ["cat", "dog"]), input batch size (e.g., "input_batch_size(n)": 1), input channel size (e.g., "input_img_channel(c)": 3), input height (e.g., "input_img_height(h)": 224), input width (e.g., "input_img_width(w)": 224), and input data arrangement order (e.g., "input_data_arr": "NCHW").
Preferably, the S300 includes the steps of:
S310, shooting an industrial picture with an industrial camera;
S320, loading the industrial picture and the model description file;
S330, reading the model description file and automatically selecting the corresponding inference parameters according to it;
S340, loading the exported model and optionally adjusting the inference parameters;
S350, running inference to obtain the model output data, then displaying the final detection result on the industrial picture.
The inference parameters comprise the model task type, framework type, inference model type, model input picture height and width, data arrangement format, score threshold, and whether the GPU is used.
In step S340, the automatically selected inference parameters that permit adjustment (score threshold, whether the GPU is used) can be modified again.
In actual use, the JSON model description file is parsed by calling a JSON library in C++, and the model information of the corresponding model ("task_type", "model_type", "input_back_type", "class_names", "input_batch_size(n)", "input_img_channel(c)", "input_img_height(h)", "input_img_width(w)", and "input_data_arr") is read; the inference options are then matched and selected in the QT interface according to the corresponding field values ("classification", "precision first", "TensorRT", ["cat", "dog"], 1, 3, 224, 224, and "NCHW").
The task type, model type, inference framework type, categories, input batch size, input channel size, input height, input width, and input data arrangement order are read directly from the model description file.
The score threshold and whether the GPU is used are default parameters that support secondary modification and selection.
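The split between read-only description-file fields and the two adjustable defaults can be sketched as follows (a sketch: the default values 0.5 and True, and the key names, are assumptions for illustration):

```python
import json

# Defaults that support secondary modification; every other parameter
# comes straight from the model description file. The values here are
# assumed defaults for illustration.
ADJUSTABLE_DEFAULTS = {"score_threshold": 0.5, "use_gpu": True}

def select_inference_params(modelinfo_text, **overrides):
    """Build the inference parameter set: description-file fields are
    read-only; only the adjustable defaults may be overridden."""
    params = {**json.loads(modelinfo_text), **ADJUSTABLE_DEFAULTS}
    for key, value in overrides.items():
        if key not in ADJUSTABLE_DEFAULTS:
            raise KeyError(f"{key!r} is read from the description file only")
        params[key] = value
    return params

params = select_inference_params(
    '{"task_type": "classification", "input_back_type": "TensorRT"}',
    score_threshold=0.8,   # secondary modification of an adjustable default
)
```

Attempting to override a description-file field (e.g. `task_type`) raises an error, mirroring the read-only behaviour described above.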
Wherein, the S350 comprises the following steps:
S351, starting inference after loading the inference model, and acquiring the model output data;
S352, post-processing the model output data to obtain detection data, including information such as the detection category, bounding-box coordinates, bounding-box area, and detection score;
S353, filtering the detection data to obtain the detection result;
S354, displaying the final detection result on the input image;
S355, transmitting the detection result to subsequent modules.
Preferably, the following step is further included between S110 and S130: S120, augmenting the training data set.
Preferably, the S200 further comprises the step of: S240, encrypting the exported model.
The invention also claims a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, carries out the instructions of the deep learning model deployment method described above.
The invention also claims a computer-readable storage medium storing a computer program which, when executed by a processor, carries out the instructions of the deep learning model deployment method described above.
Due to the application of the technical scheme, compared with the prior art, the invention has the following advantages:
1. The invention provides a deep learning model deployment method, equipment and medium in which a model description file is generated at the same time the model is exported; the description file is loaded directly at the model deployment end and the inference framework type is matched and selected automatically, so no manual selection or input is needed and operation is simple;
2. The invention provides a deep learning model deployment method, equipment and medium that can meet the requirements of deep learning model and resource deployment under limited hardware resources without connecting to the cloud, and that are simple to operate, efficient, and highly universal;
3. The invention provides a deep learning model deployment method, equipment and medium in which model training and export are performed sequentially on the same software platform; a system platform for data labeling, model training and model export is built on Python and PyQt, which facilitates graphical interface operation; the model deployment platform based on C++ and Qt offers high inference speed and good stability, better meeting the requirements of industrial sites.
Drawings
FIG. 1 is an overall framework diagram of a deep learning model deployment method of the present invention;
FIG. 2 is a flow chart of model training of the present invention;
FIG. 3 is a flow chart of model derivation according to the present invention;
FIG. 4 is a flow chart of model deployment of the present invention.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present specification, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only a part of the embodiments of the present specification, and not all of the embodiments. All other embodiments obtained by a person skilled in the art based on the embodiments in the present specification without any inventive step should fall within the scope of protection of the present specification.
Example 1
Referring to fig. 1 and fig. 2, this embodiment mainly introduces the Python-side model training method: defect pictures acquired from an industrial camera are preprocessed and the model is trained with the processed pictures.
specifically, the method comprises the following steps of,
S110, collecting defect pictures and labeling the defects to generate a training data set.
S120, augmenting the training data set; augmentation adds defect types and enlarges the training data set.
S130, dividing the training data set into training data and verification data;
specifically, the training data set is randomly divided into training data and verification data at a set ratio,
wherein the ratio of training data to verification data is 4:1.
S140, selecting a model, training it with the training data set, and adjusting the training parameters according to the training effect;
wherein the training parameters include the model task type, number of epochs, batch size, split proportion, learning rate, and the like; the model is then trained.
S150, selecting a test picture and testing whether the trained model meets the requirements; if not, continue training, and if so, stop training to finish model training;
if the defects in the test picture can be detected, the requirements can be considered to be met.
Specifically, the S110 comprises the following steps:
S111, acquiring relevant defect pictures with an industrial camera;
S112, acquiring the software installation package and installing the software; opening the software, creating a project according to the model task type (image classification, target localization, image semantic segmentation, or OCR), setting the project storage location, and loading the defect pictures;
S113, classifying and labeling the loaded defect pictures;
wherein, for labeling, a red frame can be used to mark the defect region.
Example 2
Referring to fig. 1 and fig. 3, this embodiment mainly introduces the Python-side model export method, which comprises selecting a deployment framework type and exporting the model and a model description file. Specifically, it comprises the following steps.
S210, selecting a deployment framework type;
S220, selecting a model format according to the selected deployment framework type and then performing the export operation;
if deployment with the LibTorch framework is planned, select PT as the export model format in the settings; if deployment with the TensorRT framework is planned, select ONNX as the export model format;
S230, generating a corresponding model description file while exporting the model;
the model description file includes: the model task type (classification, localization, segmentation, or OCR), framework type (LibTorch, TensorRT), inference model type (precision first, balanced, speed first), model input picture size, and data arrangement format (NCHW, NHWC).
S240, encrypting the exported model.
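The framework-to-format pairing of S220 and the one-to-one model/description-file correspondence of S230 can be sketched as follows. The file-naming scheme and the info-dict contents are illustrative assumptions; a real export would call the training framework's own exporter rather than write raw bytes:

```python
import json
import tempfile
from pathlib import Path

# S220 pairing: LibTorch loads TorchScript .pt files, TensorRT consumes ONNX.
EXPORT_FORMAT = {"LibTorch": "pt", "TensorRT": "onnx"}

def export_model(model_bytes, out_dir, name, framework, info):
    """Write the exported model and its matching description file side by
    side, keeping the one-to-one correspondence of S230."""
    suffix = EXPORT_FORMAT[framework]      # KeyError for unknown frameworks
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    model_path = out / f"{name}.{suffix}"
    info_path = out / f"{name}.modelinfo.json"
    model_path.write_bytes(model_bytes)
    info_path.write_text(
        json.dumps({"input_back_type": framework, **info}, indent=2))
    return model_path, info_path

with tempfile.TemporaryDirectory() as tmp:
    export_model(b"\x00", tmp, "defect_net", "TensorRT",
                 {"task_type": "classification"})
    # defect_net.onnx and defect_net.modelinfo.json now sit side by side
    exported = sorted(p.name for p in Path(tmp).iterdir())
```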
Example 3
Referring to fig. 1, fig. 3 and fig. 4, this embodiment mainly introduces the model deployment method on the C++ side: an industrial picture acquired from an industrial camera is imported, the model deployment framework type is selected automatically, and inference is run through the model to output a detection result. Specifically, it comprises the following steps.
S310, shooting an industrial picture with an industrial camera.
S320, loading the industrial picture and the model description file.
S330, reading the model description file and automatically selecting the corresponding inference parameters;
specifically, the model information in the description file (model task type, framework type, inference model type, model input picture height and width, and data arrangement format) is read, and the corresponding inference parameters are set automatically according to it.
S340, loading the exported model and optionally adjusting the inference parameters;
the score threshold, whether the GPU is used, and similar parameters can be set manually among the inference parameters.
S350, after the model output data is deduced and obtained, the final detection result is displayed on the industrial picture, S350 comprises the following steps,
s351, starting to infer after loading an inference model, and acquiring model output data;
s352, post-processing the model output data to obtain detection data, wherein the detection data comprises information such as detection categories, detection frame coordinates, detection frame areas and detection scores;
s353, filtering the detection data to obtain a detection result;
and S354, displaying the final detection result on the input image.
And S355, transmitting the detection result to a subsequent module.
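The S353 filtering step can be sketched as follows (the field names in the detection dicts are illustrative, chosen to match the detection data listed in S352):

```python
def filter_detections(detections, score_threshold=0.5, min_area=0.0,
                      keep_classes=None):
    """Filter post-processed detection data (S353): drop detections below
    the score threshold or minimum bounding-box area, and optionally keep
    only the given detection categories."""
    kept = []
    for det in detections:   # each det: {"class", "box", "area", "score"}
        if det["score"] < score_threshold:
            continue
        if det["area"] < min_area:
            continue
        if keep_classes is not None and det["class"] not in keep_classes:
            continue
        kept.append(det)
    return kept

dets = [
    {"class": "scratch", "box": (10, 10, 40, 30), "area": 600,  "score": 0.92},
    {"class": "dent",    "box": (5, 5, 9, 9),     "area": 16,   "score": 0.95},
    {"class": "scratch", "box": (0, 0, 50, 50),   "area": 2500, "score": 0.30},
]
# Keeps only the first detection: the second is too small, the third too low-scoring.
result = filter_detections(dets, score_threshold=0.5, min_area=100)
```

The same helper also covers the optional category filtering mentioned in step 10 of Example 4.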
Example 4
Referring to fig. 1, fig. 2, fig. 3 and fig. 4, this embodiment mainly describes a specific operation step of a deep learning model deployment method of the present invention, which includes the following steps:
Step 1, acquiring the software installation package and installing the software;
Step 2, opening the software, creating a project according to the model task type (image classification, target localization, image semantic segmentation, or OCR), setting the project storage location, and loading the defect pictures prepared in advance;
Step 3, performing classification, labeling, data augmentation and other operations on the loaded defect pictures;
Step 4, setting the training parameters, including the model task type, number of epochs, batch size, split proportion, learning rate, and the like, and then starting to train the model;
Step 5, performing inference verification on the model with test pictures prepared in advance;
Step 6, setting the export model format (PT or ONNX) and the export model storage path, then exporting the model; a model description file is exported together with the model for subsequent inference deployment.
Step 7, acquiring a software installation package of the trained model, and installing deployment software;
step 8, after software is opened, firstly adding a data acquisition module (comprising a camera image, a local image and the like) to acquire inferred image data;
step 9, adding a model inference module, selecting an inferred picture data source, loading a model description file, automatically setting model inference parameters according to the model description file, and starting inference after loading an inference model;
step 10, checking the inference result, and selecting whether to filter the result according to certain characteristics (such as detection frame area, detection category and the like) according to needs;
and 11, transmitting the detection result to a subsequent module according to the requirement, such as an additional result display module, displaying the inference result and the like.
The present disclosure also provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to execute the instructions of the deep learning model deployment method described in the above embodiments.
The computer device may include one or more processors, such as one or more Central Processing Units (CPUs) or Graphics Processors (GPUs), each of which may implement one or more hardware threads. The computer device may further comprise any memory for storing any kind of information, such as code, settings, data etc., in a particular embodiment a computer program on a memory and executable on a processor, which computer program, when executed by the processor, may perform the instructions of the method of any of the embodiments described above. For example, and without limitation, memory may include any one or more of the following in combination: any type of RAM, any type of ROM, flash memory devices, hard disks, optical disks, etc. More generally, any memory may use any technology to store information. Further, any memory may provide volatile or non-volatile retention of information. Further, any memory may represent fixed or removable components of the computer device. In one case, the computer device may perform any of the operations of the associated instructions when the processor executes the associated instructions, which may be stored in any memory or combination of memories. The computer device also includes one or more drive mechanisms for interacting with any memory, such as a hard disk drive mechanism, an optical disk drive mechanism, and so forth.
The present disclosure also provides a computer-readable medium, which may be contained in the electronic device described in the above embodiments or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by an electronic device, cause the electronic device to implement the deep learning model deployment method described in the foregoing embodiments. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium which can be used to store information accessible by a computer device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media) such as modulated data signals and carrier waves.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment. In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of an embodiment of the specification. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A deep learning model deployment method, characterized by comprising the following steps:
S100, model training: preprocessing defect pictures acquired from an industrial camera and training a model with the processed pictures;
S200, model export: selecting a deployment framework type, then exporting the model and a model description file;
S300, model deployment: importing an industrial picture acquired from an industrial camera, loading and reading the model description file, automatically selecting the corresponding inference parameters according to the model description file, and running inference through the model to output a detection result.
2. The deep learning model deployment method of claim 1, wherein the steps S100 and S200 are performed sequentially on a software platform.
3. The deep learning model deployment method of claim 1, wherein the S100 comprises the following steps:
S110, collecting defect pictures and labeling the defects to generate a training data set;
S130, dividing the training data set into training data and verification data;
S140, selecting a model, training it with the training data set, and adjusting the training parameters according to the training effect;
S150, selecting a test picture and testing whether the trained model meets the requirements; if not, continuing training, and if so, stopping training to finish model training.
4. The deep learning model deployment method of claim 1, wherein the S200 comprises the following steps:
S210, selecting a deployment framework type;
S220, selecting a model format according to the selected deployment framework type and then performing the export operation;
S230, exporting the model and generating a corresponding model description file.
5. The deep learning model deployment method of claim 1, wherein the model description file comprises the model task type, framework type, inference model type, model input picture height and width, and data arrangement format.
6. The deep learning model deployment method of claim 1, wherein the S300 comprises the following steps:
S310, shooting an industrial picture with an industrial camera;
S320, loading the industrial picture and the model description file;
S330, reading the model description file and automatically selecting the corresponding inference parameters according to it;
S340, loading the exported model and optionally adjusting the inference parameters;
S350, running inference to obtain the model output data, then displaying the final detection result on the industrial picture.
7. The deep learning model deployment method of claim 3, further comprising the following step between S110 and S130: S120, augmenting the training data set.
8. The deep learning model deployment method of claim 4, wherein the S200 further comprises the following step: S240, encrypting the exported model.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to perform the instructions of the deep learning model deployment method of any one of claims 1 to 8.
10. A computer-readable storage medium, in which a computer program is stored, which computer program, when being executed by a processor, is adapted to carry out the instructions of the deep learning model deployment method according to any one of claims 1 to 8.
CN202210572773.1A 2022-05-25 2022-05-25 Deep learning model deployment method, equipment and medium Pending CN114663437A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210572773.1A CN114663437A (en) 2022-05-25 2022-05-25 Deep learning model deployment method, equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210572773.1A CN114663437A (en) 2022-05-25 2022-05-25 Deep learning model deployment method, equipment and medium

Publications (1)

Publication Number Publication Date
CN114663437A true CN114663437A (en) 2022-06-24

Family

ID=82038458

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210572773.1A Pending CN114663437A (en) 2022-05-25 2022-05-25 Deep learning model deployment method, equipment and medium

Country Status (1)

Country Link
CN (1) CN114663437A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180348717A1 (en) * 2017-06-02 2018-12-06 Aspen Technology, Inc. Computer System And Method For Building And Deploying Predictive Inferential Models Online
CN109685160A (en) * 2019-01-18 2019-04-26 创新奇智(合肥)科技有限公司 A kind of on-time model trained and dispositions method and system automatically
CN111240656A (en) * 2020-01-16 2020-06-05 深圳市守行智能科技有限公司 Efficient deep learning model deployment framework
CN111310934A (en) * 2020-02-14 2020-06-19 北京百度网讯科技有限公司 Model generation method and device, electronic equipment and storage medium
CN111881880A (en) * 2020-08-10 2020-11-03 晶璞(上海)人工智能科技有限公司 Bill text recognition method based on novel network
CN113421235A (en) * 2021-06-17 2021-09-21 中国电子科技集团公司第四十一研究所 Cigarette positioning device and method based on deep learning
CN113822322A (en) * 2021-07-15 2021-12-21 腾讯科技(深圳)有限公司 Image processing model training method and text processing model training method
CN114092313A (en) * 2022-01-19 2022-02-25 北京华品博睿网络技术有限公司 Model reasoning acceleration method and system based on GPU (graphics processing Unit) equipment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HWJ666: "TensorFlow Distributed Configuration", https://zhuanlan.zhihu.com/p/451299247 *
ZHANG CHUNQIANG et al.: "Machine Learning Software Engineering: Methods and Implementation", 31 December 2021, China Machine Press *

Similar Documents

Publication Publication Date Title
US10846556B2 (en) Vehicle insurance image processing method, apparatus, server, and system
CN109542789B (en) Code coverage rate statistical method and device
CN110505498B (en) Video processing method, video playing method, video processing device, video playing device and computer readable medium
CN110675399A (en) Screen appearance flaw detection method and equipment
US11205260B2 (en) Generating synthetic defect images for new feature combinations
CN107948640B (en) Video playing test method and device, electronic equipment and storage medium
CN110378258B (en) Image-based vehicle seat information detection method and device
CN110401634A Web application vulnerability detection rule engine implementation method and terminal
CN109145981B (en) Deep learning automatic model training method and equipment
CN107368343A Terminal boot method, terminal and storage medium based on Android system
CN111290905A (en) Testing method and device for cloud platform of Internet of things
CN109102026A Vehicle image detection method, apparatus and system
CN114663437A (en) Deep learning model deployment method, equipment and medium
CN114981838A (en) Object detection device, object detection method, and object detection program
CN114786032B (en) Training video management method and system
CN111353330A (en) Image processing method, image processing device, electronic equipment and storage medium
KR20230004314A (en) Vision inspection system based on deep learning and vision inspection method using thereof
CN111199728A (en) Training data acquisition method and device, intelligent sound box and intelligent television
CN111242116B (en) Screen positioning method and device
CN114363699B (en) Animation file playing method and device and terminal equipment
CN104021068B (en) Terminal device method for testing performance and device
CN113935748A (en) Screening method, device, equipment and medium for sampling inspection object
CN107766216A (en) It is a kind of to be used to obtain the method and apparatus using execution information
CN110738562A (en) Method, device and equipment for generating risk reminding information
CN112370789B (en) Method and system for detecting fitness of model triangular mesh

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220624