CN113792704A - Cloud deployment method and device of face recognition model - Google Patents
- Publication number
- CN113792704A (application CN202111151732.7A)
- Authority
- CN
- China
- Prior art keywords
- model
- interface
- face recognition
- recognition model
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/31—Programming languages or programming paradigms
- G06F8/315—Object-oriented languages
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/60—Software deployment
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- Mathematical Physics (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a cloud deployment method for a face recognition model, which addresses the technical problems of the complicated process and high cost of existing methods for bringing models online. The method comprises the following steps: deploying a face recognition model on a server side, the server side being deployed on a remote GPU server; opening a gRPC port of the server side for a client to access over gRPC; and parsing metadata of the face recognition model and configuration information of the face recognition model through gRPC. The client comprises five interfaces: a picture preprocessing interface, a model input interface, a model output interface, a model inference interface, and an inference result post-processing interface. The method achieves cloud deployment of the face recognition model, avoids the waste of idle resources, and saves labor and material costs.
Description
Technical Field
The application relates to the technical field of the mobile Internet of Things, and in particular to a cloud deployment method and device for a face recognition model.
Background
A trained model requires large computational power when deployed online, and online deployment is usually achieved by reducing the network scale, using dedicated hardware, or similar means. Shrinking and re-porting the network model is a complicated process that loses some precision, and differences between training frameworks introduce many uncontrollable factors when porting across frameworks; for example, bringing a model trained with PyTorch online requires more work than one trained with TensorFlow. Using dedicated hardware costs more than using a GPU, and additional programming is required to adapt the hardware during migration.
Therefore, a method for implementing online deployment of a model is needed to solve the above technical problems.
Disclosure of Invention
The embodiments of the application provide a cloud deployment method and device for a face recognition model, aiming to solve the technical problems of the complicated process and high cost of existing methods for bringing models online.
In one aspect, an embodiment of the present application provides a cloud deployment method for a face recognition model, including: deploying a face recognition model at a server side, and deploying the server side on a remote GPU server; opening a gRPC port of the server side for a client to access over gRPC; and parsing metadata of the face recognition model and configuration information of the face recognition model through gRPC; wherein the client comprises five interfaces: a picture preprocessing interface, a model input interface, a model output interface, a model inference interface, and an inference result post-processing interface.
In a possible implementation manner of the embodiment of the present application, the picture preprocessing interface is configured to perform a preset operation on an input picture; the preset operation at least comprises any one or more of color conversion operation, picture scaling operation, data normalization processing operation and picture data extraction operation.
In one possible implementation manner of the embodiment of the present application, the color conversion operation is configured to convert a picture in BGR format into a picture in RGB format; the picture scaling operation is used for scaling the input picture according to a preset requirement; the data normalization processing operation is used for normalizing the pixel values of the input picture to a range from 0 to 1; the picture data extraction operation is used for copying the read picture data into a preset cache variable.
In a possible implementation manner of the embodiment of the present application, the color conversion operation is implemented by using the cvtColor function in OpenCV; the picture scaling operation is implemented by using the resize function in OpenCV; the data normalization processing operation is implemented by using the convertTo function in OpenCV; and the picture data extraction operation is implemented by using the memcpy function.
In a possible implementation manner of the embodiment of the application, the model input interface is configured to create a triton-client type input as an input of a model inference interface according to configuration information of the face recognition model; the configuration information of the face recognition model at least comprises any one or more of the following items: model name, data format, length, width, and number of model channels.
In a possible implementation manner of the embodiment of the present application, the model output interface is configured to create a triton-client type output as an output of the model inference interface according to an output name of the face recognition model.
In a possible implementation manner of the embodiment of the application, the model inference interface is used for accessing the cloud inference interface through the grpc-client component and passing in the triton-client type input, output, and a result callback function.
In a possible implementation manner of the embodiment of the present application, the inference result post-processing interface is configured to post-process the output data of the face recognition model to obtain a JSON-serialized inference result; the post-processing of the output data of the face recognition model specifically includes: acquiring the output boxes and scores of the face recognition model; filtering out the output boxes whose scores are lower than a preset threshold, and converting the center-point coordinates of the remaining output boxes into structured data in a BBox format; sorting the structured data in the BBox format, and removing duplicate structured data in the BBox format through an NMS clustering algorithm and the preset threshold; and packaging the remaining data through the rapidjson tool to finally obtain a JSON-serialized inference result.
In a possible implementation manner of the embodiment of the present application, the method further includes: compiling the client into a .so dynamic link library in a Linux environment; and packaging the five interfaces, namely the picture preprocessing interface, the model input interface, the model output interface, the model inference interface and the inference result post-processing interface, into three C interfaces, namely a client initialization interface, a model inference interface and a resource release interface.
In another aspect, an embodiment of the present application further provides a cloud deployment device for a face recognition model, including: a deployment module, configured to deploy the face recognition model at a server side and deploy the server side on a remote GPU server; an opening module, configured to open a gRPC port of the server side for a client to access over gRPC; and a parsing module, configured to parse metadata of the face recognition model and configuration information of the face recognition model through gRPC; the client comprises five interfaces: a picture preprocessing interface, a model input interface, a model output interface, a model inference interface, and an inference result post-processing interface.
According to the cloud deployment method and device for the face recognition model provided herein, the face recognition model is deployed in the cloud, and the client and the inference server communicate over gRPC, which supports inference on batches of pictures. The client comprises five interfaces for input picture preprocessing, model input, model output, model inference, and inference result post-processing; the C++ implementation is compiled into a dynamic link library, and the five client interfaces are packaged into three C interfaces for multi-language, cross-platform calling, so the C++-based client is well suited to embedded devices such as robots and face check-in terminals. In addition, because the C/S framework is deployed in the cloud over the network, only a small number of high-performance computing servers need to be deployed in the cloud, and a large amount of cheap hardware and equipment can use the face recognition model for inference; the overall cost is low, the waste of idle resources is avoided, and labor and material costs are further saved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a flowchart of a cloud deployment method of a face recognition model according to an embodiment of the present application;
fig. 2 is a block diagram of a C/S call flow provided in an embodiment of the present application;
fig. 3 is a schematic view of an internal structure of a cloud deployment device of a face recognition model according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
After training and testing a deep learning model, extra processing work is needed before the algorithm model can be brought online and deployed in a production environment. Because deep learning models have a large demand for computing power, online deployment generally takes one of three approaches: reducing the network scale, using dedicated hardware, or performing cloud computing over a networked C/S architecture. Owing to differences between training frameworks and hardware compatibility issues, reducing the network scale or simplifying the network model is a complicated and hard-to-control process, and solving one problem often introduces a series of new ones; dedicated hardware is faster, but its cost is too high to be economical, and it also requires programming to adapt to the hardware.
The 5G era has arrived, the IPv6 protocol has been deployed at scale, and the Internet, in particular the wireless mobile Internet, has become an important infrastructure in the trend toward the Internet of Everything. The model is deployed on the server side; the client sends input data to the server side over the network, and the result is transmitted back to the client after it has been computed. With this deployment method, even the lowest-end hardware can obtain the computation result of a deep learning model at high speed without losing accuracy.
The embodiments of the application provide a cloud deployment method and device for a face recognition model. The trained face detection model is deployed on a high-performance server in the cloud; the client needs no expensive equipment and, as long as it is connected to the network, can perform AI face recognition. The implementation process is simple and clear, the cost of bringing the model online is greatly reduced, and the above technical problems are thereby solved.
The technical solutions proposed in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a cloud deployment method of a face recognition model according to an embodiment of the present application. As shown in fig. 1, the cloud deployment method provided in the embodiment of the present application mainly includes the following steps:
Step 101, deploying a face recognition model on the server side, the server side being deployed on a remote GPU server.
Step 102, opening a gRPC port of the server side for a client to access over gRPC.
Step 103, parsing metadata of the face recognition model and configuration information of the face recognition model through gRPC.
In one or more embodiments of the present application, the client includes 5 interfaces, namely, a picture preprocessing interface, a model input interface, a model output interface, a model inference interface, and an inference result post-processing interface.
The embodiment of the application provides a cloud deployment method for a face recognition model. First, the NVIDIA Triton deployment architecture is adopted, and data transmission in the C/S architecture is realized through gRPC communication; model service processing is implemented on the client side and comprises five interfaces: input picture preprocessing, model input, model output, model inference, and inference result post-processing. The client, implemented in C++, is compiled into a dynamic link library, and the five interfaces are packaged into three C interfaces so that other languages can call them across platforms and realize richer front-end functions.
Specifically, the triton-server side is deployed on a remote GPU server, and its gRPC port is opened for the client to access over gRPC. It should be noted that the client runs on an ARM or x86 device and includes a grpc-client component. The metadata of the face recognition model and the configuration information of the face recognition model are then parsed through gRPC.
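The application does not show how the model repository on the triton-server side is laid out. As a point of reference only, a minimal Triton model repository and config.pbtxt for the face detection model might look as follows; the model name and output tensor names are taken from the example later in this description, while the backend, batch size and tensor dimensions are assumptions.

```
model_repository/
└── facedetect_edge/
    ├── config.pbtxt
    └── 1/
        └── model.plan        # serialized model file; format depends on the backend
```

```
# config.pbtxt -- illustrative values only
name: "facedetect_edge"
platform: "tensorrt_plan"            # assumed backend
max_batch_size: 8                    # allows batched inference requests
input [
  {
    name: "input_1"                  # assumed input tensor name
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 544, 960 ]            # channels, height, width -- assumed
  }
]
output [
  { name: "output_bbox/BiasAdd", data_type: TYPE_FP32, dims: [ 4, 34, 60 ] },
  { name: "output_cov/Sigmoid", data_type: TYPE_FP32, dims: [ 1, 34, 60 ] }
]
```

It is this configuration, together with the model metadata, that the client retrieves over gRPC before building its inputs and outputs.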
In one or more implementations of the embodiment of the application, the picture preprocessing interface performs four operations, namely color conversion, picture scaling, data normalization, and picture data extraction, according to the input requirements of the cloud inference service. The color conversion operation is implemented with the cvtColor function in OpenCV and converts a BGR picture into RGB format; the picture scaling operation is implemented with the resize function in OpenCV and scales the picture to the size required by the cloud; the data normalization operation is implemented with the convertTo function in OpenCV and normalizes the pixel values to the range 0 to 1; the picture data extraction operation is implemented with the memcpy function and copies the read input picture data into a preset cache variable.
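A minimal sketch of this preprocessing chain in C++ with OpenCV is given below. The target size, the 1/255 scale factor and the function name are illustrative assumptions rather than values fixed by the application.

```cpp
// Sketch of the four preprocessing steps: color conversion, scaling,
// normalization to [0, 1], and copying the data into a preset buffer.
#include <cstring>
#include <string>
#include <vector>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

std::vector<float> PreprocessImage(const std::string& image_path,
                                   int target_w, int target_h) {
  cv::Mat bgr = cv::imread(image_path, cv::IMREAD_COLOR);    // OpenCV reads BGR
  cv::Mat rgb, resized, normalized;
  cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);                 // color conversion
  cv::resize(rgb, resized, cv::Size(target_w, target_h));    // scale to the size the cloud service expects
  resized.convertTo(normalized, CV_32FC3, 1.0 / 255.0);      // normalize pixel values to [0, 1]

  // Copy the image data into a pre-allocated buffer (the "preset cache
  // variable") so it can be handed to the model input interface.
  std::vector<float> buffer(normalized.total() * normalized.channels());
  std::memcpy(buffer.data(), normalized.ptr<float>(0),
              buffer.size() * sizeof(float));
  return buffer;
}
```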
Further, the model input interface creates a triton-client type input as the input of the model inference interface according to the parsed name, data format, length, width, and channel number of the face recognition model; and
the model output interface creates a triton-client type output as the output of the model inference interface according to the output name of the face recognition model.
Furthermore, the model inference interface accesses the cloud inference interface through the grpc-client component, passing in the triton-client type input, output, and a result callback function.
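The following sketch shows how the model input, model output and model inference interfaces could be wired together with the NVIDIA Triton gRPC C++ client (grpc_client.h). The server address and model name follow the example given later in this description; the tensor names, shape, FP32 data type and the exact client calls are assumptions based on the public Triton client API, not the application's actual implementation.

```cpp
// Sketch: create a triton-client input and two outputs, then request
// inference over gRPC with a result callback.
#include <cstdint>
#include <memory>
#include <string>
#include <vector>
#include "grpc_client.h"

namespace tc = triton::client;

void RunInference(const std::vector<float>& input_buffer) {
  std::unique_ptr<tc::InferenceServerGrpcClient> client;
  tc::InferenceServerGrpcClient::Create(&client, "10.180.150.60:8401");

  // Model input interface: a triton-client type input built from the parsed
  // model configuration (name, data format, NCHW dimensions).
  tc::InferInput* input = nullptr;
  std::vector<int64_t> shape{1, 3, 544, 960};                   // assumed dims
  tc::InferInput::Create(&input, "input_1", shape, "FP32");
  input->AppendRaw(reinterpret_cast<const uint8_t*>(input_buffer.data()),
                   input_buffer.size() * sizeof(float));

  // Model output interface: triton-client type outputs built from the
  // model's output names.
  tc::InferRequestedOutput* bbox_out = nullptr;
  tc::InferRequestedOutput* cov_out = nullptr;
  tc::InferRequestedOutput::Create(&bbox_out, "output_bbox/BiasAdd");
  tc::InferRequestedOutput::Create(&cov_out, "output_cov/Sigmoid");

  // Model inference interface: pass inputs, outputs and a result callback to
  // the cloud inference service over gRPC.
  tc::InferOptions options("facedetect_edge");
  std::vector<tc::InferInput*> inputs{input};
  std::vector<const tc::InferRequestedOutput*> outputs{bbox_out, cov_out};
  client->AsyncInfer(
      [](tc::InferResult* result) {
        // Hand the raw output tensors to the post-processing interface here.
        delete result;
      },
      options, inputs, outputs);
}
```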
In one or more implementations of the embodiment of the application, the inference result post-processing interface is configured to post-process the output data of the face recognition model to obtain a JSON-serialized inference result. Specifically, the coordinates and scores of the output boxes of the face recognition model are obtained from its output names output_bbox/BiasAdd and output_cov/Sigmoid; output boxes with low scores are first filtered out according to a set threshold, and relative coordinates are converted into absolute coordinates; the center-point coordinates of the remaining output boxes are then converted into structured data in BBox format; the BBoxes are sorted by score, and finally duplicate BBoxes are removed with an NMS clustering algorithm according to the computed IoU and a set threshold; the post-processed data are packaged into serialized JSON data with the third-party tool rapidjson and returned.
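A condensed sketch of this post-processing in C++ is shown below. It starts from boxes already decoded into BBox structures (decoding the raw grid output of output_bbox/BiasAdd and output_cov/Sigmoid is omitted); the BBox layout and both thresholds are illustrative assumptions.

```cpp
// Sketch: filter boxes by score, sort, suppress duplicates with NMS over the
// IoU, and serialize the survivors with rapidjson.
#include <algorithm>
#include <string>
#include <vector>
#include "rapidjson/document.h"
#include "rapidjson/stringbuffer.h"
#include "rapidjson/writer.h"

struct BBox { float x1, y1, x2, y2, score; };

static float IoU(const BBox& a, const BBox& b) {
  float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
  float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
  float inter = std::max(0.f, ix2 - ix1) * std::max(0.f, iy2 - iy1);
  float uni = (a.x2 - a.x1) * (a.y2 - a.y1) +
              (b.x2 - b.x1) * (b.y2 - b.y1) - inter;
  return uni > 0.f ? inter / uni : 0.f;
}

std::string PostProcess(std::vector<BBox> boxes,
                        float score_thr = 0.4f, float iou_thr = 0.5f) {
  // 1. Filter out boxes whose score falls below the preset threshold.
  boxes.erase(std::remove_if(boxes.begin(), boxes.end(),
                             [&](const BBox& b) { return b.score < score_thr; }),
              boxes.end());
  // 2. Sort by score and remove duplicate boxes with NMS.
  std::sort(boxes.begin(), boxes.end(),
            [](const BBox& a, const BBox& b) { return a.score > b.score; });
  std::vector<BBox> kept;
  for (const BBox& cand : boxes) {
    bool duplicate = false;
    for (const BBox& k : kept)
      if (IoU(cand, k) > iou_thr) { duplicate = true; break; }
    if (!duplicate) kept.push_back(cand);
  }
  // 3. Package the remaining boxes into a serialized JSON result.
  rapidjson::Document doc;
  doc.SetArray();
  auto& alloc = doc.GetAllocator();
  for (const BBox& b : kept) {
    rapidjson::Value obj(rapidjson::kObjectType);
    obj.AddMember("x1", b.x1, alloc).AddMember("y1", b.y1, alloc)
       .AddMember("x2", b.x2, alloc).AddMember("y2", b.y2, alloc)
       .AddMember("score", b.score, alloc);
    doc.PushBack(obj, alloc);
  }
  rapidjson::StringBuffer sb;
  rapidjson::Writer<rapidjson::StringBuffer> writer(sb);
  doc.Accept(writer);
  return sb.GetString();
}
```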
In addition, in one or more implementations of the embodiment of the application, the client is compiled into a .so dynamic link library in a Linux environment, and the five interfaces, namely the picture preprocessing interface, the model input interface, the model output interface, the model inference interface and the inference result post-processing interface, are packaged into three C interfaces, namely a client initialization interface, a model inference interface and a resource release interface, for other languages to call.
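The three exported C interfaces are not named in the application; a plausible extern "C" header for the .so could look like the sketch below, with all symbol names and signatures to be read as hypothetical.

```cpp
// face_client.h -- hypothetical C wrapper over the five C++ interfaces,
// exported from the .so so that Go, Java, Python, etc. can call it.
#ifndef FACE_CLIENT_H_
#define FACE_CLIENT_H_

#ifdef __cplusplus
extern "C" {
#endif

// Client initialization: connect to the triton server and parse the model's
// metadata and configuration over gRPC. Returns an opaque handle.
void* client_init(const char* server_addr, const char* model_name);

// Model inference: preprocess the picture, build the model input and output,
// run inference, and return the JSON-serialized post-processed result.
const char* get_infer(void* handle, const char* image_path);

// Resource release: free the client handle and any cached buffers.
void client_release(void* handle);

#ifdef __cplusplus
}  // extern "C"
#endif

#endif  // FACE_CLIENT_H_
```

Keeping the exported surface down to three plain C functions is what makes the library callable from cgo, JNI or ctypes without exposing any C++ types.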
the deployment method proposed by the embodiment of the present application is further described below with reference to the drawings by taking a face recognition model as an example.
Fig. 2 is a block diagram of a C/S call flow provided in an embodiment of the present application. As shown in fig. 2:
first, the face detection model is deployed on the triton-server side on the 10.180.150.60 device, and port 8401 is provided for communication between the grpc-server and grpc-client components;
then, the C interface is called from the Go language, or the initialization interface is called directly from C++, passing in the absolute path of the picture to be detected, the triton server address (10.180.150.60:8201 in this embodiment), and the face detection model name facedetect_edge, so as to obtain the metadata and configuration information of the model;
next, the getInfer interface is called directly from Go or C++; the picture is preprocessed, the model input and output are created, an inference result is requested from the triton-server side through gRPC, and finally the data are post-processed to obtain a JSON-serialized inference result;
finally, the resource release interface is called to release the program resources.
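Putting the flow of fig. 2 together, a caller of the three C interfaces could look roughly like the sketch below (shown in C++; a Go caller would reach the same symbols through cgo). The picture path is a hypothetical placeholder, and the interface names match the hypothetical header sketched above.

```cpp
// Hypothetical end-to-end use of the three exported C interfaces.
#include <cstdio>

extern "C" {
void* client_init(const char* server_addr, const char* model_name);
const char* get_infer(void* handle, const char* image_path);
void client_release(void* handle);
}

int main() {
  // Initialize: server address and model name as in the example above.
  void* handle = client_init("10.180.150.60:8401", "facedetect_edge");

  // Infer: preprocessing, the gRPC request and post-processing happen inside.
  const char* result_json = get_infer(handle, "/data/test/face.jpg");  // hypothetical path
  std::printf("%s\n", result_json);

  // Release program resources.
  client_release(handle);
  return 0;
}
```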
Based on the same inventive concept, the embodiment of the present application further provides a cloud deployment device of a face recognition model, and an internal structure of the cloud deployment device is shown in fig. 3.
Fig. 3 is a schematic view of an internal structure of a cloud deployment device of a face recognition model according to an embodiment of the present application. As shown in fig. 3, the apparatus includes:
the deployment module 301 is configured to deploy the face recognition model at a server side, and deploy the server side on a remote GPU server;
the opening module 302 is configured to open a gRPC port of the server side for a client to access over gRPC;
the parsing module 303 is configured to parse metadata of the face recognition model and configuration information of the face recognition model through gRPC; the client comprises five interfaces: a picture preprocessing interface, a model input interface, a model output interface, a model inference interface, and an inference result post-processing interface.
According to the cloud deployment method and device for the face recognition model provided by the embodiments of the present application, the trained face recognition model is deployed on a high-performance server in the cloud; the client needs no expensive equipment and, as long as it is connected to the network, can perform AI face recognition. Therefore, only a small number of high-performance servers need to be deployed in the cloud, and a large amount of cheap hardware and equipment can use the model inference service, which saves cost and avoids the waste of idle resources. The C/S architecture supports batch processing of pictures through the gRPC service; the client is divided, following the model inference flow, into five interfaces for picture preprocessing, face model input, face model output, model inference, and inference result post-processing, which is simple and clear to implement; the client is written in C++ and is therefore well suited to running on embedded devices, and the dynamic link library compiled from the C++ code can be called from multiple languages across platforms.
In addition, the beneficial effects of the embodiments of the present application mainly lie in the following:
1. the cloud deployment mode is adopted, so that the cost is saved, and the waste of idle resources is avoided.
2. According to the inference flow, the client is divided into five interfaces: input picture preprocessing, face model input, face model output, model inference, and inference result post-processing; the flow is clear and easy to understand, and the later maintenance cost is low.
3. The client is written in C++ and compiled into a dynamic link library, which is packaged into three C interfaces for client initialization, model inference, and resource release to support multi-language, cross-platform calling, and richer front-end functions can be built on top of the inference result.
4. The gRPC interaction mode is well suited to processing image data or video streams in batches, and the C++-based implementation is well suited to embedded devices with high performance requirements (such as robots and face check-in devices).
The embodiments in the present application are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, as for the apparatus embodiment, since it is substantially similar to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.
Claims (10)
1. A cloud deployment method of a face recognition model is characterized by comprising the following steps:
deploying a face recognition model at a server side, and deploying the server side on a remote GPU server;
opening a gRPC port of the server side for a client to access over gRPC;
parsing metadata of the face recognition model and configuration information of the face recognition model through gRPC;
the client comprises five interfaces: a picture preprocessing interface, a model input interface, a model output interface, a model inference interface, and an inference result post-processing interface.
2. The cloud deployment method of the face recognition model according to claim 1, wherein the picture preprocessing interface is configured to perform a preset operation on an input picture;
the preset operation at least comprises any one or more of color conversion operation, picture scaling operation, data normalization processing operation and picture data extraction operation.
3. The cloud deployment method of the face recognition model according to claim 2,
the color conversion operation is used for converting the picture in the BGR format into the picture in the RGB format;
the picture scaling operation is used for scaling the input picture according to a preset requirement;
the data normalization processing operation is used for normalizing the pixel values of the input picture to a range from 0 to 1;
the picture data extraction operation is used for copying the read picture data into a preset cache variable.
4. The cloud deployment method of the face recognition model according to claim 3,
the color conversion operation is implemented by using the cvtColor function in OpenCV;
the picture scaling operation is implemented by using the resize function in OpenCV;
the data normalization processing operation is implemented by using the convertTo function in OpenCV;
the picture data extraction operation is implemented by using the memcpy function.
5. The cloud deployment method for the face recognition model according to claim 1, wherein the model input interface is configured to create a triton-client type input as an input of a model inference interface according to configuration information of the face recognition model;
the configuration information of the face recognition model at least comprises any one or more of the following items: model name, data format, length, width, and number of model channels.
6. The cloud deployment method for the face recognition model according to claim 1, wherein the model output interface is configured to create a triton-client type output as an output of a model inference interface according to an output name of the face recognition model.
7. The cloud deployment method of the face recognition model of claim 1, wherein the model inference interface is used for accessing the cloud inference interface through the grpc-client component and passing in the triton-client type input, output, and a result callback function.
8. The cloud deployment method of the face recognition model according to claim 1, wherein the inference result post-processing interface is configured to perform post-processing on output data of the face recognition model to obtain a json-serialized inference result;
the post-processing of the output data of the face recognition model specifically includes:
acquiring output boxes and scores of the face recognition model;
filtering out the output boxes whose scores are lower than a preset threshold, and converting the center-point coordinates of the remaining output boxes into structured data in a BBox format;
sorting the structured data in the BBox format, and removing duplicate structured data in the BBox format through an NMS clustering algorithm and the preset threshold; and
packaging the remaining data through the rapidjson tool to finally obtain a JSON-serialized inference result.
9. The cloud deployment method for the face recognition model according to claim 1, wherein the method further comprises:
compiling the client into a .so dynamic link library in a Linux environment; and
packaging the five interfaces, namely the picture preprocessing interface, the model input interface, the model output interface, the model inference interface and the inference result post-processing interface, into three C interfaces, namely a client initialization interface, a model inference interface and a resource release interface.
10. A cloud deployment device for a face recognition model, the device comprising:
the deployment module is used for deploying the face recognition model at a server side, and deploying the server side on a remote GPU server;
the opening module is configured to open a gRPC port of the server side for a client to access over gRPC;
the parsing module is configured to parse metadata of the face recognition model and configuration information of the face recognition model through gRPC;
the client comprises five interfaces: a picture preprocessing interface, a model input interface, a model output interface, a model inference interface, and an inference result post-processing interface.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111151732.7A CN113792704B (en) | 2021-09-29 | 2021-09-29 | Cloud deployment method and device of face recognition model |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111151732.7A CN113792704B (en) | 2021-09-29 | 2021-09-29 | Cloud deployment method and device of face recognition model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113792704A true CN113792704A (en) | 2021-12-14 |
CN113792704B CN113792704B (en) | 2024-02-02 |
Family
ID=78877539
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111151732.7A Active CN113792704B (en) | 2021-09-29 | 2021-09-29 | Cloud deployment method and device of face recognition model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113792704B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114553907A (en) * | 2021-12-27 | 2022-05-27 | 山东新一代信息产业技术研究院有限公司 | Garbage bin overflow detecting system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111488197A (en) * | 2020-04-14 | 2020-08-04 | 浙江新再灵科技股份有限公司 | Deep learning model deployment method and system based on cloud server |
CN112464890A (en) * | 2020-12-14 | 2021-03-09 | 招商局金融科技有限公司 | Face recognition control method, device, equipment and storage medium |
WO2021051611A1 (en) * | 2019-09-19 | 2021-03-25 | 平安科技(深圳)有限公司 | Face visibility-based face recognition method, system, device, and storage medium |
- 2021-09-29 CN CN202111151732.7A patent/CN113792704B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021051611A1 (en) * | 2019-09-19 | 2021-03-25 | 平安科技(深圳)有限公司 | Face visibility-based face recognition method, system, device, and storage medium |
CN111488197A (en) * | 2020-04-14 | 2020-08-04 | 浙江新再灵科技股份有限公司 | Deep learning model deployment method and system based on cloud server |
CN112464890A (en) * | 2020-12-14 | 2021-03-09 | 招商局金融科技有限公司 | Face recognition control method, device, equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
Hou Ruifa; Yang Xiong; Chen Weida; Deng Zelin; Hu Shiliang: "Classroom Attendance System Based on Face Recognition Technology", Network Security Technology & Application, no. 06 *
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114553907A (en) * | 2021-12-27 | 2022-05-27 | 山东新一代信息产业技术研究院有限公司 | Garbage bin overflow detecting system |
Also Published As
Publication number | Publication date |
---|---|
CN113792704B (en) | 2024-02-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3731161A1 (en) | Model application method and system, and model management method and server | |
CN106569824B (en) | Method and apparatus, the method and apparatus of page rendering of page data compiling | |
CN109995601B (en) | Network traffic identification method and device | |
CN109429522A (en) | Voice interactive method, apparatus and system | |
CN110765740B (en) | Full-type text replacement method, system, device and storage medium based on DOM tree | |
CN113159091A (en) | Data processing method and device, electronic equipment and storage medium | |
CN114494815B (en) | Neural network training method, target detection method, device, equipment and medium | |
CN113792704B (en) | Cloud deployment method and device of face recognition model | |
CN108833389A (en) | A kind of shared processing method and processing device of information data | |
CN110765973A (en) | Account type identification method and device | |
CN114332590B (en) | Joint perception model training method, joint perception method, device, equipment and medium | |
CN109492749A (en) | The method and device of neural network model online service is realized in a local network | |
CN113887442A (en) | OCR training data generation method, device, equipment and medium | |
CN117636874A (en) | Robot dialogue method, system, robot and storage medium | |
US20200286012A1 (en) | Model application method, management method, system and server | |
CN116643814A (en) | Model library construction method, model calling method based on model library and related equipment | |
CN114003208B (en) | System internationalization configuration method, device, equipment and storage medium | |
CN113505844A (en) | Label generation method, device, equipment, storage medium and program product | |
CN114648110A (en) | Model training method and device, electronic equipment and computer storage medium | |
CN111209376A (en) | AI digital robot operation method | |
CN117408679B (en) | Operation and maintenance scene information processing method and device | |
CN113553489B (en) | Method, device, equipment, medium and program product for capturing content | |
CN113783960B (en) | Intelligent substation equipment data processing method and related equipment | |
CN107273364A (en) | A kind of voice translation method and device | |
CN110837896B (en) | Storage and calling method and device of machine learning model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |