Distributed image recognition method based on a virtualized environment
Technical field
The present invention relates to the technical field of virtualization, and more particularly to a distributed image recognition method based on a virtualized environment.
Background art
With the popularization of the mobile Internet and 4G networks, the media information that people encounter every day has become increasingly voluminous and miscellaneous, and through the mobile Internet almost anyone can become a producer of media information. The security control of new-media content has therefore become an urgent task, and for Internet image data of such a large scale, manual review is clearly unable to meet the demand. With the rapid development of GPU computing and of deep neural networks for image recognition, schemes that perform image recognition through machine-learning-trained models are showing more and more practical value in a great many fields.
The current mainstream deep learning frameworks, such as TensorFlow, MXNet and Caffe, each have their own processes for environment dependencies, network parameter tuning, model training and model-capability instantiation, and these processes are mutually incompatible. Without virtualization, if preprocessing and the models of frameworks such as Caffe, MXNet and TensorFlow are to be instantiated on the same server, then all of the dependencies of these frameworks must be installed on that server one after another, and complex dependency conflicts must be resolved. If the environment of the current server then has to be migrated to another server, all of the dependency installation must be redone, which under large-scale server-cluster conditions is a very laborious job.
In actual projects, making a single platform compatible with multiple deep learning frameworks so that model training and model-capability instantiation can be carried out quickly becomes a very thorny problem, and there is as yet no mature scheme for GPU resource virtualization. A unified platform specification is therefore urgently needed to resolve the environment conflicts of multiple frameworks, to make model training and model-capability instantiation simpler, and to manage and schedule each model's use of the GPU resources distributed over multiple server nodes.
Summary of the invention
It is an object of the invention to overcome the deficiencies of the prior art and to provide a distributed image recognition method based on a virtualized environment. By integrating multiple deep learning frameworks in one platform environment, the method achieves quick instantiation of the model capabilities of multiple deep learning frameworks; at the same time, the platform mechanism performs virtualized management and allotment of the GPU resources of multiple server nodes, and each functional module of a project is instantiated as an independent container image through the virtualization scheme, so that project deployment and the establishment of cluster functionality can be carried out quickly on a single machine or on multiple machines.
The object of the invention is achieved through the following technical solution: a distributed image recognition method based on a virtualized environment, comprising the following steps:
S1: preprocessing image data to obtain target image data;
S2: recognizing the target image data, performing image classification and target detection, and carrying out framework docking, model-capability instantiation and process security operations with a unified process;
S3: carrying out multi-server-node deployment, multi-recognition-container deployment and GPU resource distribution management through an internal web service;
S4: producing container images corresponding to the image classification function and the target detection function, and customizing, through engineering scripts, the system's resource allocation function, image loading function, container instantiation function and container-internal-service self-starting function.
Step S1 includes the following sub-steps:
S101: MT reads the configuration file of the platform server node from Cfg.ini, and then starts the related services one by one according to the requirements of the configuration file;
S102: PRE_BASE provides basic data-stream support for the basic components of the service management framework, and implements feedback communication with MT;
S103: on the basis of PRE_BASE, PreInstance formulates and implements the primitive rules of picture preprocessing and video preprocessing.
Step S1 further includes transcoding of the image data: using a GPU hardware-accelerated mode, the image data are converted into image data of a target format.
In step S103, the PreInstance framework is libkt_if.so; libkt_if.so extracts frames according to time intervals and key frames, and saves the frame data to a local disk or to a memcached server.
Step S2 includes the following sub-steps:
S201: MT reads the configuration file of the platform server node from Cfg.ini, and then starts the related services one by one according to the requirements of the configuration file;
S202: RP_BASE provides basic data-stream support for the basic components of the service management framework, and implements feedback communication with MT;
S203: on the basis of RP_BASE, DarknetInstance, CaffeInstance and Caffe2Instance carry out the corresponding network loading, picture parsing and picture recognition processes.
The beneficial effects of the present invention are:
1) By integrating the current mainstream deep learning frameworks, rapid deployment and instantiation of the recognition capabilities of a variety of framework models are achieved, and the platform's custom development virtualizes, manages and schedules the GPU resources of the multi-machine GPU assembly. At the same time, based on the virtualization scheme, the preprocessing and model recognition capabilities can each be packaged into an independent container and rapidly deployed on a single machine or on multiple machines, realizing the integration of image recognition across multiple deep learning frameworks together with distributed computation, effectively saving overall computation time and substantially improving computational efficiency.
2) Multi-server-node deployment, multi-recognition-container deployment and GPU resource distribution management are carried out through an internal web service; the platform mechanism performs virtualized management and allotment of the GPU resources of the multiple server nodes, accomplishing cooperative scheduling management of multi-machine GPU resources.
3) Through the use of virtualization technology, the related services and their dependencies can all be isolated, so that a variety of deep learning frameworks (whose dependency environments are relatively complicated and whose deployment very easily causes environment conflicts) can be integrated quickly, achieving flexible deployment and quick instantiation of applications on a single machine or on multiple machines; at the same time, through mature distributed-cluster management solutions, large-scale orchestration management of container-image applications can be accomplished on multi-node servers.
Brief description of the drawings
Fig. 1 is a schematic diagram of the server cluster distribution;
Fig. 2 is a schematic diagram of the image data preprocessing process;
Fig. 3 is a schematic diagram of the image data recognition process;
Fig. 4 is a flow diagram of the virtualization scheme.
Detailed description of embodiments
The technical solution of the present invention is described clearly and completely below in conjunction with the embodiments. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, every other embodiment obtained by those skilled in the art without creative labor falls within the scope protected by the present invention.
Referring to Figs. 1-4, the present invention provides a technical solution: a distributed image recognition method based on a virtualized environment, comprising the following steps:
S1: preprocessing image data to obtain target image data.
As shown in Fig. 2, step S1 includes the following sub-steps:
S101: MT reads the configuration file of the platform server node from Cfg.ini, and then starts the related services one by one according to the requirements of the configuration file;
S102: PRE_BASE provides basic data-stream support for the basic components of the service management framework, and implements feedback communication with MT;
S103: on the basis of PRE_BASE, PreInstance formulates and implements the primitive rules of picture preprocessing and video preprocessing; that is, PreInstance is the specific service implemented on the basis of the associated components of the PRE_BASE parent class.
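As an illustrative sketch only (the embodiment names MT, Cfg.ini and the services but discloses neither the configuration schema nor any interfaces, so the section name, keys and method names below are all assumptions), sub-step S101 could be realized as a manager that reads the node configuration and starts the enabled services in file order:

```python
from configparser import ConfigParser


class MT:
    """Minimal management-component sketch: reads the platform server
    node configuration from Cfg.ini and starts the listed services
    one by one, as in sub-step S101."""

    def __init__(self, cfg_text):
        self.cfg = ConfigParser()
        self.cfg.read_string(cfg_text)
        self.started = []

    def drive_services(self):
        # Hypothetical [services] section: service name -> enabled flag.
        for name, enabled in self.cfg.items("services"):
            if enabled.lower() == "true":
                self.started.append(name)  # real code would spawn the service
        return self.started


# Hypothetical Cfg.ini content for one platform server node.
CFG = """
[services]
pre_base = true
preinstance = true
rp_base = false
"""

mt = MT(CFG)
print(mt.drive_services())  # ['pre_base', 'preinstance']
```

Only the services flagged as enabled are started, and in the order the configuration file lists them, which matches the "one by one according to the requirements of the configuration file" behavior.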
In step S103, the PreInstance framework is libkt_if.so; libkt_if.so extracts frames according to time intervals and key frames, and saves the frame data to a local disk or to a memcached server. libkt_if.so is the preprocessing base library, and the Python adapter layer is its Python conversion interface.
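The time-interval frame extraction attributed to libkt_if.so can be illustrated by a small helper that computes which frame indices to extract for an assumed frame rate and sampling interval; actual key-frame positions would come from the video decoder and are simply passed in here (the function and its parameters are illustrative, not part of the disclosed library):

```python
def frames_to_extract(total_frames, fps, interval_s, keyframes=()):
    """Return sorted frame indices to extract: one frame every
    interval_s seconds, plus any decoder-reported key frames."""
    step = max(1, int(round(fps * interval_s)))
    sampled = set(range(0, total_frames, step))
    sampled.update(k for k in keyframes if 0 <= k < total_frames)
    return sorted(sampled)


# 10 s of 25 fps video, one sample per 2 s, key frames at 30 and 110.
print(frames_to_extract(250, 25.0, 2.0, keyframes=[30, 110]))
# [0, 30, 50, 100, 110, 150, 200]
```

Each returned index would then be decoded and the frame data written to local disk or to the memcached server, as step S103 describes.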
Step S1 further includes transcoding of the image data: using a GPU (graphics processing unit) hardware-accelerated mode, the image data are converted into image data of a target format, replacing software algorithms with hardware so as to make full use of the inherent speed of the hardware and improve computational efficiency. Further, other underlying libraries are also included, namely Cuda.so, opencv.so and Ffmpeg.so.
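One plausible realization of the GPU-accelerated transcoding, given that the embodiment lists Ffmpeg.so and Cuda.so among the underlying libraries, is to delegate the work to FFmpeg's NVIDIA decode/encode path. This is an assumption: whether `cuda` and `h264_nvenc` are usable depends on the FFmpeg build and the installed GPU. The sketch only constructs the command line:

```python
def build_transcode_cmd(src, dst, codec="h264_nvenc", hwaccel="cuda"):
    """Build an FFmpeg command that decodes and encodes on the GPU,
    converting src into the target format implied by dst."""
    return [
        "ffmpeg",
        "-hwaccel", hwaccel,  # GPU-side decoding
        "-i", src,
        "-c:v", codec,        # GPU-side encoding (NVENC)
        "-y", dst,
    ]


cmd = build_transcode_cmd("input.flv", "output.mp4")
print(" ".join(cmd))
# To actually run it: subprocess.run(cmd, check=True)
```

Building the argument list separately from running it keeps the GPU-specific choices (decoder, encoder) in one place, so a software fallback could be substituted on nodes without NVENC support.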
S2: the target image data are recognized, image classification and target detection are performed, and framework docking, model-capability instantiation and process security operations are carried out with a unified process. The platform specification thus unifies the framework docking and model-capability instantiation of the mainstream deep learning frameworks, the external data input and output, the process security protection mechanism, and the resource control and allotment mechanism, so that integrating multiple deep learning frameworks in one platform environment achieves quick instantiation of the model capabilities of the multiple deep learning frameworks.
As shown in Fig. 3, step S2 includes the following sub-steps:
S201: MT reads the configuration file of the platform server node from Cfg.ini, and then starts the related services one by one according to the requirements of the configuration file;
S202: RP_BASE provides basic data-stream support for the basic components of the service management framework, and implements feedback communication with MT;
S203: on the basis of RP_BASE, DarknetInstance, CaffeInstance and Caffe2Instance carry out the corresponding network loading, picture parsing and picture recognition processes.
Cfg.ini, MT, RP_BASE, DarknetInstance, CaffeInstance and Caffe2Instance make up the basic platform service suite.
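The relation between the RP_BASE parent and the per-framework instances in sub-step S203 can be sketched as a small class hierarchy with a registry. The class names follow the embodiment; the registry, the method names and the returned strings are assumptions for illustration only:

```python
class RP_BASE:
    """Parent class: common data-stream plumbing shared by all
    recognition instances; subclasses self-register by name."""
    registry = {}

    def __init_subclass__(cls, **kw):
        super().__init_subclass__(**kw)
        RP_BASE.registry[cls.__name__] = cls

    def recognize(self, picture):
        net = self.load_network()            # corresponding network loading
        return self.parse_and_infer(net, picture)


class DarknetInstance(RP_BASE):
    def load_network(self):
        return "darknet-net"

    def parse_and_infer(self, net, picture):
        return f"{net}:{picture}"            # stand-in for parsing + inference


class CaffeInstance(RP_BASE):
    def load_network(self):
        return "caffe-net"

    def parse_and_infer(self, net, picture):
        return f"{net}:{picture}"


# MT would select the instance named in Cfg.ini:
inst = RP_BASE.registry["DarknetInstance"]()
print(inst.recognize("cat.jpg"))  # darknet-net:cat.jpg
```

The registry lets the configuration file name a framework instance as plain text while the parent class keeps the unified process (load, parse, infer) identical across frameworks, which is the "unified platform specification" idea of step S2.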
Further, the deep-learning-framework base libraries include Darknet.so, Caffe.so and Caffe2.so; the Python adapter layer is their Python conversion interface; and the GPU and other underlying libraries include Cuda.so, opencv.so, etc.
As shown in Fig. 1 and Fig. 4, S3: multi-server-node deployment, multi-recognition-container deployment and GPU resource distribution management are carried out through an internal web service. The business data to be processed are stored in files on the server and then managed in bulk; the platform mechanism performs virtualized management and allotment of the GPU resources of the multiple server nodes, accomplishing cooperative scheduling management of multi-machine GPU resources.
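The cooperative scheduling of multi-machine GPU resources in step S3 can be illustrated with a minimal placement sketch; the node names, the free-GPU bookkeeping and the most-free-GPUs policy are illustrative assumptions, since the embodiment does not disclose a scheduling algorithm:

```python
def place_containers(free_gpus, containers):
    """Assign each recognition container to the node that currently
    has the most free GPUs, decrementing that node's count.
    Returns a mapping container -> node."""
    free = dict(free_gpus)
    placement = {}
    for c in containers:
        node = max(free, key=free.get)  # node with most free GPUs
        if free[node] == 0:
            raise RuntimeError("no free GPU for " + c)
        placement[c] = node
        free[node] -= 1
    return placement


nodes = {"node-a": 2, "node-b": 1}
print(place_containers(nodes, ["darknet", "caffe", "caffe2"]))
# {'darknet': 'node-a', 'caffe': 'node-a', 'caffe2': 'node-b'}
```

In the platform this bookkeeping would live behind the internal web service, which reports each node's GPU state and receives the resulting placements.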
S4: container images corresponding to the function classifications are produced, and the system's resource allocation function, image loading function, container instantiation function and container-internal-service self-starting function are customized through engineering scripts.
Through the virtualization scheme, each functional module of a project can be instantiated as an independent image container, so that project deployment and the establishment of cluster functionality can be carried out quickly on a single machine or on multiple machines. With modules such as each deep learning framework and the preprocessing each standing alone as a unique image, quick container instantiation can then be carried out on multiple server nodes, and deployment management becomes simpler and safer.
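Assuming Docker as the container runtime with NVIDIA GPU passthrough (the embodiment names neither, so the flags and image names below are assumptions), the per-module instantiation of step S4 could be sketched as building one `docker run` command per module image:

```python
def container_cmd(module, image, gpu="all"):
    """Build a docker run command that instantiates one functional
    module (preprocessing, darknet, caffe, ...) as its own container."""
    return [
        "docker", "run", "-d",
        "--name", module,
        "--gpus", gpu,          # GPU passthrough for the recognizer
        "--restart", "always",  # container-internal service self-starting
        image,
    ]


for mod, img in [("preprocess", "plat/pre:1.0"), ("darknet", "plat/darknet:1.0")]:
    print(" ".join(container_cmd(mod, img)))
```

The `--restart always` policy is one way to obtain the "container-internal-service self-starting function": the runtime itself relaunches the module's service if the container stops.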
In addition, through the use of virtualization technology, the related services and their dependencies can all be isolated, so that a variety of deep learning frameworks (whose dependency environments are relatively complicated and whose deployment very easily causes environment conflicts) can be integrated quickly, achieving flexible deployment and quick instantiation of applications on a single machine or on multiple machines; at the same time, through mature distributed-cluster management solutions, large-scale orchestration management of container-image applications can be accomplished on multi-node servers.
By integrating the current mainstream deep learning frameworks, the present invention achieves rapid deployment and instantiation of the recognition capabilities of a variety of framework models, and the platform's custom development virtualizes, manages and schedules the GPU resources of the multi-machine GPU assembly. At the same time, based on the virtualization scheme, the preprocessing and model recognition capabilities can be packaged into independent containers and rapidly deployed on a single machine or on multiple machines, realizing the integration of image recognition across multiple deep learning frameworks together with distributed computation, effectively saving overall computation time and substantially improving computational efficiency.
The above is only a preferred embodiment of the present invention. It should be understood that the present invention is not limited to the forms disclosed herein, which should not be regarded as excluding other embodiments; it may be used in various other combinations, modifications and environments, and may be modified within the scope contemplated herein through the above teachings or through the technology or knowledge of related fields. Modifications and changes made by those skilled in the art that do not depart from the spirit and scope of the present invention shall all fall within the protection scope of the appended claims of the present invention.