Detailed Description
Exemplary embodiments of the present disclosure will be described hereinafter with reference to the accompanying drawings. In the interest of clarity and conciseness, not all features of an actual implementation are described in the specification. It will of course be appreciated that in the development of any such actual embodiment, numerous implementation-specific decisions may be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which will vary from one implementation to another.
For clarity in describing the technical solutions of the embodiments of the present invention, the words "first", "second", and the like are used herein to distinguish identical or similar items having substantially the same function or effect. Those skilled in the art will understand that these words do not limit quantity or execution order.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
The term "comprises/comprising" when used herein refers to the presence of a feature, element or component, but does not preclude the presence or addition of one or more other features, elements or components.
The target detection scenarios realized by the embodiments of the invention mainly comprise liquid anomalies, gas anomalies, open-flame anomalies, dust anomalies, and the like, wherein: liquid anomalies include, but are not limited to, an acid-base solution (a jet of water), water (a jet of water or shower-head spray), edible oil (yellowish oil), engine oil (blackish oil), or coal-water slurry (a solid-liquid mixture); gas anomalies include, but are not limited to, water vapor (steam simulated by a humidifier) or smoke (smoke from burning paper); open-flame anomalies include an open flame (simulated by a lighter) or an electric spark (simulating a short circuit); dust anomalies include, but are not limited to, escaping dust (whitish dust) or spilled coal ash (blackish or dark-gray dust).
In the prior art, when identifying a leakage target, R-CNN extracts candidate boxes from an image using a selective search algorithm, inputs the normalized candidate regions into a Convolutional Neural Network (CNN) to extract features, classifies the regions using a Support Vector Machine (SVM), and fine-tunes the position and size of each candidate box using linear regression to obtain more accurate candidate box coordinates. However, R-CNN is time-consuming, because each candidate box requires a computationally expensive CNN forward pass and many of the boxes overlap heavily, duplicating work.
Fast R-CNN instead selects the corresponding regions in the feature map to obtain the CNN features of each region; the features within each region are then typically merged using a max-pooling operation. Specifically, Fast R-CNN jointly trains the convolutional neural network, the classifier, and the bounding-box regression model in a single model. To detect the position of an object in an image, the first step is to generate a series of multi-scale candidate bounding boxes, or regions of interest, to be tested. In Fast R-CNN, however, these regions are still created by selective search, which is a rather slow process.
Faster R-CNN adds a fully convolutional network (FCN) on top of the CNN features to create a Region Proposal Network (RPN). The RPN slides a window over the CNN feature map; at each window position, the network outputs a score and a bounding-box refinement for each anchor, and only the bounding boxes likely to contain a target object are passed on to the Fast R-CNN head for object classification and bounding-box regression.
R-FCN, Fast R-CNN, and Mask R-CNN all belong to candidate-region-based target detection, which is characterized by high detection accuracy, but whose speed cannot meet real-time application requirements.
In an actual leakage scenario, when high-pressure gas or liquid leaks, the direction and size of the jet are random. As the leak develops, the direction of the jet may change continuously (for example, the jet stream widens) and its size may expand step by step, so the leakage target to be detected changes dynamically in the image. The target detection methods described above cannot adapt to such dynamic change, so the detected bounding box is inaccurate and often contains regions that do not belong to the target to be detected. This reduces detection accuracy, affects the user's assessment of the severity of a leakage fault in the production process, and makes alarms unreliable.
In order to solve the above technical problem, an embodiment of the present invention provides a method for training a fully-connected classification network for leakage detection (i.e., detection of running, overflowing, dripping, and leaking). As shown in fig. 1, the method includes:
101. Network parameters are determined by transfer learning using the sample images.
Here, transfer learning is a machine learning technique in which a model trained on one task is reused on another, related task. In this scheme, the network parameters are pre-trained on a base data set, which avoids the enormous resources a deep learning model would otherwise consume, saves training time, and improves training efficiency.
Illustratively, step 101 may be implemented as follows: pre-train the convolutional neural network and the region proposal network on a data set, and determine the network parameters by transfer learning. Preferably, the data set is ImageNet, but it is not limited thereto. The convolutional neural network may be any CNN model, such as ResNet, Inception, or VGG.
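The pre-training and transfer step can be illustrated with a toy sketch (pure Python, no deep learning framework): weights of the shared convolutional layers are copied from a model pre-trained on a base data set such as ImageNet, while task-specific layers are freshly initialized. The parameter names and the `init_from_pretrained` helper are illustrative, not part of the patent.

```python
import random

def init_from_pretrained(pretrained, model, transfer_prefixes=("conv",)):
    """Copy pre-trained weights for the shared convolutional layers into a
    new model, leaving task-specific layers (e.g. RPN head, classifier) to
    be trained from scratch. Toy illustration of the transfer-learning
    step; a real system would load ImageNet-pretrained ResNet/Inception/VGG
    weights instead of these small lists."""
    initialized = {}
    for name, weights in model.items():
        if name.startswith(transfer_prefixes):
            # reuse the learned weights from the base data set
            initialized[name] = list(pretrained[name])
        else:
            # task-specific layer: small random initialization
            initialized[name] = [random.gauss(0.0, 0.01) for _ in weights]
    return initialized
```

For example, `init_from_pretrained({"conv1": [1.0, 2.0]}, {"conv1": [0.0, 0.0], "rpn_score": [0.0]})` reuses the `conv1` weights and re-initializes only `rpn_score`.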
Optionally, before step 101, the method further includes:
101a, capturing, with a camera, a sample image containing the object to be detected.
Illustratively, the sample image is an image containing various types of leakage. Because the training data include various types of leakage images from the start, the trained model can identify various types of leakage targets; thus, when two types of leakage are present simultaneously, both can be identified.
102. The convolutional neural network and the region proposal network are initialized according to the network parameters.
Illustratively, the network parameters are the weights trained on the data set. A convolutional neural network must be trained on data: information is extracted from the data and converted into weights, and these weights are then migrated into the convolutional neural network and the region proposal network. In other words, feature learning is performed automatically by combining deep learning with transfer learning. Training the two networks via a transfer learning mechanism yields a network model with generalization capability for identifying the position and direction of a high-pressure gas or liquid jet.
103. The convolutional neural network is trained according to the region proposal network.
Preferably, the step 103 specifically includes the following steps:
103a1, generating a first training region using the region proposal network.
103a2, training the convolutional neural network based on the first training region.
104. The region proposal network is optimized according to the trained convolutional neural network.
Illustratively, the step 104 specifically includes the following steps:
104a1, training the region proposal network according to the trained convolutional neural network.
104a2, generating a second training region using the trained region proposal network.
104a3, optimizing the region proposal network according to the trained region proposal network and the second training region.
Preferably, step 104a1 may be implemented as follows: initialize the region proposal network with the trained convolutional neural network, keep the convolutional layers unchanged, and adjust only the layers of the region proposal network that differ from the trained convolutional neural network. Step 104a3 may be implemented as follows: keep the convolutional-layer parameters of the trained region proposal network unchanged, and fine-tune the trained region proposal network again using the second training region to obtain the optimized region proposal network. For example, step 104a1 may also be implemented as follows: generate a third training region using the trained convolutional neural network, then train the region proposal network based on the third training region. In steps 103 and 104, the convolutional neural network is first trained with proposals from the region proposal network, and the region proposal network is then optimized with the trained convolutional neural network. Because the candidate bounding boxes of the sample image are extracted by the optimized region proposal network, they are more reasonable and can adapt to dynamic change in the sample image, thereby improving detection accuracy.
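The freeze-then-fine-tune discipline of steps 104a1 and 104a3 — shared convolutional layers kept fixed, only the layers unique to the region proposal network (or classifier) adjusted — can be sketched as a parameter-selection helper. This is a minimal illustration; the parameter names are hypothetical.

```python
def trainable_params(param_names, freeze_prefix="conv"):
    """Return the parameters that remain trainable when the shared
    convolutional layers are frozen, as in steps 104a1/104a3: only the
    layers that differ from the shared backbone are adjusted."""
    return [name for name in param_names if not name.startswith(freeze_prefix)]
```

For example, with parameters `["conv1.w", "conv2.w", "rpn.cls.w", "rpn.reg.w"]`, only the two RPN-specific parameters would be updated during the fine-tuning stage.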
105. The fully-connected classification network is trained according to the sample feature map of the sample image and the candidate bounding boxes of the sample image.
Here, the candidate bounding boxes of the sample image are extracted by the optimized region proposal network. Optionally, before step 105, the method further includes:
105a, extracting candidate bounding boxes of the sample image according to the optimized region proposal network.
Illustratively, the step 105a specifically includes:
105a1, learning the features of the sample image using the trained convolutional neural network to obtain a sample feature map.
105a2, convolving the convolution kernel of the optimized region proposal network with the sample feature map to determine the candidate bounding boxes of the sample image.
Assuming the size of a given input image is M × N, the sample feature map obtained through the convolution operations has size m × n, and there are L feature maps in total, where m > 0 and n > 0. The convolutional neural network comprises a plurality of convolutional layers.
For example, the region proposal network (RPN) optimized in step 104 convolves a 3 × 3 kernel, i.e., a 3 × 3 sliding window, over the m × n sample feature map. The center position of the 3 × 3 kernel is mapped back into the original image, and at that center position anchors are extracted with 3 aspect ratios (1:1, 1:2, 2:1) and 3 scales (64, 128, 256), giving 9 anchors in total, i.e., the candidate bounding boxes.
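The anchor extraction described above (3 scales × 3 aspect ratios = 9 candidate boxes per feature-map position) can be sketched as follows. This is an illustrative reconstruction; the exact anchor parameterisation varies between implementations.

```python
import itertools

def make_anchors(cx, cy, scales=(64, 128, 256), ratios=(1.0, 0.5, 2.0)):
    """Generate the 9 anchor boxes (3 scales x 3 aspect ratios) centred on
    the original-image position (cx, cy) that the 3x3 RPN window maps back
    to. Boxes are (x1, y1, x2, y2)."""
    anchors = []
    for scale, ratio in itertools.product(scales, ratios):
        # keep the area near scale^2 while setting the aspect ratio w/h = ratio
        w = scale * ratio ** 0.5
        h = scale / ratio ** 0.5
        anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors
```

For instance, `make_anchors(100, 100)` returns 9 boxes; the first (scale 64, ratio 1:1) is the square `(68.0, 68.0, 132.0, 132.0)`.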
Optionally, after the step 105a, the method further includes:
105b, inputting the candidate bounding boxes of the sample image, processed by the region pooling layer, into the fully-connected classification network, and outputting the target classification and the bounding-box positions.
105c, adjusting the bounding-box positions according to the coordinate correction amount.
The correction amount is a parameter for correcting the position of the bounding box.
105d, screening the adjusted candidate bounding boxes of the sample image by a biased-sample filtering method to determine new candidate bounding boxes for the sample image.
The candidate bounding boxes that pass through the region-of-interest (ROI) pooling layer in step 105b are input into the fully-connected layer and then into the classification layer and the bounding-box regression layer. The classification layer performs the final target classification, i.e., determines whether a candidate box is a target. The bounding-box regression layer performs regression correction of the target boundary, i.e., it predicts the abscissa x and ordinate y of the top-left vertex of the candidate bounding box corresponding to the mapping of the convolution kernel's center position in the original image, together with the width w and height h of the box. There are 4 × 9 position regressions for each center position (4 coordinates for each of the 9 anchors). In actual computation, the bounding-box regression uses the smooth L1 loss.
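The smooth L1 loss mentioned above is quadratic for small regression errors and linear for large ones, so outlier boxes do not dominate the gradient. A minimal sketch, assuming the common transition point `beta = 1` (the patent does not specify it):

```python
def smooth_l1(x, beta=1.0):
    """Smooth L1 loss for a single regression residual x: 0.5*x^2/beta for
    |x| < beta (quadratic near zero), |x| - 0.5*beta otherwise (linear for
    large errors)."""
    ax = abs(x)
    return 0.5 * x * x / beta if ax < beta else ax - 0.5 * beta
```

For example, `smooth_l1(0.5)` gives `0.125`, while a large residual of `2.0` gives `1.5` rather than the `2.0` of a quadratic loss.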
In step 105d, candidate bounding boxes whose sample deviation is greater than a threshold are selected as new candidate bounding boxes by the biased-sample filtering method. Before the biased-sample filtering method is applied, non-maximum suppression is used to remove candidate bounding boxes whose overlap region (i.e., degree of overlap) is too large.
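Greedy non-maximum suppression, as used here to discard candidate boxes with excessive overlap before the biased-sample filtering, can be sketched as follows. The 0.5 IoU threshold in the usage example is illustrative; the patent does not fix a value.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thresh=0.7):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring
    remaining box and drop every box whose IoU with it exceeds iou_thresh.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= iou_thresh]
    return keep
```

For example, with two heavily overlapping boxes and one distant box, `nms([(0,0,10,10), (1,1,11,11), (20,20,30,30)], [0.9, 0.8, 0.7], 0.5)` keeps only indices `[0, 2]`.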
Combining the convolutional neural network with the biased-sample filtering method effectively reduces the deterioration of training results caused by an imbalance between the numbers of positive and negative samples.
Further optionally, the method further includes:
105e, updating the fully-connected classification network according to the sample feature map of the sample image and the new candidate bounding boxes of the sample image.
105f, executing the following steps in a loop until the coordinate correction amount reaches a preset value: input the candidate bounding boxes of the sample image, processed by the region pooling layer, into the fully-connected classification network, and output the target classification and the bounding-box positions.
For example, the preset value is an empirical value that can be set according to actual image conditions; when the correction amount equals the preset value, the corresponding fully-connected classification network model is most accurate.
Illustratively, steps 105e and 105f are implemented as follows: take the sample feature map of the sample image and the new candidate bounding boxes as a new training sample, process them through the ROI pooling layer, and input them into the fully-connected classification network; perform target classification and target position adjustment again; and update the parameters of the fully-connected classification network through repeated iterative training so that the target position adjustment gradually decreases, obtaining the trained fully-connected classification network when the adjustment reaches the set value. The sample images are divided into a training set and a test set at a ratio of 5:1, and cross-validation is performed.
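The 5:1 train/test split mentioned above can be sketched as a simple shuffled split. The fixed seed, included here so that cross-validation folds are reproducible, is an assumption not stated in the patent.

```python
import random

def split_train_test(samples, ratio=(5, 1), seed=0):
    """Shuffle the sample images and split them train:test = 5:1 (by
    default). Returns (train, test); the seed makes the split repeatable."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n_test = len(items) * ratio[1] // sum(ratio)
    return items[n_test:], items[:n_test]
```

For example, splitting 60 sample images yields 50 training images and 10 test images.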
When the region proposal network adopts a Feature Pyramid Network (FPN), candidate regions with various aspect ratios are generated at each level of the feature pyramid and then selected using non-maximum suppression. Detection with multi-scale features, i.e., combining low-level and high-level features, is particularly effective for detecting tiny leakage targets.
An embodiment of the present invention provides a method for detecting leakage. As shown in fig. 2, the method includes:
201. The bounding box of the image to be detected is identified by the trained fully-connected classification network obtained by the above method, so as to determine the leakage detection result.
Preferably, after the step 201, the method further comprises:
202. The bounding box of the image to be detected is cut out according to a preset rule.
203. The cut-out bounding box is displayed.
Optionally, the preset rule for the cut-out is to cut out a neighboring region of the bounding box containing the image to be detected; the shape of the neighboring region is not limited and may be circular or rectangular.
Preferably, the preset rule is as follows:
1) If the size of the bounding box is M × N, the size of the cut-out box is selected as L × L, where L = min(M, N)/P, P = α × M/N, and α is an adjustment coefficient with a value of 10.
2) A Z-scan is performed starting from one of the four corners of the bounding box, and edge detection is performed on the image within the cut-out box.
3) If no edge is detected, the image within the cut-out box is removed, the scan proceeds to the next cut-out box along the Z-scan direction, and step 2) is repeated.
4) If an edge is detected, the scan from that corner is terminated, the next corner of the bounding box is selected, and steps 2)-3) are repeated until the Z-scans from all four corners of the bounding box are complete.
Optionally, when the bounding box of the image to be detected is cut out, if the selected cut-out scale is too large, a true edge may be removed. In that case, the removed region is filled back in with several minimum bounding boxes so that the complete bounding box of the image to be detected is obtained.
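The cut-out box sizing in rule 1) above (L = min(M, N)/P with P = α × M/N and α = 10) can be computed directly; a minimal sketch:

```python
def cutout_box_size(M, N, alpha=10.0):
    """Side length L of the square L x L cut-out box scanned over an M x N
    bounding box, per the preset rule: P = alpha * M / N, L = min(M, N) / P."""
    P = alpha * M / N
    return min(M, N) / P
```

For example, a 100 × 50 bounding box gives P = 20 and a 2.5 × 2.5 cut-out box, while a 50 × 100 box gives P = 5 and a 10 × 10 cut-out box.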
When the display module performs display, the extraction result can be mapped back to the original image and the bounding box drawn, completing the leakage target detection. The processing of the above steps improves both the speed and the accuracy of leakage target detection.
To further improve detection accuracy, a mask of the leakage target can be obtained directly on the basis of the leakage target's position relative to the bounding box, achieving pixel-level segmentation of the target. In addition, smoothing filtering may be performed according to the mask to remove blocking artifacts.
Compared with the prior art, the method for training a fully-connected classification network for leakage detection (running, overflowing, dripping, and leaking) introduces transfer learning: a training set is generated automatically from sample images and trained automatically to obtain network parameters, and the convolutional neural network and the region proposal network are then initialized from those parameters. Determining the network parameters by transfer learning saves the time spent training the network model, so detection speed is improved when target detection is finally performed. Furthermore, the convolutional neural network is trained on regions pre-selected by the region proposal network, the region proposal network is optimized using the trained convolutional neural network, and the candidate bounding boxes of the sample image are extracted by the optimized region proposal network; the extracted candidate bounding boxes are therefore reasonable, adapt to dynamic changes in the sample image, and improve detection accuracy. An apparatus for training a fully-connected classification network for leakage detection according to an embodiment of the present invention will now be described based on the method embodiment of fig. 1. Technical terms and concepts in the following embodiments that appeared in the above embodiments are not described again here.
An embodiment of the present invention provides an apparatus for training a fully-connected classification network for leakage detection. As shown in fig. 3, the apparatus includes: a first determination module 301, an initialization module 302, a first training module 303, an optimization module 304, and a second training module 305, wherein:
a first determining module 301, configured to determine network parameters through transfer learning using the sample images.
an initialization module 302, configured to initialize the convolutional neural network and the region proposal network according to the network parameters.
a first training module 303, configured to train the convolutional neural network according to the region proposal network.
an optimization module 304, configured to optimize the region proposal network according to the trained convolutional neural network.
a second training module 305, configured to train the fully-connected classification network according to the sample feature map of the sample image and the candidate bounding boxes of the sample image, wherein the candidate bounding boxes of the sample image are extracted by the optimized region proposal network.
Here, transfer learning is a machine learning technique in which a model trained on one task is reused on another, related task. In this scheme, the network parameters are pre-trained on a base data set, which avoids the enormous resources a deep learning model would otherwise consume, saves training time, and improves training efficiency.
For example, the determining module may be implemented as follows: pre-train the convolutional neural network and the region proposal network on a data set, and determine the network parameters by transfer learning. Preferably, the data set is ImageNet, but it is not limited thereto. The convolutional neural network may be any CNN model, such as ResNet, Inception, or VGG.
Optionally, the apparatus further comprises: an acquisition module 306, wherein:
an acquisition module 306, configured to capture, with a camera, a sample image containing the object to be detected.
Illustratively, the sample image is an image containing various types of leakage. Because the training data include various types of leakage images from the start, the trained model can identify various types of leakage targets; thus, when two types of leakage are present simultaneously, both can be identified.
The network parameters are the weights trained on the data set. A convolutional neural network must be trained on data: information is extracted from the data and converted into weights, and these weights are then migrated into the convolutional neural network and the region proposal network; that is, feature learning is performed automatically by combining deep learning with transfer learning. Training the two networks via a transfer learning mechanism yields a network model with generalization capability for identifying the position and direction of a high-pressure gas or liquid jet.
Preferably, the first training module 303 is specifically configured to:
a first training area is generated using an area suggestion network.
The convolutional neural network is trained based on the first training region.
Preferably, the optimization module 304 is specifically configured to:
and training the area suggestion network according to the trained convolutional neural network.
Generating a second training area using the trained area suggestion network.
And optimizing the area suggestion network according to the trained area suggestion network and the second training area.
Preferably, when the optimization module trains the region proposal network according to the trained convolutional neural network, this may be implemented as follows: initialize the region proposal network with the trained convolutional neural network, keep the convolutional layers unchanged, and adjust only the layers of the region proposal network that differ from the trained convolutional neural network. When the optimization module optimizes the region proposal network according to the trained region proposal network and the second training region, this may be implemented as follows: keep the convolutional-layer parameters of the trained region proposal network unchanged, and fine-tune the trained region proposal network again using the second training region to obtain the optimized region proposal network.
Illustratively, the optimization module may also train the region proposal network according to the trained convolutional neural network as follows: generate a third training region using the trained convolutional neural network, then train the region proposal network based on the third training region.
The first training module and the optimization module thus train the convolutional neural network using the region proposal network, and then optimize the region proposal network using the trained convolutional neural network.
Optionally, as shown in fig. 4, the apparatus 3 further includes: an extraction module 307, wherein:
an extraction module 307, configured to extract candidate bounding boxes of the sample image according to the optimized region proposal network.
Illustratively, the extracting module 307 is specifically configured to:
and learning the characteristics of the sample image by adopting the trained convolutional neural network to obtain a sample characteristic diagram.
And (4) convolving the convolution kernel of the optimized area suggestion network with the sample characteristic graph to determine a candidate bounding box of the sample image.
Assuming the size of a given input image is M × N, the sample feature map obtained through the convolution operations has size m × n, and there are L feature maps in total, where m > 0 and n > 0. The convolutional neural network comprises a plurality of convolutional layers.
For example, the optimized region proposal network (RPN) convolves a 3 × 3 kernel, i.e., a 3 × 3 sliding window, over the m × n sample feature map. The center position of the 3 × 3 kernel is mapped back into the original image, and at that center position anchors are extracted with 3 aspect ratios (1:1, 1:2, 2:1) and 3 scales (64, 128, 256), giving 9 anchors in total, i.e., the candidate bounding boxes.
Optionally, as shown in fig. 4, the apparatus 3 further includes: an output module 308, an adjustment module 309, and a second determination module 310, wherein:
an output module 308, configured to input the candidate bounding box of the sample image subjected to the regionalization processing into a fully connected classification network, and output the target classification and the bounding box position.
And an adjusting module 309, configured to adjust the position of the bounding box according to the coordinate correction.
a second determining module 310, configured to screen the adjusted candidate bounding boxes of the sample image according to the biased-sample filtering method and determine new candidate bounding boxes for the sample image.
The correction amount is a parameter for correcting the position of the bounding box.
Here, the candidate bounding boxes that pass through the ROI pooling layer are input into the fully-connected layer and then into the classification layer and the bounding-box regression layer. The classification layer performs the final target classification, i.e., determines whether a candidate box is a target. The bounding-box regression layer performs regression correction of the target boundary, i.e., it predicts the abscissa x and ordinate y of the top-left vertex of the candidate bounding box corresponding to the mapping of the convolution kernel's center position in the original image, together with the width w and height h of the box. There are 4 × 9 position regressions for each center position. In actual computation, the bounding-box regression uses the smooth L1 loss.
The second determining module selects candidate bounding boxes whose sample deviation is greater than a threshold as new candidate bounding boxes using the biased-sample filtering method. Before the biased-sample filtering method is applied, non-maximum suppression is used to remove candidate bounding boxes whose overlap region (i.e., degree of overlap) is too large. Combining the convolutional neural network with the biased-sample filtering method effectively reduces the deterioration of training results caused by an imbalance between the numbers of positive and negative samples.
Further optionally, as shown in fig. 4, the apparatus 3 further includes: an update module 311 and a loop execution module 312, wherein:
the updating module 311 is further configured to update the fully-connected classification network according to the sample feature map of the sample image and the new candidate bounding box of the sample image.
A loop executing module 312, configured to execute the following steps in a loop until the coordinate correction amount is a preset value: and inputting the candidate bounding box of the sample image processed by the regional pooling layer into a full-connection classification network, outputting the target classification, and outputting the position of the bounding box.
For example, the preset value is an empirical value that can be set according to actual image conditions; when the correction amount equals the preset value, the corresponding fully-connected classification network model is most accurate.
For example, the update module and the loop execution module may be implemented as follows: take the sample feature map of the sample image and the new candidate bounding boxes as a new training sample, process them through the ROI pooling layer, and input them into the fully-connected classification network; perform target classification and target position adjustment again; and update the parameters of the fully-connected classification network through repeated iterative training so that the target position adjustment gradually decreases, obtaining the trained fully-connected classification network when the adjustment reaches the set value. The sample images are divided into a training set and a test set at a ratio of 5:1, and cross-validation is performed.
When the region proposal network adopts the feature pyramid network, candidate regions with various aspect ratios are generated at each level of the feature pyramid and then selected using non-maximum suppression. Detection with multi-scale features, i.e., combining low-level and high-level features, is particularly effective for detecting tiny leakage targets.
An embodiment of the present invention provides a device for detecting leakage, and as shown in fig. 5, the device 4 includes:
an identification module 401, configured to identify the bounding box of an image to be detected through the trained fully connected classification network obtained by the apparatus described above, so as to determine the detection result for running, emitting, dripping and leaking.
Preferably, the apparatus 4 further comprises a cutout module 402 and a display module 403, wherein:
the cutout module 402 is configured to cut out the bounding box of the image to be detected according to a preset rule;
the display module 403 is configured to display the cut-out bounding box.
Optionally, the preset rule for cutting out is to cut out a neighborhood containing the bounding box of the image to be detected; the shape of the neighborhood is not limited and may be a circular region or a rectangular region.
Preferably, the preset rule is as follows:
1) if the bounding box size is M × N, select the cutout box size as L × L, where L = MIN(M, N)/P and P = α × M/N; α is an adjustment coefficient with a value of 10.
2) Starting from one of the four corners of the bounding box, perform a Z-scan and run edge detection on the image inside the cutout box.
3) If no edge is detected, remove the image inside the cutout box, continue to the next cutout box along the Z-scan direction, and repeat step 2).
4) If an edge is detected, terminate the scan, select the next corner of the bounding box, and repeat steps 2)-3) until the Z-scans from all four corners of the bounding box are complete.
Optionally, when cutting out the bounding box of the image to be detected, if the selected cutout scale is too large, true edges may be cut away; in that case the cut-out bounding box needs to be filled in with several minimum bounding boxes so that the complete bounding box of the image to be detected is obtained.
When the display module performs display, the extraction result can be mapped back to the original image and the bounding box drawn, completing the leakage target detection. The processing of the above steps improves both the speed and the precision of leakage target detection.
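The sizing formula and the corner scan of the preset rule can be sketched as follows. This is a simplified single-corner illustration under stated assumptions: tiles are modeled as an abstract sequence in Z-scan order, and `has_edge` is a hypothetical predicate standing in for a real edge detector run on each cutout box.

```python
def cutout_size(M, N, alpha=10):
    """Cutout box side L per the preset rule:
    P = alpha * M / N, then L = MIN(M, N) / P."""
    P = alpha * M / N
    return min(M, N) / P

def scan_and_clear(tiles, has_edge):
    """Z-scan cutout tiles from one corner: clear (record) each
    edge-free tile and stop at the first tile containing an edge.
    The full rule repeats this from all four corners."""
    cleared = []
    for idx, tile in enumerate(tiles):
        if has_edge(tile):
            break  # edge found: terminate this corner's scan
        cleared.append(idx)
    return cleared

# For a 200 x 100 bounding box: P = 20, so L = 100 / 20 = 5.
L = cutout_size(200, 100)
# Toy tiles where a nonzero value marks an edge; tiles 0 and 1 are cleared.
cleared = scan_and_clear([0, 0, 1, 0], lambda t: t != 0)
```

Note how α trades off granularity: a larger α gives a smaller cutout box, which scans more slowly but is less likely to cut away true edges, matching the caveat above about an overly large cutout scale.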
Compared with the prior art, by introducing transfer learning, the fully connected classification network trained for running, emitting, dripping and leaking detection can automatically generate a training set from sample images and train on it automatically to obtain network parameters; the convolutional neural network and the region proposal network are then each initialized with the parameters obtained by training. Determining network parameters through transfer learning saves the time spent training the network model, so the detection speed is improved when target detection is finally performed. Secondly, the convolutional neural network is trained on the preselected regions of the region proposal network, the region proposal network is optimized according to the trained convolutional neural network, and the candidate bounding boxes of the sample image are extracted by the optimized region proposal network; the extracted candidate bounding boxes are therefore reasonable, can adapt to dynamic changes in the sample image, and improve the detection precision.
An embodiment of the present invention provides a computer storage medium storing a computer program for executing the above-described method.
By way of example, computer-readable storage media can be any available media that can be accessed by a computer, or a data storage device, such as a server or data center, that includes one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., a solid-state disk (SSD)), among others.
Through the above description of the embodiments, it is clear to those skilled in the art that, for convenience and simplicity of description, only the division of the functional modules is illustrated, and in practical applications, the above function distribution may be completed by different functional modules according to needs, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the above described functions. For the specific working processes of the system, the apparatus and the unit described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not described here again.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.