Disclosure of Invention
In view of the above technical problems, it is accordingly necessary to provide a three-dimensional moving object detection method, apparatus, computer device, and storage medium.
A method of three-dimensional moving object detection, the method comprising:
acquiring an original point cloud of a moving object to be detected in a scene;
performing data enhancement on the background information and the small target information in the original point cloud to obtain point cloud enhancement data;
highlighting local features of the point cloud enhancement data through a data reduction structure, and filtering irrelevant information to obtain point cloud reduced data;
extracting point cloud features from the point cloud reduced data through a feature extraction network, and integrating gradient changes into a feature map containing the point cloud features through a gradient integration network according to the point cloud features, to obtain point cloud integrated features;
performing overhead view pseudo-image processing on the original point cloud to obtain a target pseudo-image, detecting the target pseudo-image through an object recognition framework, and taking the obtained detection result as a reference index; and inputting the point cloud integrated features and the reference index into a pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
In one embodiment, the point cloud features include: classification results of foreground points and background points, and initial regression results of the foreground points and background points with respect to a target initial candidate frame. Extracting the point cloud features in the point cloud reduced data through the feature extraction network comprises: extracting, through the feature extraction network, the classification results of the foreground points and background points in the point cloud reduced data, and the initial regression results of the foreground points and background points with respect to the target initial candidate frame.
In one embodiment, the method further comprises: inputting the point cloud integrated features into a pre-trained region generation network to obtain a target initial candidate frame; performing sigmoid scoring on the foreground point classification result, and obtaining a foreground point mask according to the sigmoid scoring result; and inputting the foreground point mask, the reference index and the target initial candidate frame into a pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
In one embodiment, the method further comprises: generating a data set for storing truth information according to the point cloud reduced data, wherein the data set comprises the file name of each truth frame, the target object category, the target object 3D information, and the point cloud data information within the truth frame; the truth frame corresponds to a first candidate frame.
In one embodiment, the method further comprises: processing the target initial candidate frame through an interval (bin)-based refinement operation according to the initial regression result to obtain a first candidate frame; comparing the first candidate frame with the truth frame and screening to obtain a second candidate frame; and inputting the foreground point mask, the reference index, and the second candidate frame into the pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
In one embodiment, the foreground point mask consists of the foreground points whose sigmoid scoring results are greater than a threshold value.
A three-dimensional moving object detection apparatus, the apparatus comprising:
a point cloud acquisition module, configured to acquire an original point cloud of a target to be detected in a scene;
a point cloud enhancement module, configured to perform data enhancement on the background information and the small target information in the original point cloud to obtain point cloud enhancement data;
a point cloud reduction module, configured to highlight local features of the point cloud enhancement data through the data reduction structure, and filter irrelevant information to obtain the point cloud reduced data;
a feature extraction module, configured to extract the point cloud features in the point cloud reduced data through a feature extraction network, and to integrate gradient changes into a feature map containing the point cloud features through a gradient integration network according to the point cloud features, to obtain the point cloud integrated features; and
a three-dimensional target recognition module, configured to perform overhead view pseudo-image processing on the original point cloud to obtain a target pseudo-image, detect the target pseudo-image through an object recognition framework, and take the obtained detection result as a reference index, and to input the point cloud integrated features and the reference index into a pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
A computer device comprising a memory storing a computer program and a processor which, when executing the computer program, performs the steps of:
acquiring an original point cloud of a moving object to be detected in a scene;
performing data enhancement on the background information and the small target information in the original point cloud to obtain point cloud enhancement data;
highlighting local features of the point cloud enhancement data through a data reduction structure, and filtering irrelevant information to obtain point cloud reduced data;
extracting point cloud features from the point cloud reduced data through a feature extraction network, and integrating gradient changes into a feature map containing the point cloud features through a gradient integration network according to the point cloud features, to obtain point cloud integrated features;
performing overhead view pseudo-image processing on the original point cloud to obtain a target pseudo-image, detecting the target pseudo-image through an object recognition framework, and taking the obtained detection result as a reference index; and inputting the point cloud integrated features and the reference index into a pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of:
acquiring an original point cloud of a moving object to be detected in a scene;
performing data enhancement on the background information and the small target information in the original point cloud to obtain point cloud enhancement data;
highlighting local features of the point cloud enhancement data through a data reduction structure, and filtering irrelevant information to obtain point cloud reduced data;
extracting point cloud features from the point cloud reduced data through a feature extraction network, and integrating gradient changes into a feature map containing the point cloud features through a gradient integration network according to the point cloud features, to obtain point cloud integrated features;
performing overhead view pseudo-image processing on the original point cloud to obtain a target pseudo-image, detecting the target pseudo-image through an object recognition framework, and taking the obtained detection result as a reference index; and inputting the point cloud integrated features and the reference index into a pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
According to the three-dimensional moving object detection method, apparatus, computer device and storage medium above, performing data enhancement processing on the original point cloud enriches the features of the background and of small objects. Local features are highlighted through the data reduction structure and irrelevant information is filtered out to obtain the point cloud reduced data, which reduces computational complexity while preserving information integrity. The point cloud features of the point cloud reduced data are extracted through the feature extraction network, and the feature gradient information in network optimization is enhanced through the gradient integration network to obtain the point cloud integrated features, which guarantees the accuracy of the detection result while reducing the amount of network computation. A prior detection result of the pseudo-image is acquired through the object recognition framework and, as a reference index, is input together with the point cloud integrated features into the pre-trained three-dimensional target recognition network, yielding the detection result for the moving target. The embodiments of the invention thus further improve the detection accuracy and robustness of target detection in three-dimensional point cloud data and correspondingly improve practical efficiency in related application scenarios: a three-dimensional information acquisition sensor can collect the moving target point cloud in real time, and the presence and types of targets in the detection field of view are analyzed, realizing detection of the moving target.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
In one embodiment, as shown in fig. 1, there is provided a three-dimensional moving object detection method, including the steps of:
step 102, acquiring an original point cloud of a moving object to be detected in a scene.
Step 102 is implemented by acquiring data of the scene where the target is located with a three-dimensional information acquisition sensor such as a millimeter-wave radar, a lidar, or a TOF camera.
Step 104, performing data enhancement on the background information and the small target information in the original point cloud to obtain point cloud enhancement data.
Data enhancement mainly serves to reduce over-fitting of the network; by transforming the training samples, a network with stronger generalization capability can be obtained, better suited to the application scene. The Mosaic structure is one such data enhancement structure: performing data enhancement through the Mosaic structure enriches the background and small-target features of the original point cloud data, reduces network running time, and improves network robustness.
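As an illustration of the idea only, the following is a minimal sketch of a Mosaic-style combination for point clouds, assuming the four input clouds share one coordinate frame and the composite is cut at a random split point as in the image version; the ranges and the function name are illustrative, not details fixed by this description.

```python
import numpy as np

def mosaic_point_clouds(clouds, x_range=(0.0, 70.0), y_range=(-40.0, 40.0)):
    """Tile crops of four point clouds into one composite training scene.

    clouds: four (N_i, 4) arrays with columns (x, y, z, intensity).
    A hypothetical point-cloud analogue of image Mosaic augmentation.
    """
    assert len(clouds) == 4
    cx = np.random.uniform(*x_range)   # random split point, as in image Mosaic
    cy = np.random.uniform(*y_range)
    quadrants = [
        lambda p: (p[:, 0] <  cx) & (p[:, 1] <  cy),
        lambda p: (p[:, 0] <  cx) & (p[:, 1] >= cy),
        lambda p: (p[:, 0] >= cx) & (p[:, 1] <  cy),
        lambda p: (p[:, 0] >= cx) & (p[:, 1] >= cy),
    ]
    # each input cloud contributes the points falling in one quadrant
    parts = [c[q(c)] for c, q in zip(clouds, quadrants)]
    return np.concatenate(parts, axis=0)
```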
Step 106, highlighting local features of the point cloud enhancement data through the data reduction structure, and filtering irrelevant information to obtain the point cloud reduced data.
The data reduction structure can be used to reduce the point cloud enhancement data. The data reduction structure includes a Focus structure, which performs a slicing operation on the point cloud enhancement data; slicing makes it convenient for the feature extraction network to extract information and at the same time yields a downsampled feature map without information loss, highlighting local features and thereby achieving the reduction of the data.
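The slicing itself is a fixed rearrangement. The sketch below shows a Focus-style slice on an image-like grid, assuming the point cloud enhancement data has already been rasterized into a (B, C, H, W) tensor; the channel count and sizes are illustrative.

```python
import torch

def focus_slice(x: torch.Tensor) -> torch.Tensor:
    """Focus-style slicing: (B, C, H, W) -> (B, 4C, H/2, W/2).

    Every second cell is taken in both directions; the four interleaved
    sub-maps together keep all input values, so this is a downsampling
    without information loss rather than a pooling.
    """
    return torch.cat(
        [x[..., ::2, ::2], x[..., 1::2, ::2],
         x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)

x = torch.randn(1, 9, 64, 64)      # e.g. a 9-channel BEV grid (illustrative)
print(focus_slice(x).shape)        # torch.Size([1, 36, 32, 32])
```

A convolution typically follows the slice to mix the stacked channels.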
Step 108, extracting point cloud features from the point cloud reduced data through a feature extraction network; and integrating gradient changes into a feature map containing the point cloud features through a gradient integration network according to the point cloud features, to obtain the point cloud integrated features.
The feature extraction network comprises PointNet++ and a feature weight extraction mechanism. PointNet++ selects a series of points from the input point cloud reduced data through a sampling layer, thereby defining the centers of local regions; it then constructs the local regions and extracts features from them. PointNet extracts point cloud features well and serves as a sub-network inside PointNet++, which extracts features in a hierarchical, iterative manner; by learning features from local region information through this hierarchy, PointNet++ makes the network structure more efficient and robust. The gradient integration network comprises a CSPNet (Cross Stage Partial Network). By dividing the gradient flow, CSPNet lets gradients propagate through different network paths, and by integrating the feature maps at the beginning and end of a network stage it preserves the variability of the gradients, achieving richer gradient combinations while reducing the amount of computation. The resulting point cloud integrated features are used as the input of the three-dimensional target recognition network, which improves the operating efficiency of the RCNN stage.
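A sketch of the gradient-splitting idea follows; the stage sizes and the use of 1x1 convolutions over per-point features are assumptions for illustration, not details fixed by this text.

```python
import torch
import torch.nn as nn

class CSPBlock(nn.Module):
    """Cross Stage Partial block: split the features into two paths, run
    the sub-network on one path only, and re-join at the end of the stage,
    so the two paths carry different gradient flows."""

    def __init__(self, channels, hidden=None):
        super().__init__()
        hidden = hidden or channels // 2
        self.part1 = nn.Conv1d(channels, hidden, 1)   # transformed path
        self.part2 = nn.Conv1d(channels, hidden, 1)   # cross-stage shortcut
        self.blocks = nn.Sequential(
            nn.Conv1d(hidden, hidden, 1), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 1), nn.BatchNorm1d(hidden), nn.ReLU(),
        )
        self.fuse = nn.Conv1d(2 * hidden, channels, 1)

    def forward(self, x):                  # x: (B, C, N) per-point features
        a = self.blocks(self.part1(x))     # gradients pass the sub-network
        b = self.part2(x)                  # gradients bypass it
        return self.fuse(torch.cat([a, b], dim=1))
```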
Step 110, performing overhead view pseudo-image processing on the original point cloud to obtain a target pseudo-image, detecting the target pseudo-image through the object recognition framework, and taking the obtained detection result as a reference index; and inputting the point cloud integrated features and the reference index into the pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
In general, point cloud data obtained from a lidar is expressed as X, Y, Z three-dimensional coordinates plus a reflection intensity i. Viewed from the bird's eye view, the point cloud is scattered into a uniformly divided grid on the X-Y plane to obtain a set of pillars P, and each point in a pillar is augmented with the parameters $x_c, y_c, z_c, x_p$ and $y_p$ (where the subscript c denotes the distance of the point from the arithmetic mean of all points in the pillar, and the subscript p denotes the offset of the point from the pillar center), so that each point in the point cloud data has nine dimensions. By imposing limits on the number of non-empty pillars per sample and the number of points per pillar, a tensor of size (D, P, N) can be created. In this process, a pillar holding too many points is randomly subsampled, and one holding too few is padded with zeros. PointNet can then process the tensorized point cloud data and extract features to generate a (C, P, N) tensor; applying a max-pooling operation over the channels yields a (C, P) tensor, and after encoding, the features are scattered back to their original pillar positions, creating a pseudo-image of size (C, H, W). The pseudo-image detection result is taken as a reference result and compared with the prediction results output by the improved PointRCNN, providing bird's-eye-view reference information for determining target positions in some complex scenes and thus further improving detection accuracy.
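A minimal numpy sketch of this pillarization follows; the detection range, cell size, and pillar limits are illustrative values in the spirit of the description, not parameters fixed by it.

```python
import numpy as np

def pillarize(points, x_min=0.0, y_min=-40.0, cell=0.16,
              grid=(432, 496), max_pillars=12000, max_pts=100):
    """Scatter (x, y, z, i) points into BEV pillars and build the (D, P, N)
    tensor described above, with D = 9 after the x_c..y_p augmentation."""
    shifted = points[:, :2] - np.array([x_min, y_min])
    xy_idx = np.floor(shifted / cell).astype(np.int64)
    ok = ((xy_idx[:, 0] >= 0) & (xy_idx[:, 0] < grid[0]) &
          (xy_idx[:, 1] >= 0) & (xy_idx[:, 1] < grid[1]))
    points, xy_idx = points[ok], xy_idx[ok]
    keys = xy_idx[:, 0] * grid[1] + xy_idx[:, 1]   # one key per pillar
    out = np.zeros((9, max_pillars, max_pts), dtype=np.float32)
    coords = []                                    # pillar positions for scatter-back
    for p, key in enumerate(np.unique(keys)[:max_pillars]):
        pts = points[keys == key]
        if len(pts) > max_pts:                     # too many points: random sample
            pts = pts[np.random.choice(len(pts), max_pts, replace=False)]
        mean = pts[:, :3].mean(axis=0)             # arithmetic mean of the pillar
        ix, iy = key // grid[1], key % grid[1]
        center = np.array([ix + 0.5, iy + 0.5]) * cell + np.array([x_min, y_min])
        feat = np.concatenate([pts,                   # x, y, z, i
                               pts[:, :3] - mean,     # x_c, y_c, z_c
                               pts[:, :2] - center],  # x_p, y_p
                              axis=1)                 # -> (n, 9)
        out[:, p, :len(pts)] = feat.T              # zero padding if too few points
        coords.append((ix, iy))
    return out, coords
```

A PointNet applied over the last axis followed by max pooling then yields the (C, P) features that are scattered back to `coords` to form the (C, H, W) pseudo-image.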
The object recognition framework includes Yolo V5. Yolo V5 delivers each batch of training data through a data loader and augments it at the same time; the data loader can perform three kinds of data enhancement: scaling, color space adjustment, and Mosaic enhancement. Prior detection is performed on targets in the pseudo-image through the object recognition framework, the detection results are taken as the reference index, and the reference index serves as an input to the pre-trained three-dimensional target recognition network. The three-dimensional target recognition network refers to the improved PointRCNN, which can regress a higher-precision detection result from multiple candidate frames; feeding in the reference index further improves detection accuracy and robustness.
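How the prior detections become a reference index is not spelled out above; one plausible packaging, treating the detector as a black box that returns pixel-space boxes and converting them back to metric bird's-eye-view coordinates, is sketched below (the (x1, y1, x2, y2, score, cls) output layout is an assumption):

```python
import numpy as np

def bev_reference_index(det_boxes_px, cell=0.16, x_min=0.0, y_min=-40.0):
    """Convert 2D detections on the (C, H, W) pseudo-image back into metric
    bird's-eye-view boxes so they can serve as a reference index.

    det_boxes_px: (M, 6) array of (x1, y1, x2, y2, score, cls) in pixels
    (a common detector output layout, assumed here).
    """
    boxes = det_boxes_px[:, :4].astype(np.float64) * cell   # pixels -> meters
    boxes[:, [0, 2]] += x_min                               # shift to grid origin
    boxes[:, [1, 3]] += y_min
    return np.concatenate([boxes, det_boxes_px[:, 4:]], axis=1)
```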
In the three-dimensional moving object detection method above, performing data enhancement processing on the original point cloud enriches the features of the background and of small objects. Local features are highlighted through the data reduction structure and irrelevant information is filtered out to obtain the point cloud reduced data, which reduces computational complexity while preserving information integrity. The point cloud features of the point cloud reduced data are extracted through the feature extraction network, and the feature gradient information in network optimization is enhanced through the gradient integration network to obtain the point cloud integrated features, which guarantees the accuracy of the detection result while reducing the amount of network computation. The prior detection result of the pseudo-image is acquired through the object recognition framework and, as the reference index, is input together with the point cloud integrated features into the pre-trained three-dimensional target recognition network, yielding the detection result for the moving target. According to the embodiment of the invention, the moving target point cloud can be collected in real time with a three-dimensional information acquisition sensor, and the presence and types of targets in the detection field of view are analyzed, realizing detection of the moving target.
In one embodiment, acquiring the original point cloud of the target to be detected in the scene includes: performing data acquisition on the scene where the target to be detected is located through a three-dimensional point cloud data acquisition device, and adding negative samples to remove irrelevant targets from the scene, obtaining the original point cloud of the target to be detected. In this embodiment, negative samples are added to address false detection of irrelevant targets in complex scenes. Point cloud target detection algorithms are currently applied mostly in the field of automatic driving, where vehicles, lanes, and obstacles matter most. To identify object information accurately, negative samples can further be added for training; for example, when detecting vehicle targets, point cloud negative samples of objects that resemble vehicles, such as containers and newsstands, can be added, and the negative samples can be chosen according to the application scene.
In one embodiment, the point cloud features include: classification results of foreground points and background points, and initial regression results of the foreground points and background points with respect to the target initial candidate frame; extracting the point cloud features in the point cloud reduced data through the feature extraction network comprises: extracting, through the feature extraction network, the classification results of the foreground points and background points in the point cloud reduced data, and the initial regression results of the foreground points and background points with respect to the target initial candidate frame.
In this embodiment, foreground points are points belonging to a target category, while points on the ground, shrubs, or houses are background points. The purpose of classifying foreground and background points is to separate foreground objects; a key problem in doing so is determining a proper background, since from the pixel point of view every pixel may be either a foreground or a background point, and classification is needed to prevent points that actually belong to the foreground from being treated as background. Segmenting the point cloud into foreground and background produces a small number of high-quality 3D target initial candidate frames, and a mask of the foreground points can be obtained from the foreground points. The target initial candidate frames are obtained by inputting the point cloud integrated features into a region proposal network (Region Proposal Network, RPN), and the initial regression results allow the foreground points to be regressed, reducing the number of point clouds involved. In the foreground point cloud segmentation stage, foreground points are few and background points are many, so the class counts are highly unbalanced; focal loss reduces the weight that the large number of easy negative samples carries in training. The loss function formula is as follows:
$$\mathcal{L}_{focal} = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$$
where $\alpha_t$ is a balance factor, $\gamma$ is an adjustable focusing factor, and $p_t$ is the probability predicted for the true class of the sample, so that well-classified samples contribute little to the loss; in the formula above, $\alpha = 0.25$ and $\gamma = 2$ are set. Specifically, the point cloud features of the foreground points are passed through the RPN to acquire the 3D frames of the target initial candidate frames, each foreground point regressing one 3D target frame.
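A minimal PyTorch sketch of the binary focal loss above (foreground = 1, background = 0); the clamp constant is a numerical-stability detail added here.

```python
import torch

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss for foreground/background point classification.

    logits: (N,) raw per-point scores; targets: (N,) in {0, 1}.
    Down-weights the many easy background points so that the rare
    foreground points dominate the gradient, as in the formula above.
    """
    p = torch.sigmoid(logits)
    p_t = torch.where(targets > 0, p, 1 - p)        # probability of true class
    alpha_t = torch.where(targets > 0,
                          torch.full_like(p, alpha),
                          torch.full_like(p, 1 - alpha))
    loss = -alpha_t * (1 - p_t).pow(gamma) * torch.log(p_t.clamp(min=1e-8))
    return loss.mean()
```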
In one embodiment, inputting the point cloud integrated features and the reference index into the pre-trained three-dimensional target recognition network includes: inputting the point cloud integrated features into a pre-trained region generation network to obtain a target initial candidate frame; performing sigmoid scoring on the foreground point classification result, and obtaining a foreground point mask according to the sigmoid scoring result; and inputting the foreground point mask, the reference index, and the target initial candidate frame into the pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
In this embodiment, the target initial candidate frames obtained from the point cloud integrated features are input into the improved PointRCNN for further screening. Broadly, masking means blocking a processed image with a selected image, figure, or object so as to control the image processing region or process; in this embodiment, the pixels of the image are filtered through an n×n matrix so that the foreground is highlighted, and that matrix is the mask. The improved PointRCNN therefore only considers the 3D frames of foreground points and filters out the 3D frames of non-foreground points.
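In code, the mask reduces to a boolean filter over the per-point scores; a sketch follows, with the threshold value and the 7-value box layout (x, y, z, h, w, l, θ) as assumptions.

```python
import torch

def foreground_mask(cls_logits, threshold=0.5):
    """Keep only the points whose sigmoid foreground score clears a
    threshold; the surviving indices select which per-point 3D candidate
    frames the refinement stage considers."""
    return torch.sigmoid(cls_logits) > threshold   # boolean mask over points

cls_logits = torch.randn(1024)                 # per-point classification scores
boxes = torch.randn(1024, 7)                   # per-point proposals (assumed layout)
kept = boxes[foreground_mask(cls_logits)]      # non-foreground frames filtered out
```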
In one embodiment, before the point cloud integrated features and the reference index are input into the pre-trained three-dimensional target recognition network, the method comprises: generating a data set for storing truth information according to the point cloud reduced data, wherein the data set comprises the file name of each truth frame, the target object category, the target object 3D information, and the point cloud data information within the truth frame; the truth frame corresponds to the first candidate frame.
In one embodiment, inputting the foreground point mask, the reference index, and the target initial candidate frame into the pre-trained three-dimensional target recognition network to obtain a moving target recognition result comprises: processing the target initial candidate frame through the interval (bin)-based refinement operation according to the initial regression result to obtain a first candidate frame; comparing the first candidate frame with the truth frame and screening to obtain a second candidate frame; and inputting the foreground point mask, the reference index, and the second candidate frame into the pre-trained three-dimensional target recognition network to obtain the moving target recognition result.
In this embodiment, the interval (bin)-based refinement operation groups the initial regression results along the coordinate axes. Grouped data behave more stably, so the number of target initial candidate frames is reduced and the first candidate frames are obtained. For the bin-based process, the loss function is:
$$\mathcal{L}_{reg} = \frac{1}{N_{pos}} \sum_{p \in pos} \Bigg( \sum_{u \in \{x,z,\theta\}} \Big( \mathcal{F}_{cls}\big(\widehat{bin}_u^{(p)}, bin_u^{(p)}\big) + \mathcal{F}_{reg}\big(\widehat{res}_u^{(p)}, res_u^{(p)}\big) \Big) + \sum_{v \in \{y,h,w,l\}} \mathcal{F}_{reg}\big(\widehat{res}_v^{(p)}, res_v^{(p)}\big) \Bigg)$$

where pos is the foreground point set and $N_{pos}$ is the number of foreground points; $\widehat{bin}_u^{(p)}$ is the bin prediction obtained after the bin process and $\widehat{res}_u^{(p)}$ is its residual relative to the foreground point prediction frame, while $bin_u^{(p)}$ and $res_u^{(p)}$ are the corresponding truth targets; $\mathcal{F}_{cls}$ is the cross-entropy classification loss and $\mathcal{F}_{reg}$ is the smooth L1 loss; u ranges over the parameters x, z, θ, where x and z are coordinates of the target center point and θ is the direction angle of the predicted frame under the bird's eye view, and v ranges over the parameters y, h, w, l, where y is the y-axis coordinate of the target center point and h, w, l are the dimensions of the predicted frame, namely its height, width, and length. In the improved PointRCNN, the loss function is:
$$\mathcal{L}_{refine} = \frac{1}{\|\mathcal{B}\|} \sum_{b_i \in \mathcal{B}} \mathcal{F}_{cls}(prob_i, label_i) + \frac{1}{\|\tilde{\mathcal{B}}\|} \sum_{b_i \in \tilde{\mathcal{B}}} \Big( \tilde{\mathcal{L}}_{bin}^{(i)} + \tilde{\mathcal{L}}_{res}^{(i)} \Big)$$

where $\mathcal{B}$ is the set of target initial candidate frames and $\tilde{\mathcal{B}}$ stores the first candidate frames obtained after regression; $prob_i$ is the confidence of $b_i$, $b_i$ is a target initial candidate frame, and $label_i$ is the degree of coincidence of the first candidate frame with the truth frame; $\mathcal{F}_{cls}$ is the cross-entropy loss supervising the predicted confidence, and $\tilde{\mathcal{L}}_{bin}^{(i)}$ and $\tilde{\mathcal{L}}_{res}^{(i)}$ are the bin and residual regression terms of the previous formula computed against the truth frame $\tilde{b}_i$ with the smooth L1 loss $\mathcal{F}_{reg}$, u again ranging over the parameters x, z, θ (x and z being coordinates of the target center point, θ the direction angle of the predicted frame under the bird's eye view) and v over the parameters y, h, w, l (y being the y-axis coordinate of the target center point, and h, w, l the height, width, and length of the predicted frame).
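For concreteness, here is a sketch of the two mechanisms these losses rest on: the interval (bin) encoding of a center offset, and the overlap grade usable as $label_i$. The search range, bin size, and the axis-aligned IoU are illustrative simplifications, not values fixed by this text.

```python
import numpy as np

def bin_encode(delta, search_range=3.0, bin_size=0.5):
    """Interval (bin) encoding of one center offset, PointRCNN-style.

    delta: target center minus foreground point coordinate on one axis.
    Returns (bin_id, residual): the interval the offset falls into, and
    the normalized remainder regressed inside it. Grouping offsets into
    intervals is what stabilizes the regression described above.
    """
    shifted = np.clip(delta + search_range, 0.0, 2.0 * search_range - 1e-6)
    bin_id = int(shifted // bin_size)
    residual = (shifted - bin_id * bin_size) / bin_size - 0.5
    return bin_id, residual

def bev_iou(a, b):
    """Axis-aligned bird's-eye-view IoU of two boxes (x1, y1, x2, y2);
    used here to grade a first candidate frame against its truth frame."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0
```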
In one embodiment, the specific steps of training the improved PointRCNN are as follows:
s10: the three-dimensional point cloud data set is widely collected, and a large number of experimental scenes are collected through the point cloud data collection device, wherein the experimental scenes comprise: road vehicle point cloud scene, road pedestrian point cloud scene, traffic identification point cloud data, and the like.
S20: and performing pseudo-image processing on the original point cloud to obtain a pseudo-image of the aerial view, inputting the pseudo-image into a Yolo V5 network, and generating a priori pseudo-image detection result.
S30: and carrying out data enhancement processing on the original point cloud through the Mosaic structure, and improving background information and small target information in the data to obtain point cloud enhancement data.
S40: taking PointNet++ as a backbox, adding a focus structure and a CSPNet structure into the backbox to perform data reduction on point cloud enhancement data, reserving important data information, and inputting the data into the backbox to obtain initial foreground points, background point classification results and regression results of each point.
S50: and generating a file storing the truth information of the data set, wherein the file comprises the file name of the truth box, the category of the target object, the 3D information of the target object, the point cloud data in the truth box and the like.
S60: inputting the point cloud integrated characteristics into an RPN network, and performing network training to obtain a target initial candidate frame of a target object in the point cloud data.
S70: and inputting the target initial candidate frame into the improved PointRCNN to further reduce redundant frames, and simultaneously combining the detection result of the Yolo V5 on the pseudo image to output a final target detection frame.
In a specific embodiment, as shown in fig. 2, a flow diagram of the three-dimensional moving object detection method is provided. The method can directly process the acquired original point cloud and generate detection results. Specifically, data enhancement is performed on the original point cloud through the Mosaic structure to obtain point cloud enhancement data, and lightweight processing is performed on the point cloud enhancement data through the Focus structure to obtain the point cloud reduced data. Feature extraction is performed on the point cloud reduced data through PointNet++ and the feature weight extraction mechanism, and the feature gradient information in network optimization is integrated into the feature map through CSPNet to obtain the point cloud integrated features, which are input into the pre-trained RPN to generate the target initial candidate frames. At the same time, bird's-eye-view pseudo-image processing is performed on the original point cloud, a Yolo V5 structure detects the targets in the pseudo-image, and the generated pseudo-image prior result is used as the reference index. The improved PointRCNN further reduces redundant frames among the target initial candidate frames and, with the reference index as an additional input, generates the target detection frames.
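Reading fig. 2 as plain function composition gives the following sketch; every entry of `modules` is a stand-in callable for the corresponding component, not an API defined by this text.

```python
def detect_moving_objects(raw_cloud, modules):
    """End-to-end flow of fig. 2 expressed as composition of stand-ins."""
    enhanced = modules["mosaic"](raw_cloud)         # data enhancement
    reduced = modules["focus"](enhanced)            # data reduction
    features = modules["backbone"](reduced)         # PointNet++ + feature weights
    integrated = modules["csp"](features)           # gradient integration
    proposals = modules["rpn"](integrated)          # target initial candidate frames
    pseudo_image = modules["pillarize"](raw_cloud)  # BEV pseudo-image
    reference = modules["detector"](pseudo_image)   # prior detections (reference index)
    return modules["rcnn"](proposals, reference)    # final 3D detection frames
```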
It should be understood that, although the steps in the flowcharts of figs. 1-2 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the order of execution of these steps is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 1-2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments; the order of their execution is not necessarily sequential, and they may be performed in turn or in alternation with at least a portion of the sub-steps or stages of other steps.
In one embodiment, there is provided a three-dimensional moving object detection apparatus including: a point cloud acquisition module 302, a point cloud enhancement module 304, a point cloud reduction module 306, a feature extraction module 308, and a three-dimensional target recognition module 310, wherein:
the point cloud acquisition module 302 is configured to acquire an original point cloud of a target to be detected in a scene;
the point cloud enhancement module 304 is configured to perform data enhancement on background information and small target information in an original point cloud to obtain point cloud enhancement data;
the point cloud simplifying module 306 is configured to highlight local features of the point cloud enhancement data through the data simplifying structure, and filter irrelevant information to obtain the point cloud simplifying data.
The feature extraction module 308 is configured to extract point cloud features in the point cloud reduced data through a feature extraction network; and integrating the change of the gradient into a feature map containing the point cloud features through a gradient integration network according to the point cloud features to obtain the point cloud integration features.
the three-dimensional target recognition module 310 is configured to perform overhead view pseudo-image processing on the original point cloud to obtain a target pseudo-image, detect the target pseudo-image through the object recognition framework, and use the obtained detection result as a reference index, and to input the point cloud integrated features and the reference index into a pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
In one embodiment, the point cloud obtaining module 302 is further configured to perform data collection on a scene where the target to be detected is located through a three-dimensional point cloud data collection device, and add a negative sample to remove irrelevant targets in the scene, so as to obtain an original point cloud of the target to be detected.
In one embodiment, the feature extraction module 308 is further configured to extract, via the feature extraction network, point cloud features in the point cloud reduced data, including: and extracting foreground points and background point classification results in the point cloud reduced data and initial regression results of the foreground points and the background points on the initial candidate frames of the targets through the feature extraction network.
In one embodiment, the three-dimensional target recognition module 310 is further configured to input the point cloud integrated feature into a pre-trained region generation network to obtain a target initial candidate frame; performing sigmoid scoring on the foreground point classification result, and obtaining a foreground point mask according to the sigmoid scoring result; and inputting the foreground point mask, the reference index and the target initial candidate frame into a pre-trained three-dimensional target recognition network to obtain a moving target recognition result.
In one embodiment, the apparatus is further configured to generate a data set for storing truth information according to the point cloud reduced data, wherein the data set comprises the file name of each truth frame, the target object category, the target object 3D information, and the point cloud data information within the truth frame; the truth frame corresponds to the first candidate frame.
In one embodiment, the three-dimensional object recognition module 310 is further configured to process the initial candidate frame of the object through a refinement operation based on the interval according to the initial regression result, so as to obtain a first candidate frame; combining and comparing the first candidate frame with the truth frame, and screening to obtain a second candidate frame; and inputting the pre-trained three-dimensional target recognition network according to the foreground point mask, the reference index and the second candidate frame to obtain a moving target recognition result.
In one embodiment, the foreground point mask consists of the foreground points whose sigmoid scores are greater than a threshold value.
For specific limitations of the three-dimensional moving object detection apparatus, reference may be made to the limitations of the three-dimensional moving object detection method above, which are not repeated here. Each module in the above three-dimensional moving object detection apparatus may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in, or independent of, a processor in the computer device in hardware form, or stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a terminal, and the internal structure of which may be as shown in fig. 4. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage media. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a three-dimensional moving object detection method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, can also be keys, a track ball or a touch pad arranged on the shell of the computer equipment, and can also be an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the structure shown in fig. 4 is only a block diagram of a portion of the structure related to the present solution and does not constitute a limitation on the computer device to which the present solution applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In an embodiment a computer device is provided comprising a memory storing a computer program and a processor implementing the steps of the method of the above embodiments when the computer program is executed.
In one embodiment, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements the steps of the method of the above embodiments.
Those skilled in the art will appreciate that all or part of the processes of the methods described above may be accomplished by a computer program instructing relevant hardware, the computer program being stored on a non-volatile computer-readable storage medium, and the computer program, when executed, may include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above examples merely represent several embodiments of the present application, and while their description is relatively specific and detailed, they are not to be construed as limiting the scope of the invention. It should be noted that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be subject to the appended claims.