Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description, the claims, and the accompanying drawings of this application are used to distinguish between different objects, not to describe a particular order. Furthermore, the terms "include" and "have," and any variations thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to the listed steps or elements, but may further include steps or elements that are not listed or that are inherent to such a process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of this phrase in various places in the specification do not necessarily all refer to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is understood, both explicitly and implicitly, by those skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 provides a computer device, which may specifically include: a processor, a memory, a camera, and a display screen. These components may be connected through a bus or in other ways; the application does not limit the specific manner of connection. In practical applications, the computer device may be a personal computer, a server, a tablet computer, or the like.
Referring to fig. 2, fig. 2 provides a data processing method of a communication data center. The method may be executed by a computer device that is disposed in the communication data center and that may be structured as shown in fig. 1. The method shown in fig. 2 includes the following steps:
Step S201, a computer device of the communication data center receives backup data sent by a notebook computer;
the backup data sent by the notebook computer can be sent through a mobile communication network, for example, through 5G, but also can be sent through other communication modes, for example, wifi or wired network.
Step S202, the computer device of the communication data center calls a classifier to classify the backup data to obtain the type of the backup data; if the backup data is determined to be video-type data, the backup data is delivered to a video processor for synthesis and mapping to obtain composite data and a mapping relationship;
the classifier may adopt a general classifier, such as a support vector machine, and of course, the type of the backup data may be obtained by performing classification processing through a deep neural network model.
The data may also be determined to be video data by other means, for example by the storage format of the backup data. Video data formats include, for example: MPEG, AVI, nAVI, ASF, MOV, 3GP, WMV, DivX, XviD, RM, and the like.
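The format-based alternative can be sketched as a simple extension lookup. The extension set below mirrors the formats named above but is an illustrative stand-in, not an exhaustive or authoritative list; the function name is an assumption.

```python
import os

# Illustrative extension set mirroring the video formats named in the text.
VIDEO_EXTENSIONS = {".mpeg", ".mpg", ".avi", ".navi", ".asf", ".mov",
                    ".3gp", ".wmv", ".divx", ".xvid", ".rm"}

def is_video_backup(filename: str) -> bool:
    """Classify backup data as video-type by its storage format (file extension)."""
    return os.path.splitext(filename.lower())[1] in VIDEO_EXTENSIONS
```

A production classifier would inspect the container header rather than trust the filename, but the extension check captures the idea of deciding the type from the storage format.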
Step S203, the computer device of the communication data center stores the composite data and the mapping relationship.
According to the above technical solution, the computer device of the communication data center receives backup data sent by a notebook computer; the computer device calls a classifier to classify the backup data, and if the backup data is determined to be video-type data, the backup data is delivered to a video processor for synthesis and mapping to obtain composite data and a mapping relationship; the computer device then stores the composite data and the mapping relationship. Because only the composite data and the mapping relationship are stored, rather than every frame of the backup data, the space occupied by data storage is reduced, and the cost of data storage is reduced.
In an optional scheme, delivering the backup data to the video processor for synthesis and mapping to obtain the composite data and the mapping relationship may specifically include:
the backup data are arranged in ascending order according to the number of frames to obtain a first sequence, RGB values of pictures of all frames in the first sequence form a three-dimensional matrix, and the three-dimensional matrix of the first picture of the first sequence is extracted1And a three-dimensional matrix of a second picture2(ii) a Combining three-dimensional matrices1The RGB values of the first column of pixels (i.e. the three-dimensional matrix)1The first column element value in three depth directions) to obtain a partial three-dimensional matrix1(ii) a By part of a three-dimensional matrix1Is of a basic size1From three-dimensional matrices2Cutting to the basic size1Same plurality of partial three-dimensional matrices2(ii) a Separately computing partial three-dimensional matrices1With a plurality of partial three-dimensional matrices2Obtaining a plurality of difference matrices if the difference matrix in the plurality of difference matrices1Is greater than a quantity threshold value, determining a difference matrix1Corresponding partial three-dimensional matrix2 1And part of the three-dimensional matrix1Similarly, a three-dimensional matrix is formed2Partial three-dimensional matrix of2 1Adding the cutting data to the three-dimensional matrix1Obtaining intermediate synthetic data; performing a composition operation on a third frame of picture in the first sequenceThe synthesis operation specifically comprises the following steps: combining the RGB values of the third frame of picture into a three-dimensional matrix3Cutting the RGB values of the front n columns or the rear n columns to obtain a plurality of partial three-dimensional matrixes3Forming a three-dimensional matrix of a plurality of portions3A partial three-dimensional matrix of3Cutting m intermediate partial matrices from the intermediate composite data as basic sizes, and dividing a partial three-dimensional matrix3Respectively carrying out difference operation with the m intermediate 
part matrixes to obtain m difference value matrixes, and if the difference value matrixes in the m difference value matrixes are different, obtaining the m difference value matrixesxIs greater than the number threshold, the three-dimensional matrix is divided into three-dimensional matrix3Difference matrix inxCorresponding partial three-dimensional matrix3Adding the cut data to the intermediate synthesized data, and performing a synthesizing operation on the pictures of the remaining frames in the first sequence once to obtain final synthesized data; and (3) taking the RGB values corresponding to the pixels forming the front y rows of the three-dimensional matrix by the RGB values of each frame of picture in the first sequence as index values, establishing a mapping relation between the frame number and the index values, storing the synthesized data and the mapping relation, and deleting the backup data. The above n is 1, 2 or 3. Y is 1, 2 or 3. Partial three-dimensional matrix2 1Subscript 2 in (1) indicates the frame number and superscript 1 indicates the number of the partial three-dimensional matrix.
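The synthesis procedure above can be sketched with numpy. This is a simplified illustration, not the claimed implementation: it assumes y = 1 (the index value is a single column), exact column matches (the zero-element threshold requires every element of the difference matrix to be zero), and a scene shifting in one direction; the function name and threshold convention are assumptions.

```python
import numpy as np

def synthesize(frames, zero_threshold):
    """Build composite data and a frame-number -> index-value mapping.

    frames: list of H x W x 3 uint8 arrays ordered by frame number (the first
    sequence). The index value of each frame is its first column of RGB values
    (y = 1); only columns not already present in the composite are appended.
    """
    y = 1
    composite = frames[0].astype(np.int16)
    mapping = {}
    for frame_no, frame in enumerate(frames):
        f = frame.astype(np.int16)
        mapping[frame_no] = f[:, :y, :].copy()   # index value: first y columns
        if frame_no == 0:
            continue
        key = f[:, :y, :]                         # partial matrix of this frame
        matched_col = None
        # difference of the key against every same-sized cut of the composite
        for c in range(composite.shape[1] - y + 1):
            diff = composite[:, c:c + y, :] - key
            if np.count_nonzero(diff == 0) > zero_threshold:
                matched_col = c
                break
        if matched_col is None:
            # no similar column found: append the whole frame
            composite = np.concatenate([composite, f], axis=1)
        else:
            overlap = composite.shape[1] - matched_col
            if overlap < f.shape[1]:
                # append only the columns this frame adds beyond the composite
                composite = np.concatenate([composite, f[:, overlap:, :]], axis=1)
    return composite.astype(np.uint8), mapping
```

For a scene of 10 columns filmed as width-5 windows shifting one column per frame, the composite reconstructs the 10-column scene while the per-frame mapping keeps only one index column per frame.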
The video data of the backup data may be video of a fixed scene shot while moving in a single direction, for example video of a second-hand house shot while moving in a single direction, or video of scenery shot while moving in a single direction. In such video, the pixel points of most data frames are the same, and each frame of the shot picture has the same number of pixel rows and columns. In this case the video data are synthesized to obtain composite data, and the data of a corresponding frame can then be obtained through the mapping relationship. The extraction method may specifically include: determining a current frame number i; obtaining the index value i corresponding to frame number i according to the mapping relationship; performing a volume difference operation on the composite data with the index value i as the volume difference kernel to obtain a plurality of volume difference values; if the number of zero elements of one volume difference value is greater than the quantity threshold, determining the volume difference input corresponding to that value as the pixel data of the initial y columns corresponding to frame number i; and extracting, starting from that volume difference input in the composite data, the RGB values of a set size (the set size may be, for example, the RGB values corresponding to z columns of pixels) as the RGB values of the pixels corresponding to frame number i.
The volume difference operation may specifically be as follows: the size of the volume difference kernel (that is, of index value i) is used as a window size; the composite data is cut based on the window size to obtain the RGB values of y columns of pixels at each position; each matrix so cut, corresponding to the RGB values of y columns of pixels, is taken as a volume difference input; and the difference between each volume difference input and index value i is computed to obtain a volume difference value.
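The extraction step can be sketched as the same sliding-window difference in reverse. As with the synthesis sketch, this is an illustration under assumptions (y = 1, exact matches, single-direction movement); the function name and the frame_width parameter (the "set size" above) are illustrative.

```python
import numpy as np

def extract_frame(composite, mapping, frame_no, frame_width, zero_threshold):
    """Recover one frame picture from the composite data via the mapping.

    Slides a window the size of the index value over the composite (the
    volume difference operation); where the difference with the index value
    is all zeros, that position is the frame's starting column, and
    frame_width columns (the set size) are read out from there.
    """
    key = mapping[frame_no].astype(np.int16)
    y = key.shape[1]
    comp = composite.astype(np.int16)
    for c in range(comp.shape[1] - y + 1):
        volume_difference = comp[:, c:c + y, :] - key   # volume difference value
        if np.count_nonzero(volume_difference == 0) > zero_threshold:
            return composite[:, c:c + frame_width, :]
    raise ValueError(f"no starting column found for frame {frame_no}")
```

The loop stops at the first window whose difference with the index value clears the zero-element threshold, so each lookup costs one pass over the composite's columns rather than a search over all stored frames.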
Referring to fig. 3, fig. 3 is a schematic diagram of composite data and frame pictures. Since the shooting scene is fixed (there is no moving object), when the camera moves in a single direction the scene effectively shifts within the composite data; the arrows in fig. 3 mark the starting column of the pixel points of each frame of data, taking uniform movement as an example. Referring to fig. 4, fig. 4 is a schematic diagram of the RGB three-dimensional matrix of a picture. The RGB values corresponding to the pixels of the first y columns may be the first y element values of the three-dimensional matrix of fig. 4, shown in gray in fig. 4; each square in fig. 4 represents an element value corresponding to the R value, G value, or B value of a pixel. Therefore, during extraction, the RGB values of all pixels of a frame can be extracted rapidly merely by comparison to determine the position of the starting pixel column corresponding to the frame number, after which the frame picture is displayed. Because the composite data does not store the data of repeated pixel points, the data volume of the video can be reduced, and the cost of data storage is reduced.
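The storage saving can be made concrete with a back-of-the-envelope calculation under the assumed conditions (fixed scene, uniform single-direction movement of one pixel column per frame; the function names and parameters are illustrative): storing every frame costs frames x height x width x 3 bytes, while the composite stores each scene column only once.

```python
def raw_bytes(num_frames: int, height: int, width: int) -> int:
    """Storage for every frame of uncompressed RGB backup data."""
    return num_frames * height * width * 3

def composite_bytes(num_frames: int, height: int, width: int,
                    shift_per_frame: int = 1) -> int:
    """Storage for the composite: the first frame plus only the new columns
    each later frame contributes (uniform single-direction movement)."""
    return height * (width + (num_frames - 1) * shift_per_frame) * 3
```

For example, 100 frames of 1080 x 1920 video with a one-column shift per frame take about 622 MB stored frame by frame but only about 6.5 MB as composite data, roughly a 95-fold reduction; the index values of the mapping relationship add only a small per-frame overhead on top of this.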
Embodiments of the present application also provide a computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to execute some or all of the steps of any one of the methods described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods as recited in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.