Disclosure of Invention
In view of the above, embodiments of the present disclosure provide an optical flow calculation method, an optical flow calculation apparatus, and an electronic device, which at least partially solve the problems in the prior art.
In a first aspect, an embodiment of the present disclosure provides an optical flow calculation method, including:
inputting a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information, wherein N is a numerical value smaller than t;
based on the first optical flow information, performing optical flow calculation on a second image set with a time interval of [ t-M, t + M ] in the target video by using a second prediction network in series with the first prediction network to obtain second optical flow information, wherein M is a numerical value smaller than N;
adjusting M, N and t values to make all video frames in the target video undergo optical flow calculation through the first prediction network and the second prediction network;
after the video frames are subjected to optical flow calculation through the first prediction network and the second prediction network, the optical flow value of the target video is determined based on second optical flow information obtained through the second prediction network.
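As an illustrative, non-limiting sketch of the steps above, the coarse-to-fine cascade can be outlined in the following Python, where `coarse_network` and `fine_network` are hypothetical one-dimensional stand-ins for the first and second prediction networks (the actual network internals are described later in the specification):

```python
# Illustrative sketch (not the claimed implementation): a coarse-to-fine
# cascade of two "prediction networks", each stubbed as a simple
# frame-difference estimator over 1-D "frames". All names are hypothetical.

def coarse_network(frames):
    """First prediction network stub: mean frame difference over a wide window."""
    diffs = [b - a for a, b in zip(frames, frames[1:])]
    return sum(diffs) / len(diffs)  # first optical flow information (scalar stub)

def fine_network(frames, prior_flow):
    """Second prediction network stub: refines the prior on a narrower window."""
    diffs = [b - a for a, b in zip(frames, frames[1:])]
    local = sum(diffs) / len(diffs)
    return 0.5 * (prior_flow + local)  # second optical flow information

def cascade_flow(video, t, N, M):
    assert M < N < t, "the method requires M < N and N < t"
    first_set = video[t - N : t + N + 1]   # time interval [t-N, t+N]
    second_set = video[t - M : t + M + 1]  # time interval [t-M, t+M]
    flow1 = coarse_network(first_set)
    return fine_network(second_set, flow1)

video = [float(i * i) for i in range(12)]  # toy 1-D "frames"
flow = cascade_flow(video, t=6, N=4, M=2)
print(flow)
```

The essential structural point illustrated is only the nesting of the two time intervals and the passing of the first network's output into the second.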
According to a specific implementation manner of the embodiment of the present disclosure, the determining an optical flow value of the target video based on second optical flow information obtained by the second prediction network includes:
based on the second optical flow information, performing optical flow calculation on a third image set with a time interval of [ t-L, t + L ] in the target video by using a third prediction network in series with the second prediction network to obtain third optical flow information, wherein L is a value smaller than M;
determining optical flow values for the target video based on the third optical flow information.
According to a specific implementation manner of the embodiment of the present disclosure, after the video frames are subjected to optical flow calculation through the first prediction network and the second prediction network and the optical flow value of the target video is determined based on the second optical flow information obtained through the second prediction network, the method further includes:
setting different loss functions for the first prediction network and the second prediction network;
training the first predictive network and the second predictive network based on the loss function;
and calculating the optical flow information of the video to be predicted by utilizing the trained first prediction network and the trained second prediction network.
According to a specific implementation manner of the embodiment of the present disclosure, the inputting a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information includes:
setting an image association layer in the first prediction network;
extracting image features of the first image set based on the image association layer;
determining the correlation of the extracted image features of the first image set in a spatial convolution operation mode;
determining whether to compute the first optical flow information based on a correlation of image features of the first set of images.
According to a specific implementation manner of the embodiment of the present disclosure, the inputting a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information includes:
setting a plurality of deconvolution ReLU layers in the first prediction network;
and for each deconvolution ReLU layer, inputting the output of the preceding layer, and simultaneously inputting the low-scale optical flow predicted by the preceding layer and the feature layer in the corresponding module.
According to a specific implementation manner of the embodiment of the present disclosure, the performing, based on the first optical flow information, optical flow calculation on a second image set with a time interval [ t-M, t + M ] in a target video by using a second prediction network in series with the first prediction network includes:
setting a plurality of convolutional layers in the second prediction network;
performing image feature extraction on the second image set based on the plurality of convolutional layers;
determining optical flow information for the second set of images based on the extracted features of the second set of images.
According to a specific implementation manner of the embodiment of the present disclosure, the performing image feature extraction on the second image set based on the plurality of convolutional layers includes:
arranging a plurality of convolutional layers in series;
arranging sampling layers between the serially connected convolutional layers, wherein the number of sampling layers is one less than the number of convolutional layers;
and taking the final result calculated by the sequentially and serially arranged convolutional layers and sampling layers as the image features of the second image set.
According to a specific implementation manner of the embodiment of the present disclosure, the performing, based on the first optical flow information, optical flow calculation on a second image set with a time interval [ t-M, t + M ] in a target video by using a second prediction network in series with the first prediction network includes:
acquiring a first feature matrix and a second feature matrix representing the first optical flow information and the second image set respectively;
normalizing the first feature matrix and the second feature matrix to obtain a third feature matrix;
and taking the third feature matrix as the input of the second prediction network to predict the optical flow information of the second image set.
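One plausible reading of the normalization step above can be sketched as follows: min-max normalize each feature matrix to [0, 1] and then concatenate them into a third matrix to be fed to the second prediction network. The combination rule (row-wise concatenation) is an assumption for illustration, not stated by the disclosure:

```python
# Hedged sketch: normalize the first feature matrix (optical flow) and the
# second feature matrix (image set) to [0, 1], then concatenate row-wise to
# form the third feature matrix. The concatenation rule is an assumption.

def min_max_normalize(matrix):
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1.0  # avoid division by zero for constant matrices
    return [[(v - lo) / span for v in row] for row in matrix]

def fuse_features(first, second):
    return min_max_normalize(first) + min_max_normalize(second)  # row-wise concat

flow_feat = [[0.0, 2.0], [4.0, 6.0]]       # first feature matrix
image_feat = [[10.0, 20.0], [30.0, 40.0]]  # second feature matrix
third = fuse_features(flow_feat, image_feat)
print(third)
```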
According to a specific implementation manner of the embodiment of the present disclosure, the predicting optical flow information of the second image set by using the third feature matrix as an input of the second prediction network includes:
calculating the third feature matrix by using a convolution layer, a batch normalization layer and a ReLU layer which are serially arranged in the second prediction network to obtain an optical flow calculation result;
and taking the optical flow calculation result as second optical flow information predicted by the second prediction network.
According to a specific implementation manner of the embodiment of the present disclosure, before the first image set with a time interval of [ t-N, t + N ] in the target video is input into the first prediction network and the first optical flow information is obtained, the method further includes:
performing image correction on the images in the first image set by using the formula v(out) = v(in)^γ, wherein v(in) is the image before correction, v(out) is the image after correction, and γ is a correction coefficient between 0 and 1.
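The recited gamma correction v(out) = v(in)^γ can be applied per pixel; the following minimal sketch assumes intensities normalized to [0, 1], which is a convention chosen here for illustration:

```python
# Gamma correction as recited: v_out = v_in ** gamma, with gamma in (0, 1).
# Operating on [0, 1]-normalized intensities is an assumption of this sketch.

def gamma_correct(pixels, gamma=0.5):
    assert 0.0 < gamma < 1.0, "the disclosure uses a coefficient between 0 and 1"
    return [v ** gamma for v in pixels]

image = [0.0, 0.25, 0.5, 1.0]          # toy normalized pixel values
corrected = gamma_correct(image, gamma=0.5)
print(corrected)  # gamma < 1 brightens mid-tones while fixing 0 and 1
```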
In a second aspect, an embodiment of the present disclosure provides an optical flow calculation apparatus, including:
the first input module is used for inputting a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information, wherein N is a numerical value smaller than t;
the second input module is used for carrying out optical flow calculation on a second image set with a time interval of [ t-M, t + M ] in the target video by utilizing a second prediction network in series with the first prediction network based on the first optical flow information to obtain second optical flow information, wherein M is a numerical value smaller than N;
an adjusting module, configured to perform optical flow calculation on all video frames in the target video through the first prediction network and the second prediction network by adjusting M, N and t values;
and the execution module is used for determining the optical flow value of the target video based on second optical flow information obtained by the second prediction network after the optical flow calculation is carried out on the video frames through the first prediction network and the second prediction network.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the optical flow calculation method of any one of the first aspects or any implementation manner of the first aspect.
In a fourth aspect, the disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the optical flow calculation method in the first aspect or any implementation manner of the first aspect.
In a fifth aspect, the present disclosure also provides a computer program product including a computer program stored on a non-transitory computer-readable storage medium, the computer program including program instructions that, when executed by a computer, cause the computer to perform the optical flow calculation method in the first aspect or any implementation manner of the first aspect.
The optical flow calculation scheme in the embodiment of the disclosure comprises the steps of inputting a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information, wherein N is a numerical value smaller than t;
based on the first optical flow information, performing optical flow calculation on a second image set with a time interval of [t-M, t+M] in the target video by using a second prediction network in series with the first prediction network to obtain second optical flow information, wherein M is a numerical value smaller than N; adjusting the M, N and t values so that all video frames in the target video undergo optical flow calculation through the first prediction network and the second prediction network; and after the video frames are subjected to optical flow calculation through the first prediction network and the second prediction network, determining the optical flow value of the target video based on the second optical flow information obtained through the second prediction network. By this scheme, the optical flow information of an image can be calculated accurately.
Detailed Description
The embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
The embodiments of the present disclosure are described below with specific examples, and other advantages and effects of the present disclosure will be readily apparent to those skilled in the art from the disclosure in the specification. It is to be understood that the described embodiments are merely some, rather than all, of the embodiments of the disclosure. The disclosure may be embodied or carried out in various other specific embodiments, and various modifications and changes may be made in the details within the description without departing from the spirit of the disclosure. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
It is noted that various aspects of the embodiments are described below within the scope of the appended claims. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the disclosure, one skilled in the art should appreciate that one aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method practiced using any number of the aspects set forth herein. Additionally, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to one or more of the aspects set forth herein.
It should be noted that the drawings provided in the following embodiments are only for illustrating the basic idea of the present disclosure, and the drawings only show the components related to the present disclosure rather than the number, shape and size of the components in actual implementation, and the type, amount and ratio of the components in actual implementation may be changed arbitrarily, and the layout of the components may be more complicated.
In addition, in the following description, specific details are provided to facilitate a thorough understanding of the examples. However, it will be understood by those skilled in the art that the aspects may be practiced without these specific details.
The embodiment of the disclosure provides an optical flow calculation method. The optical flow calculation method provided by the present embodiment may be executed by a calculation apparatus, which may be implemented as software, or implemented as a combination of software and hardware, and which may be integrally provided in a server, a terminal device, or the like.
Referring to fig. 1, an optical flow calculation method provided by the embodiment of the present disclosure includes the following steps:
s101, inputting a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information, wherein N is a numerical value smaller than t.
The target video records information of different objects over a period of time; by analyzing the movement information of the objects present in the target video, the optical flow information of the objects contained in the target video can be determined.
The first image set comprises images captured of the same or a similar scene within a short time interval. As one case, the first image set may be a plurality of adjacent video frames in a piece of video. Since the images in the first image set are adjacent in the time dimension, optical flow information of neighboring images within neighboring time intervals can be calculated.
To facilitate the computation of optical flow information between the first set of images, referring to fig. 2, a first prediction network may be provided, which may be a neural network architecture provided based on a convolutional neural network. For example, the first prediction network may include a convolutional layer, a pooling layer, a sampling layer.
Each convolutional layer is mainly defined by the size of its convolution kernels and the number of input feature maps. Each convolutional layer may comprise a plurality of feature maps of the same size; the feature maps within the same layer share weights, and the convolution kernels within each layer have a consistent size. The convolutional layer performs convolution calculation on the input image and extracts features of the input image.
A sampling layer may be connected after the feature extraction of the convolutional layer. The sampling layer computes local averages of the input image and performs secondary feature extraction; connecting the sampling layer to the convolutional layer helps ensure that the neural network model is robust to variations in the input image.
To accelerate the training of the first prediction network, a pooling layer may be arranged after the convolutional layer. The pooling layer processes the output of the convolutional layer by maximum pooling, which better extracts invariant features of the input image.
In addition, in order to perform the correlation calculation with respect to the first image set, an image association layer may be provided in the first prediction network. Image features of the first image set may be extracted by the image association layer, and the correlation between the extracted image features of the images in the first image set may be determined by a spatial convolution operation, so as to determine whether to calculate the first optical flow information based on that correlation.
Alternatively, a plurality of deconvolution ReLU layers may be provided in the first prediction network, and for each deconvolution ReLU layer, the output of the layer preceding the deconvolution ReLU layer is input, and at the same time, the low-scale optical flow predicted by the layer preceding the deconvolution ReLU layer and the feature layer in the corresponding module are also input, so as to ensure that when each deconvolution layer is refined, deep abstract information and shallow image information can be obtained, and information lost due to reduction of the feature space scale can be compensated.
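The refinement idea behind each deconvolution ReLU layer — combining the previous layer's output with an upsampled low-scale flow and a skip feature so that detail lost at coarse scales is recovered — can be sketched as follows. The 2x nearest-neighbor upsampling and additive fusion are illustrative assumptions, not the disclosure's exact operations:

```python
# Hedged 1-D sketch of one refinement step: upsample the lower-scale flow,
# add the previous layer's output and a skip feature, then apply ReLU.

def upsample_2x(flow):
    return [v for v in flow for _ in (0, 1)]  # nearest-neighbor duplication

def relu(values):
    return [max(0.0, v) for v in values]

def refine(prev_output, low_scale_flow, skip_feature):
    up = upsample_2x(low_scale_flow)
    assert len(up) == len(prev_output) == len(skip_feature)
    return relu([p + u + s for p, u, s in zip(prev_output, up, skip_feature)])

coarse_flow = [1.0, -2.0]      # low-scale flow predicted by the previous layer
prev = [0.5, 0.5, 0.5, 0.5]    # previous layer's output
skip = [0.0, 0.0, 1.0, 1.0]    # feature layer from the corresponding module
print(refine(prev, coarse_flow, skip))
```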
When the input video frame is selected, a first image set with a time interval of [ t-N, t + N ] in the target video can be input into the first prediction network, and first optical flow information is obtained, wherein N is a numerical value smaller than t.
S102, based on the first optical flow information, optical flow calculation is carried out on a second image set with a time interval of [ t-M, t + M ] in the target video by utilizing a second prediction network in series with the first prediction network, and second optical flow information is obtained, wherein M is a numerical value smaller than N.
To further improve the accuracy of the optical flow calculation, referring to fig. 2, a second prediction network may also be provided in series with the first prediction network. The second predictive network may be a neural network architecture arranged based on a convolutional neural network. For example, the second prediction network may include a convolutional layer, a pooling layer, a sampling layer.
Each convolutional layer is mainly defined by the size of its convolution kernels and the number of input feature maps. Each convolutional layer may comprise a plurality of feature maps of the same size; the feature maps within the same layer share weights, and the convolution kernels within each layer have a consistent size. The convolutional layer performs convolution calculation on the input image and extracts features of the input image.
A sampling layer may be connected after the feature extraction of the convolutional layer. The sampling layer computes local averages of the input image and performs secondary feature extraction; connecting the sampling layer to the convolutional layer helps ensure that the neural network model is robust to variations in the input image.
To accelerate the training of the second prediction network, a pooling layer may be arranged after the convolutional layer. The pooling layer processes the output of the convolutional layer by maximum pooling, which better extracts invariant features of the input image.
In the process of extracting the image features of the second image set by using the second prediction network in series with the first prediction network, a plurality of convolutional layers may be provided in the second prediction network, and the image features of the images in the second image set may be extracted by the plurality of convolutional layers.
Specifically, when image feature extraction is performed on the images in the second image set based on the plurality of convolutional layers, the convolutional layers may be arranged in series, with a sampling layer arranged between each pair of consecutive convolutional layers, so that the number of sampling layers is one less than the number of convolutional layers. Finally, the result calculated by the sequentially and serially arranged convolutional layers and sampling layers is taken as the image features of the second image set.
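The recited topology — n convolutional layers in series with a sampling layer between each consecutive pair, hence n − 1 sampling layers — can be sketched structurally as follows, with placeholder layer tags standing in for the real operations:

```python
# Sketch of the layer arrangement only: "conv" and "sample" are placeholder
# tags, not real convolution or sampling operations.

def build_feature_extractor(num_conv):
    layers = []
    for i in range(num_conv):
        layers.append(("conv", i))
        if i < num_conv - 1:  # one fewer sampling layer than conv layers
            layers.append(("sample", i))
    return layers

stack = build_feature_extractor(4)
conv_count = sum(1 for kind, _ in stack if kind == "conv")
sample_count = sum(1 for kind, _ in stack if kind == "sample")
print(conv_count, sample_count)  # 4 convolutional layers, 3 sampling layers
```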
S103, adjusting M, N and t values to enable all video frames in the target video to pass through the first prediction network and the second prediction network for optical flow calculation.
By reading the total duration of the target video, M, N and t can be determined based on the total duration, and by adjusting M, N and t, all video frames in the target video can be subjected to optical flow calculation through the first prediction network and the second prediction network, so that the optical flow calculation value of the video frames in the target video is more accurate.
When the values of M, N and t are selected for the first and second prediction networks, the values of t in the first and second prediction networks may be the same or different.
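The scheduling idea of step S103 — slide the center t so that the union of the windows covers every frame — can be sketched as follows. The stride of 2M + 1 (tiling the narrower [t-M, t+M] windows, which the wider [t-N, t+N] windows then also cover, clipped at the video boundaries) is an assumption chosen for illustration:

```python
# Hedged sketch of window scheduling: choose centers t so that the
# [t-M, t+M] windows (clipped at video boundaries) cover all frames.

def schedule_centers(num_frames, N, M):
    assert 0 < M < N
    centers, covered = [], set()
    t = M
    while True:
        t = min(t, num_frames - 1 - M)  # clamp the final window inside the video
        centers.append(t)
        covered.update(range(t - M, t + M + 1))  # frames refined at this center
        if t + M >= num_frames - 1:
            break
        t += 2 * M + 1  # stride that tiles the narrow windows edge to edge

    return centers, covered

centers, covered = schedule_centers(20, N=3, M=1)
print(centers, len(covered))
```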
S104, after the optical flow calculation is carried out on the video frames through the first prediction network and the second prediction network, the optical flow value of the target video is determined based on second optical flow information obtained by the second prediction network.
After the second optical flow information is obtained, the optical flow value of the target video may be determined based on the second optical flow information as it is, or the second optical flow information may be processed again to determine the optical flow value of the target video from the processed second optical flow information.
Alternatively, the first prediction network and the second prediction network may be trained based on the second optical flow information, and a loss function may be provided during the training, so that the accuracy of the second optical flow information calculated by the first prediction network and the second prediction network may be determined by the loss function. In this way, through a plurality of times of iterative training calculation, when the accuracy of the second optical flow information meets the requirement, the training of the first prediction network and the second prediction network is completed.
After the training of the first prediction network and the second prediction network is completed, the trained first prediction network and the trained second prediction network can be used to predict optical flow information of the video frames in the target video.
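The iterative train-and-evaluate loop described above can be illustrated with a deliberately minimal toy: a single-parameter linear "network" trained by gradient descent on a squared-error loss until an accuracy requirement is met. The linear model, the squared-error loss, and the learning rate are placeholder assumptions, not the disclosure's networks or loss functions:

```python
# Toy illustration of iterative training with a stopping accuracy threshold.
# w is the lone trainable parameter of a hypothetical "prediction network".

def train(pairs, lr=0.01, steps=200, target_loss=1e-4):
    w = 0.0
    for _ in range(steps):
        loss = sum((w * x - y) ** 2 for x, y in pairs) / len(pairs)
        if loss < target_loss:  # accuracy requirement met: training is complete
            break
        grad = sum(2 * (w * x - y) * x for x, y in pairs) / len(pairs)
        w -= lr * grad
    return w, loss

pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # true mapping: output = 2 * input
w, final_loss = train(pairs)
print(w, final_loss)
```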
According to the scheme of the present disclosure, different data processing tasks can be assigned to different prediction networks, and the coordinated arrangement of the plurality of prediction networks improves the accuracy of optical flow prediction.
According to a specific implementation manner of the embodiment of the present disclosure, referring to fig. 2 and fig. 3, in the process of determining the optical flow value of the target video based on the second optical flow information obtained by the second prediction network, the method may further include the following steps:
s301, based on the second optical flow information, optical flow calculation is carried out on a third image set with a time interval [ t-L, t + L ] in the target video by utilizing a third prediction network in series with the second prediction network, and third optical flow information is obtained, wherein L is a value smaller than M.
The third prediction network may be a neural network architecture arranged based on a convolutional neural network. For example, the third prediction network may include a convolutional layer, a pooling layer, and a sampling layer.
Each convolutional layer is mainly defined by the size of its convolution kernels and the number of input feature maps. Each convolutional layer may comprise a plurality of feature maps of the same size; the feature maps within the same layer share weights, and the convolution kernels within each layer have a consistent size. The convolutional layer performs convolution calculation on the input image and extracts features of the input image.
A sampling layer may be connected after the feature extraction of the convolutional layer. The sampling layer computes local averages of the input image and performs secondary feature extraction; connecting the sampling layer to the convolutional layer helps ensure that the neural network model is robust to variations in the input image.
To accelerate the training of the third prediction network, a pooling layer may be arranged after the convolutional layer. The pooling layer processes the output of the convolutional layer by maximum pooling, which better extracts invariant features of the input image.
After the third prediction network is set, optical flow calculation may be performed on a third image set with a time interval of [ t-L, t + L ] in the target video by using a third prediction network in series with the second prediction network based on the second optical flow information, so as to obtain third optical flow information, where L is a value smaller than M.
S302, determining an optical flow value of the target video based on the third optical flow information.
After the third optical flow information is obtained, the optical flow value of the target video may be determined based on the third optical flow information as it is, or the third optical flow information may be processed again to determine the optical flow value of the target video from the processed third optical flow information.
Alternatively, the first prediction network, the second prediction network, and the third prediction network may be trained based on the third optical flow information, and a loss function may be provided during the training, so that the accuracy of the third optical flow information calculated by the first prediction network, the second prediction network, and the third prediction network may be determined by the loss function. In this way, through a plurality of times of iterative training calculations, when the accuracy of the third optical flow information meets the requirement, the training of the first prediction network, the second prediction network and the third prediction network is completed.
After the training of the first prediction network, the second prediction network and the third prediction network is completed, the trained networks can be used to predict optical flow information of the video frames in the target video.
Referring to fig. 4, according to a specific implementation manner of the embodiment of the present disclosure, after the optical flow calculation is performed on the video frames through the first prediction network and the second prediction network and the optical flow value of the target video is determined based on the second optical flow information obtained by the second prediction network, the method further includes:
s401, setting different loss functions for the first prediction network and the second prediction network;
s402, training the first prediction network and the second prediction network based on the loss function;
and S403, calculating optical flow information of the video to be predicted by using the trained first prediction network and second prediction network.
According to a specific implementation manner of the embodiment of the present disclosure, the inputting a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information may include the following steps:
first, an image association layer is set in the first prediction network.
By setting the image association layer, the association relation between the images in the first image set can be calculated. As an example, the image association layer may be implemented by setting a similarity calculation function.
Next, based on the image association layer, image features of the first set of images are extracted.
Through the image association layer, before the similarity calculation, image features of the images in the first image set may be extracted first. For example, the image features of the first image set may be extracted by arranging a convolutional layer in the image association layer and setting a specific convolution kernel in that convolutional layer.
Next, the correlation of the extracted image features of the first set of images is determined by means of a spatial convolution operation.
The image features of the first image set can be described in a feature matrix manner, and at this time, only the correlation between the feature matrices corresponding to the first image set needs to be calculated, so that the correlation between the images in the first image set can be obtained.
Finally, it is determined whether to compute the first optical flow information based on a correlation of image features of the first set of images.
After the correlation between the images in the first image set is obtained, normalization processing may be performed on the correlation, and whether to calculate the first optical flow information may be decided by determining whether the normalized correlation is greater than a preset value. For example, when the normalized correlation is greater than the preset value, the first optical flow information is calculated; when it is not greater than the preset value, the first optical flow information is not calculated.
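The gating step above can be sketched as follows, with cosine similarity standing in for the spatial-convolution correlation and a 0.5 threshold standing in for the preset value; both are assumptions for illustration:

```python
# Hedged sketch: compute a correlation between image feature vectors,
# normalize it to [0, 1], and compute optical flow only when it exceeds a
# preset value. Cosine similarity and the 0.5 threshold are assumptions.
import math

def normalized_correlation(feat_a, feat_b):
    dot = sum(a * b for a, b in zip(feat_a, feat_b))
    norm = math.sqrt(sum(a * a for a in feat_a)) * math.sqrt(sum(b * b for b in feat_b))
    cosine = dot / norm if norm else 0.0
    return (cosine + 1.0) / 2.0  # map [-1, 1] onto [0, 1]

def should_compute_flow(feat_a, feat_b, preset=0.5):
    return normalized_correlation(feat_a, feat_b) > preset

similar = should_compute_flow([1.0, 2.0, 3.0], [1.1, 2.1, 2.9])
dissimilar = should_compute_flow([1.0, 0.0, 0.0], [-1.0, 0.0, 0.0])
print(similar, dissimilar)
```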
According to a specific implementation manner of the embodiment of the present disclosure, the inputting a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information includes: setting a plurality of deconvolution ReLU layers in the first prediction network; for each deconvolution ReLU layer, the output of the layer before the deconvolution ReLU layer is input, and simultaneously, the predicted low-scale optical flow of the layer before the deconvolution ReLU layer and the feature layer in the corresponding module are also input, so that when each deconvolution layer is refined, deep abstract information and shallow image information can be obtained, and information lost due to reduction of feature space scale is made up.
According to a specific implementation manner of the embodiment of the present disclosure, the performing, based on the first optical flow information, optical flow calculation on a second image set with a time interval [ t-M, t + M ] in a target video by using a second prediction network in series with the first prediction network includes: setting a plurality of convolutional layers in the second prediction network; performing image feature extraction on the second image set based on the plurality of convolutional layers; determining optical flow information for the second set of images based on the extracted features of the second set of images.
According to a specific implementation manner of the embodiment of the present disclosure, the performing image feature extraction on the second image set based on the plurality of convolutional layers includes: arranging a plurality of convolutional layers in series; arranging sampling layers between the serially connected convolutional layers, wherein the number of sampling layers is one less than the number of convolutional layers; and taking the final result calculated by the sequentially and serially arranged convolutional layers and sampling layers as the image features of the second image set.
According to a specific implementation manner of the embodiment of the present disclosure, the performing, based on the first optical flow information, optical flow calculation on a second image set with a time interval [ t-M, t + M ] in a target video by using a second prediction network in series with the first prediction network includes: acquiring a first feature matrix and a second feature matrix representing the first optical flow information and the second image set respectively; normalizing the first feature matrix and the second feature matrix to obtain a third feature matrix; and taking the third feature matrix as the input of the second prediction network to predict the optical flow information of the second image set.
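One way the normalization of the two feature matrices into a third could look is sketched below. The choice of z-score normalization and channel-wise stacking is an assumption of this sketch: the disclosure only states that the first and second feature matrices are normalized to obtain the third.

```python
import numpy as np

def fuse_features(flow_feat, image_feat):
    # Normalise the feature matrix of the first optical flow information
    # and the feature matrix of the second image set, then stack them
    # into the third feature matrix fed to the second prediction network.
    def zscore(m):
        return (m - m.mean()) / (m.std() + 1e-8)
    return np.concatenate([zscore(flow_feat), zscore(image_feat)], axis=0)
```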
According to a specific implementation manner of the embodiment of the present disclosure, the predicting optical flow information of the second image set by taking the third feature matrix as an input of the second prediction network includes: calculating the third feature matrix by using a convolutional layer, a batch normalization layer and a ReLU layer which are serially arranged in the second prediction network to obtain an optical flow calculation result; and taking the optical flow calculation result as the second optical flow information predicted by the second prediction network.
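The serial convolution → batch normalization → ReLU calculation can be sketched as below. The 1×1 convolution (expressed as a channel-mixing matrix) and the function names are assumptions used to keep the sketch self-contained; a real network would use learned spatial kernels.

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # Per-channel normalisation over the spatial dimensions of (C, H, W).
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def conv_bn_relu(x, weight):
    # 1x1 convolution as a channel-mixing matrix (C_out, C_in),
    # followed by batch normalization and ReLU, applied in series.
    y = np.tensordot(weight, x, axes=([1], [0]))  # -> (C_out, H, W)
    return np.maximum(batch_norm(y), 0.0)
```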
According to a specific implementation manner of the embodiment of the present disclosure, before the first image set with a time interval of [ t-N, t + N ] in the target video is input into the first prediction network to obtain the first optical flow information, the method further includes: performing image correction on the images in the first image set by using the formula V(out) = V(in)^γ, wherein V(in) is the image before correction, V(out) is the image after correction, and γ is a correction coefficient between 0 and 1.
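The gamma correction above is straightforward to illustrate on a pixel intensity normalised to [0, 1]; the function name is an illustrative choice.

```python
def gamma_correct(v_in, gamma):
    # V(out) = V(in) ** gamma; for intensities normalised to [0, 1],
    # a gamma between 0 and 1 brightens dark regions of the image
    # before it enters the first prediction network.
    return v_in ** gamma
```

For example, with γ = 0.5 an intensity of 0.25 maps to 0.5, while 0 and 1 remain fixed, so dark regions are lifted without clipping highlights.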
Corresponding to the above method embodiment, referring to fig. 5, the disclosed embodiment further provides an optical flow calculation apparatus 50, including:
a first input module 501, configured to input a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information, where N is a numerical value smaller than t;
a second input module 502, configured to perform optical flow calculation on a second image set with a time interval [ t-M, t + M ] in the target video by using a second prediction network in series with the first prediction network based on the first optical flow information, so as to obtain second optical flow information, where M is a numerical value smaller than N;
an adjusting module 503, configured to perform optical flow calculation on all video frames in the target video through the first prediction network and the second prediction network by adjusting M, N and t values;
an executing module 504, configured to determine an optical flow value of the target video based on second optical flow information obtained by the second prediction network after optical flow calculation is performed on each of the video frames through the first prediction network and the second prediction network.
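The window adjustment performed by the adjusting module 503 can be sketched as a sliding pair of frame windows; the function name `windows` and the exact traversal of t are assumptions of this sketch, since the disclosure only requires that adjusting M, N and t covers all video frames.

```python
def windows(num_frames, n, m):
    # Slide the centre t across the video so that every frame is covered
    # by a coarse window [t-n, t+n] for the first prediction network and
    # a fine window [t-m, t+m] (m < n) for the second prediction network.
    spans = []
    for t in range(n, num_frames - n):
        coarse = list(range(t - n, t + n + 1))
        fine = list(range(t - m, t + m + 1))
        spans.append((coarse, fine))
    return spans
```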
The apparatus shown in fig. 5 may correspondingly execute the content of the above method embodiment; for parts not described in detail in this embodiment, reference is made to the description of the method embodiment, which is not repeated here.
Referring to fig. 6, an embodiment of the present disclosure also provides an electronic device 60, including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the optical flow calculation method in the foregoing method embodiments.
The disclosed embodiments also provide a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the optical flow calculation method in the foregoing method embodiments.
The disclosed embodiments also provide a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to perform the optical flow calculation method in the aforementioned method embodiments.
Referring now to FIG. 6, a schematic diagram of an electronic device 60 suitable for use in implementing embodiments of the present disclosure is shown. The electronic devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., car navigation terminals), and the like, and fixed terminals such as digital TVs, desktop computers, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 60 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 60 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, image sensor, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 60 to communicate with other devices wirelessly or by wire to exchange data. While the figures illustrate an electronic device 60 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: input a first image set with a time interval of [ t-N, t + N ] in a target video into a first prediction network to obtain first optical flow information; perform, based on the first optical flow information, optical flow calculation on a second image set with a time interval of [ t-M, t + M ] in the target video by using a second prediction network in series with the first prediction network to obtain second optical flow information; adjust the M, N and t values so that all video frames in the target video undergo optical flow calculation through the first prediction network and the second prediction network; and, after the video frames have undergone optical flow calculation through the first prediction network and the second prediction network, determine an optical flow value of the target video based on the second optical flow information obtained by the second prediction network.
Computer program code for carrying out operations for aspects of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not in some cases constitute a limitation of the unit itself; for example, the first input module may also be described as a "module for inputting a first image set in a target video into a first prediction network".
It should be understood that portions of the present disclosure may be implemented in hardware, software, firmware, or a combination thereof.
The above description is only for the specific embodiments of the present disclosure, but the scope of the present disclosure is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present disclosure should be covered within the scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.