Detailed Description
In order that those skilled in the art will better understand the present disclosure, a technical solution in exemplary embodiments of the present disclosure will be clearly and completely described in the following with reference to the accompanying drawings in exemplary embodiments of the present disclosure.
In some of the flows described in the specification and claims of this disclosure and in the foregoing figures, a number of operations are included that occur in a particular order, but it should be understood that the operations may be performed out of the order in which they appear herein, or in parallel. Reference numbers of operations, such as 101, 102, etc., are merely used to distinguish between the various operations, and the reference numbers themselves do not represent any order of execution. In addition, the flows may include more or fewer operations, and the operations may be performed sequentially or in parallel. It should be noted that the descriptions of "first" and "second" herein are used to distinguish different messages, devices, modules, etc., and do not represent a sequence, nor do they limit the "first" and the "second" to being different types.
Technical solutions in exemplary embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings in exemplary embodiments of the present disclosure, and it is apparent that the described exemplary embodiments are only some embodiments of the present disclosure, not all embodiments. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of the disclosure.
Fig. 1 shows a flowchart of an object information pushing method according to an embodiment of the present disclosure. The method may include steps S101, S102, S103 and S104.
In step S101, a second text having a similarity to the first text greater than a preset similarity threshold is searched for among texts provided by users other than the current user based on the first text provided for the current user.
In step S102, an object related to the second text, which is actually acquired by the user who provided the second text, is searched for.
In step S103, a first target object is selected from the searched objects related to the second text according to the similarity between the second text and the first text.
In step S104, information of the first target object is pushed to the current user.
In one embodiment of the present disclosure, the first text provided by the current user may be a question posed by the current user at a certain platform. Text provided by users other than the current user may be a question posed by users other than the current user on the platform. In embodiments of the present disclosure, by finding a second text (question of the other user) that is most similar to the first text (question of the current user), the relevant user of the second text is found instead of the answer to which the second text (question) is directed. A first target object associated with the second text (e.g., a product that the associated user has actually purchased) is obtained through the behavior of the associated user. Thus, the information of the first target object related to the second text can be pushed to the current user.
In one embodiment of the present disclosure, there may be a plurality of second texts (questions of other users) most similar to the first text (question of the current user), so there may be a plurality of users who provide the second texts, and thus there may be a plurality of first target objects related to the second texts. Thus, information of a plurality of first target objects related to a plurality of second texts from different other users can be pushed to the current user.
In one embodiment of the present disclosure, three basic techniques, "latent semantic analysis (Latent Semantic Indexing)", "doc2vec" and "cosine similarity (Cosine Similarity)", may be employed to determine the similarity of the second text to the first text.
In one embodiment of the present disclosure, TF-IDF (Term Frequency-Inverse Document Frequency) values of each word in the text/dictionary dimensions may be calculated by the latent semantic analysis method, and dimension reduction is performed by singular value decomposition to obtain a reduced word-document matrix. The most similar text is then found by calculating the cosine similarity of the new text to each column of the matrix.
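As a hypothetical illustration of the latent semantic analysis step described above (the disclosure does not prescribe an implementation), the following sketch computes TF-IDF values, performs dimension reduction by singular value decomposition, and compares a new text against each column of the reduced matrix by cosine similarity. The whitespace tokenization and the choice of k latent dimensions are assumptions for illustration only.

```python
import math
from collections import Counter

import numpy as np


def tfidf_matrix(docs):
    """Build a term-document TF-IDF matrix (terms x documents)."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    n = len(docs)
    # document frequency of each term
    df = Counter(w for d in docs for w in set(d.split()))
    m = np.zeros((len(vocab), n))
    for j, d in enumerate(docs):
        tf = Counter(d.split())
        for w, c in tf.items():
            m[index[w], j] = (c / len(d.split())) * math.log(n / df[w])
    return m


def most_similar_doc(docs, query, k=2):
    """Index of the document most similar to `query` in k-dim LSI space.

    The query is folded into the matrix before the SVD, a simplification
    of the incremental fold-in used in practice.
    """
    m = tfidf_matrix(docs + [query])
    _, _, vt = np.linalg.svd(m, full_matrices=False)
    reduced = vt[:k, :]          # documents as columns in k latent dimensions
    q = reduced[:, -1]           # last column is the query
    best, best_sim = -1, -2.0
    for j in range(len(docs)):
        d = reduced[:, j]
        sim = float(np.dot(q, d) /
                    (np.linalg.norm(q) * np.linalg.norm(d) + 1e-12))
        if sim > best_sim:
            best, best_sim = j, sim
    return best
```

In this sketch, a query sharing terms with a document is pulled toward it in the latent space, so the unrelated document loses the comparison.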
In one embodiment of the present disclosure, the doc2vec technique generates text vectors through a shallow neural network. The trained network contains a matrix carrying text information, which can be used to calculate text similarity.
In one embodiment of the present disclosure, cosine similarity (Cosine Similarity) measures the similarity of two text vectors by calculating the cosine of the angle between them; the larger the value, the higher the similarity. In one embodiment of the present disclosure, the similarity of the second text to the first text is a cosine similarity.
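The cosine similarity described above can be sketched minimally as follows; the vectors would come from TF-IDF, word2vec or doc2vec representations, which this fragment leaves abstract.

```python
import math


def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```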
In one embodiment of the present disclosure, the similarity of long texts may be determined using a latent semantic analysis technique, the similarity of short texts may be determined using a word2vec technique, and finally the cosine similarity between the text of the current user and the texts of other users is solved. For example, the texts of the 1% of other users most similar to the first text may be found as the second texts. Text length is distinguished primarily because short text depends more heavily on the local context of nearby words, while long text depends more on the overall document context. Here, long text/short text is determined by an accuracy-based text length analysis model, and the text length analysis model may be trained by related technical means, which is not described herein.
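The length-based routing above can be sketched as a simple dispatcher. The word-count threshold and the returned method names are hypothetical stand-ins: the disclosure leaves the text length analysis model and the similarity back-ends abstract.

```python
# Hypothetical preset length threshold (in words); the disclosure's model is
# trained for accuracy rather than fixed by rule.
LENGTH_THRESHOLD = 20


def similarity_method(text):
    """Pick a similarity technique based on text length."""
    if len(text.split()) < LENGTH_THRESHOLD:
        return "word2vec"                   # short text: word-embedding similarity
    return "latent_semantic_analysis"       # long text: LSI similarity
```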
The above is merely a brief description of the three techniques of "latent semantic analysis (Latent Semantic Indexing)", "doc2vec" and "cosine similarity (Cosine Similarity)"; those skilled in the art will understand that details of the above three techniques can be obtained from the related art, and are not described herein.
In the embodiment of the disclosure, a second text whose similarity to the first text is greater than a preset similarity threshold is searched for among texts provided by users other than the current user according to the first text provided for the current user; objects related to the second text that were historically actually acquired by the user providing the second text are searched for; a first target object is selected from the searched objects related to the second text according to the similarity between the second text and the first text; and the information of the first target object is pushed to the current user. Associating text similarity with target objects, instead of searching for the object category corresponding to the text, ensures accurate, multi-category and expert-level target object selection and information pushing in response to the text provided by the user; in particular, even when no user information is available, the information of objects expected by the user can still be pushed to the user accurately.
An object information pushing method according to another embodiment of the present disclosure is further described below with reference to fig. 2.
Fig. 2 illustrates a flowchart of an object information pushing method according to another embodiment of the present disclosure. As shown in fig. 2, this embodiment differs from the embodiment shown in fig. 1 in that steps S201 and S202 are further included before step S101.
In step S201, it is determined whether the first text provided by the current user is a short text having a length less than a preset length threshold or a long text having a length greater than or equal to the preset length threshold.
In step S202, according to the determination result of whether the first text is a long text or a short text, the similarity between the first text and the text provided by the user other than the current user is calculated by different similarity calculation methods.
In one embodiment according to the present disclosure, whether the first text is a long text or a short text may be determined by the aforementioned accuracy-based text length analysis model. For example, if the text length is smaller than the preset length threshold, the text is short, and if the text length is greater than or equal to the preset length threshold, the text is long.
In one embodiment according to the present disclosure, when it is determined that the first text is a long text, the similarity of the first text to the text provided by a user other than the current user may be calculated by the aforementioned latent semantic analysis method. When it is determined that the first text is a short text, the similarity of the first text to the text provided by a user other than the current user may be calculated by the doc2vec method described above. By adopting different similarity calculation modes for the long text and the short text, the similarity between the first text and the text provided by the user other than the current user can be calculated more accurately.
In one embodiment according to the present disclosure, step S201 includes: and determining whether the first text provided by the current user is a short text with the length smaller than a preset length threshold or a long text with the length larger than or equal to the preset length threshold according to a preset text length analysis model. As described above, the long text/short text may be determined by a text length analysis model based on accuracy, and the text length analysis model may be trained by related techniques, which will not be described herein. The word2vec algorithm and the matrix generation algorithm of the latent semantic analysis can be automatically trained offline based on accuracy.
An object information pushing method according to still another embodiment of the present disclosure is further described below with reference to fig. 3.
Fig. 3 shows a flowchart of an object information pushing method according to still another embodiment of the present disclosure. The embodiment shown in fig. 3 differs from the embodiment shown in fig. 1 in that step S301 is further included.
In step S301, a plurality of texts provided by the current user are synthesized into a first text.
In one embodiment according to the present disclosure, if the current user provides a plurality of texts, for example, poses a plurality of questions, each of which may relate to a different target object, the plurality of texts may be synthesized into a first text. This is because the embodiments of the present disclosure do not need to search for a corresponding answer to each question of the current user, but can search for similar second texts (similar questions) posed by other users based on text similarity, thereby selecting a target object from the objects related to the second texts that other users have actually acquired.
As described above, after the plurality of texts provided by the current user are synthesized into the first text, there may be a plurality of second texts (questions of other users) most similar to the first text (questions of the current user), so there may be a plurality of users who provide the second texts, and thus there may be a plurality of first target objects related to the second texts. Thus, information of a plurality of first target objects related to a plurality of second texts from different other users can be pushed to the current user.
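The synthesis of step S301 can be sketched minimally as concatenation of the user's texts; the disclosure leaves the synthesis method abstract, so the joining strategy below is an assumption for illustration.

```python
def synthesize_first_text(texts):
    """Combine the current user's multiple texts into a single first text."""
    return " ".join(t.strip() for t in texts if t.strip())
```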
Step S103 in the object information pushing method according to still another embodiment of the present disclosure is further described below with reference to fig. 4.
Fig. 4 shows a flowchart of one example of step S103 in the object information pushing method according to an embodiment of the present disclosure. As shown in fig. 4, step S103 includes steps S401 and S402.
In step S401, the searched objects related to the second text are ranked according to the similarity between the second text and the first text.
In step S402, a first target object is selected from the searched objects related to the second text according to the result of the ranking.
In one embodiment according to the present disclosure, the texts of the first few (e.g., first 1%, first 10%, etc.) other users having the highest similarity to the first text may be found as the second texts. Selecting a first target object from the searched objects related to the second text according to the result of the sorting means determining which objects related to the second text the users who provided the second text actually acquired, and selecting the first target object from among those objects. When the information of the first target object selected in this way is pushed to the current user, accurate, multi-category and expert-level object information pushing can be realized.
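Steps S401-S402 can be sketched as follows: each candidate object is paired with the similarity of the second text through which it was found, the candidates are ranked by that similarity, and the top-k are kept as first target objects. The pair representation and the default k are assumptions for illustration.

```python
def select_first_target_objects(candidates, k=5):
    """candidates: list of (similarity, object_id) pairs.

    Rank by the similarity between the second text and the first text,
    then keep the k highest-ranked objects as first target objects.
    """
    ranked = sorted(candidates, key=lambda pair: pair[0], reverse=True)
    return [obj for _, obj in ranked[:k]]
```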
An object information pushing method according to still another embodiment of the present disclosure is further described below with reference to fig. 5.
Fig. 5 shows a flowchart of an object information pushing method according to still another embodiment of the present disclosure. As shown in fig. 5, the difference from the embodiment shown in fig. 1 is that step S501 is further included, and step S101 includes step S502.
In step S501, a user with high relevance is selected from users other than the current user according to a preset relevance condition.
In step S502, a second text having a similarity with the first text greater than a preset similarity threshold is searched for among texts provided by high-relevancy users according to the first text provided for the current user.
In one embodiment according to the present disclosure, a high-relevance user may refer to a user having a special relationship with the current user; for example, a friend of the current user on the platform may be a high-relevance user, and a friend of the current user in reality may also be a high-relevance user. The relevance condition may be set in various ways, such as the number of contacts between the current user and other users, the current user having set a specific flag for other users, the current user having participated in a certain action with other users, etc. In one embodiment according to the present disclosure, searching for a second text whose similarity to the first text is greater than a preset similarity threshold among texts provided by high-relevance users may increase the likelihood of finding a target object that meets the current user's requirements.
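Steps S501-S502 can be sketched as a screening filter applied before the similarity search. The user record shape, the friendship/contact-count criteria and the threshold are hypothetical illustrations of the preset relevance condition, which the disclosure leaves open.

```python
def high_relevance_users(users, current_user, min_contacts=3):
    """Keep users who are friends of, or frequent contacts of, the current user."""
    return [
        u for u in users
        if u["id"] != current_user["id"]
        and (u["id"] in current_user["friends"]
             or current_user["contact_counts"].get(u["id"], 0) >= min_contacts)
    ]
```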
An object information pushing method according to still another embodiment of the present disclosure is further described below with reference to fig. 6.
Fig. 6 shows a flowchart of an object information pushing method according to still another embodiment of the present disclosure. As shown in fig. 6, this embodiment differs from the embodiment shown in fig. 1 in that step S601 is further included, and step S104 includes step S602.
In step S601, an object corresponding to the first text is selected, as a second target object, from among the objects actually acquired historically by the current user.
In step S602, information of the first target object and information of the second target object are pushed to the current user.
In one embodiment according to the present disclosure, selecting an object corresponding to the first text from among the objects actually acquired historically by the current user as the second target object makes it possible to search more accurately for a target object that satisfies the current user's requirements.
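Step S602 can be sketched as merging the first target objects (found via similar questions of other users) with the second target object (found in the current user's own history) into one push list. De-duplication and ordering below are assumptions; the disclosure only requires that both kinds of information be pushed.

```python
def build_push_list(first_targets, second_target):
    """Merge first target objects with the optional second target object."""
    items = list(first_targets)
    if second_target is not None and second_target not in items:
        items.append(second_target)
    return items
```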
An example of an application scenario of the object information pushing method according to an embodiment of the present disclosure is described below with reference to fig. 7.
Fig. 7 is a schematic diagram illustrating an application scenario example of an object information pushing method according to an embodiment of the present disclosure.
As shown in fig. 7, although two user profiles are shown, it should be understood that the two user profiles represent the same user. The current user sends a text dialogue (questions) to the intelligent customer service, and the intelligent customer service judges the text length based on a preset text length analysis model. When the text length is judged to be smaller than the preset threshold n, the text is determined to be a short text, and the questions of other users with the highest cosine similarity are calculated through the word2vec algorithm. When the text length is judged to be greater than or equal to the preset threshold n, the text is determined to be a long text, and the questions posed by other users with the highest cosine similarity are calculated through the latent semantic analysis algorithm. In embodiments of the present disclosure, the word2vec algorithm and the latent semantic analysis algorithm may be trained on the global text corpus.
For example, by finding the questions of the 1% of other users most similar to the question of the current user, the commodities corresponding to that 1% of questions can be found, i.e., which commodities were last purchased by the other users who asked similar questions. The cosine similarities of the questions (second texts) corresponding to the commodities can be ranked, the first 5 second texts with the highest similarity to the current user's question selected, the commodities corresponding to those first 5 second texts found, and information of those commodities pushed to the current user.
In addition, if the current user is an old user, the similar questions (second texts) may be searched for after screening for first-level users whose relevance to the current user reaches the preset condition. For example, a first-level user is a friend of the current user on the platform. The commodities corresponding to the questions posed by first-level users with the highest similarity to the current user's question (for example, the first 5) can be selected, and the information of those commodities pushed to the current user.
In addition, recommendations of commodities corresponding to the current user's question, made based on the current user's previous purchase behavior, can be merged into the information of the commodities selected based on the second texts and pushed (displayed) to the current user.
Fig. 8 shows a block diagram of an object information pushing apparatus according to another embodiment of the present disclosure. The apparatus may include a first search module 801, a second search module 802, a selection module 803, and a push module 804.
The first search module 801 is configured to search for a second text having a similarity to the first text greater than a preset similarity threshold among texts provided by users other than the current user based on the first text provided for the current user.
The second search module 802 is configured to search for objects related to the second text that were historically actually obtained by the user providing the second text.
The selection module 803 is configured to select a first target object from the searched objects related to the second text according to the similarity of the second text to the first text.
The pushing module 804 is configured to push information of the first target object to the current user.
The internal functions and structure of the object information pushing system are described above. In one possible design, the structure of the object information pushing system may be implemented as an object information pushing device. As shown in fig. 9, the processing device 900 may include a processor 901 and a memory 902.
The memory 902 is configured to store a program supporting the object information pushing system to execute the object information pushing method in any of the above embodiments, and the processor 901 is configured to execute the program stored in the memory 902.
The memory 902 is configured to store one or more computer instructions, wherein the one or more computer instructions are executed by the processor 901 to perform the steps of:
searching a second text with similarity greater than a preset similarity threshold value with the first text in texts provided by users other than the current user according to the first text provided for the current user;
searching for objects related to the second text that were historically obtained by a user providing the second text;
selecting a first target object from the searched objects related to the second text according to the similarity between the second text and the first text;
and pushing the information of the first target object to the current user.
In one embodiment of the present disclosure, the first text provided by the current user may be a question posed by the current user at a certain platform. Text provided by users other than the current user may be a question posed by users other than the current user on the platform. In embodiments of the present disclosure, by finding a second text (question of the other user) that is most similar to the first text (question of the current user), the relevant user of the second text is found instead of the answer to which the second text (question) is directed. A first target object associated with the second text (e.g., a product that the associated user has actually purchased) is obtained through the behavior of the associated user. Thus, the information of the first target object related to the second text can be pushed to the current user.
In one embodiment of the present disclosure, there may be a plurality of second texts (questions of other users) most similar to the first text (question of the current user), so there may be a plurality of users who provide the second texts, and thus there may be a plurality of first target objects related to the second texts. Thus, information of a plurality of first target objects related to a plurality of second texts from different other users can be pushed to the current user.
In one embodiment of the present disclosure, three basic techniques, "latent semantic analysis (Latent Semantic Indexing)", "doc2vec" and "cosine similarity (Cosine Similarity)", may be employed to determine the similarity of the second text to the first text.
In one embodiment of the present disclosure, TF-IDF (Term Frequency-Inverse Document Frequency) values of each word in the text/dictionary dimensions may be calculated by the latent semantic analysis method, and dimension reduction is performed by singular value decomposition to obtain a reduced word-document matrix. The most similar text is then found by calculating the cosine similarity of the new text to each column of the matrix.
In one embodiment of the present disclosure, the doc2vec technique generates text vectors through a shallow neural network. The trained network contains a matrix carrying text information, which can be used to calculate text similarity.
In one embodiment of the present disclosure, cosine similarity (Cosine Similarity) measures the similarity of two text vectors by calculating the cosine of the angle between them; the larger the value, the higher the similarity. In one embodiment of the present disclosure, the similarity of the second text to the first text is a cosine similarity.
In one embodiment of the present disclosure, the similarity of long texts may be determined using a latent semantic analysis technique, the similarity of short texts may be determined using a word2vec technique, and finally the cosine similarity between the text of the current user and the texts of other users is solved. For example, the texts of the 1% of other users most similar to the first text may be found as the second texts. Text length is distinguished primarily because short text depends more heavily on the local context of nearby words, while long text depends more on the overall document context. Here, long text/short text is determined by an accuracy-based text length analysis model, and the text length analysis model may be trained by related technical means, which is not described herein.
The above is merely a brief description of the three techniques of "latent semantic analysis (Latent Semantic Indexing)", "doc2vec" and "cosine similarity (Cosine Similarity)"; those skilled in the art will understand that details of the above three techniques can be obtained from the related art, and are not described herein.
In the embodiment of the disclosure, a second text whose similarity to the first text is greater than a preset similarity threshold is searched for among texts provided by users other than the current user according to the first text provided for the current user; objects related to the second text that were historically actually acquired by the user providing the second text are searched for; a first target object is selected from the searched objects related to the second text according to the similarity between the second text and the first text; and the information of the first target object is pushed to the current user. Associating text similarity with target objects, instead of searching for the object category corresponding to the text, ensures accurate, multi-category and expert-level target object selection and information pushing in response to the text provided by the user; in particular, even when no user information is available, the information of objects expected by the user can still be pushed to the user accurately.
In one embodiment according to the present disclosure, before searching for a second text having a similarity to the first text greater than a preset similarity threshold among texts provided by users other than the current user based on the first text provided for the current user, the one or more computer instructions are further executable by the processor 901 to: determining whether a first text provided by a current user is a short text with a length smaller than a preset length threshold or a long text with a length larger than or equal to the preset length threshold.
and calculating, according to the determination result of whether the first text is a long text or a short text, the similarity between the first text and the text provided by users other than the current user through different similarity calculation modes.
In one embodiment according to the present disclosure, whether the first text is a long text or a short text may be determined by the aforementioned accuracy-based text length analysis model. For example, if the text length is smaller than the preset length threshold, the text is short, and if the text length is greater than or equal to the preset length threshold, the text is long.
In one embodiment according to the present disclosure, when it is determined that the first text is a long text, the similarity of the first text to the text provided by a user other than the current user may be calculated by the aforementioned latent semantic analysis method. When it is determined that the first text is a short text, the similarity of the first text to the text provided by a user other than the current user may be calculated by the doc2vec method described above. By adopting different similarity calculation modes for the long text and the short text, the similarity between the first text and the text provided by the user other than the current user can be calculated more accurately.
In one embodiment according to the present disclosure, determining whether a first text provided by a current user is a short text having a length less than a preset length threshold or a long text having a length greater than or equal to the preset length threshold includes: and determining whether the first text provided by the current user is a short text with the length smaller than a preset length threshold or a long text with the length larger than or equal to the preset length threshold according to a preset text length analysis model. As described above, the long text/short text may be determined by a text length analysis model based on accuracy, and the text length analysis model may be trained by related techniques, which will not be described herein. The word2vec algorithm and the matrix generation algorithm of the latent semantic analysis can be automatically trained offline based on accuracy.
In one embodiment according to the present disclosure, the one or more computer instructions are further executable by the processor 901 to perform the steps of: and synthesizing a plurality of texts provided by the current user into a first text.
In one embodiment according to the present disclosure, if a current user provides a plurality of texts, for example, a plurality of questions are posed, each of which may relate to a different target object, the plurality of texts may be synthesized into a first text. This is because the embodiments of the present disclosure do not need to search for a corresponding answer to each question of the current user, but can search for similar second texts (similar questions) presented by other users based on the text similarity, thereby selecting a target object from other actually acquired objects related to the second texts.
As described above, after the plurality of texts provided by the current user are synthesized into the first text, there may be a plurality of second texts (questions of other users) most similar to the first text (questions of the current user), so there may be a plurality of users who provide the second texts, and thus there may be a plurality of first target objects related to the second texts. Thus, information of a plurality of first target objects related to a plurality of second texts from different other users can be pushed to the current user.
In one embodiment according to the present disclosure, selecting a first target object from the searched objects related to the second text according to the similarity of the second text to the first text, includes: sorting the searched objects related to the second text according to the similarity between the second text and the first text; and selecting a first target object from the searched objects related to the second text according to the sorting result.
In one embodiment according to the present disclosure, the texts of the first few (e.g., first 1%, first 10%, etc.) other users having the highest similarity to the first text may be found as the second texts. Selecting a first target object from the searched objects related to the second text according to the result of the sorting means determining which objects related to the second text the users who provided the second text actually acquired, and selecting the first target object from among those objects. When the information of the first target object selected in this way is pushed to the current user, accurate, multi-category and expert-level object information pushing can be realized.
In one embodiment according to the present disclosure, the one or more computer instructions are further executable by the processor 901 to perform the steps of: screening out users with high relevance from users except the current user according to preset relevance conditions; wherein searching for a second text having a similarity to the first text greater than a preset similarity threshold from among texts provided by users other than the current user based on the first text provided for the current user, comprises: searching a second text with the similarity with the first text being larger than a preset similarity threshold value in the texts provided by the high-relevance users according to the first text provided for the current user.
In one embodiment according to the present disclosure, a high-relevance user may refer to a user having a special relationship with the current user; for example, a friend of the current user on the platform may be a high-relevance user, and a friend of the current user in reality may also be a high-relevance user. The relevance condition may be set in various ways, such as the number of contacts between the current user and another user, the current user having set a specific flag for another user, the current user having participated in a certain activity with another user, and so on. In one embodiment according to the present disclosure, searching for a second text whose similarity to the first text is greater than the preset similarity threshold among texts provided by high-relevance users increases the likelihood of finding a target object that meets the current user's requirements.
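The screening step can be sketched as a simple predicate over candidate users. The field names (`contacts_with_current`, `flagged_by_current`, `shared_activity`) are hypothetical placeholders for the example relevance conditions mentioned above, not fields defined by the disclosure.

```python
def filter_high_relevance(candidates, min_contacts=3):
    """Screen out high-relevance users according to preset relevance
    conditions. A candidate qualifies if any illustrative condition holds:
    enough contacts with the current user, an explicit flag set by the
    current user, or a shared activity with the current user."""
    high_relevance = []
    for user in candidates:
        if (user.get("contacts_with_current", 0) >= min_contacts
                or user.get("flagged_by_current", False)
                or user.get("shared_activity", False)):
            high_relevance.append(user["id"])
    return high_relevance
```

The similarity search for second texts would then be restricted to texts provided by the returned users.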
In one embodiment according to the present disclosure, the one or more computer instructions are further executable by the processor 901 to perform the following steps: selecting, from among objects actually acquired by the current user in history, an object corresponding to the first text as a second target object; wherein pushing the information of the first target object to the current user comprises: pushing the information of the first target object and the information of the second target object to the current user.
In one embodiment according to the present disclosure, selecting an object corresponding to the first text from among objects actually acquired historically by the current user as the second target object makes it possible to search more accurately for target objects that satisfy the current user's requirements.
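Combining the two sources of recommendations can be sketched as follows. The matching of history objects against the first text by keyword, and the object fields `id` and `title`, are assumptions for illustration; a real implementation could use any correspondence criterion.

```python
def build_push_list(first_targets, history_objects, first_text_keywords):
    """Combine first target objects (found via similar second texts from
    other users) with second target objects (chosen from the current
    user's own acquisition history by a hypothetical keyword match),
    de-duplicating by object id while preserving order."""
    second_targets = [obj for obj in history_objects
                      if any(kw in obj["title"] for kw in first_text_keywords)]
    seen, pushed = set(), []
    for obj in first_targets + second_targets:
        if obj["id"] not in seen:
            seen.add(obj["id"])
            pushed.append(obj)
    return pushed
```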
The processor 901 is configured to perform all or part of the steps of the foregoing method steps.
The structure of the object information pushing device may further comprise a communication interface, used by the object information pushing device to communicate with other devices or a communication network.
The exemplary embodiments of the present disclosure also provide a computer storage medium storing computer software instructions for use by the object information pushing system, the instructions comprising a program for executing the object information pushing method according to any of the above embodiments.
Fig. 10 is a schematic diagram of a computer system suitable for implementing an object information pushing method according to an embodiment of the present disclosure.
As shown in fig. 10, the computer system 1000 includes a Central Processing Unit (CPU) 1001 that can execute various processes in the embodiment shown in fig. 1 described above in accordance with a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage section 1008 into a Random Access Memory (RAM) 1003. In the RAM 1003, various programs and data required for the operation of the system 1000 are also stored. The CPU 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to the bus 1004.
The following components are connected to the I/O interface 1005: an input section 1006 including a keyboard, a mouse, and the like; an output section 1007 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), or the like, and a speaker or the like; a storage section 1008 including a hard disk or the like; and a communication section 1009 including a network interface card such as a LAN card, a modem, or the like. The communication section 1009 performs communication processing via a network such as the Internet. A drive 1010 is also connected to the I/O interface 1005 as needed. A removable medium 1011, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 1010 as needed, so that a computer program read therefrom can be installed into the storage section 1008 as needed.
In particular, the method described above with reference to fig. 1 may be implemented as a computer software program according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing the method of fig. 1. In such an embodiment, the computer program can be downloaded and installed from a network via the communication section 1009, and/or installed from the removable medium 1011.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software, or may be implemented by hardware. The units or modules described may also be provided in a processor, the names of which in some cases do not constitute a limitation of the unit or module itself.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the system described in the above embodiments, or may be a computer-readable storage medium that exists separately and is not assembled into a device. The computer-readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description is only of the preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in this disclosure is not limited to the specific combinations of features described above, but also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by substituting the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.