Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein merely illustrate the invention and do not restrict it. It should also be noted that, for convenience of description, only the portions related to the invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 shows an exemplary architecture 100 to which embodiments of the method for pushing information or the apparatus for pushing information of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium for providing communication links between the terminal devices 101, 102, 103 and the server 105. Network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
The terminal devices 101, 102, 103 interact with the server 105 via the network 104 to receive or send messages and the like. Various client applications, such as browser applications and search applications, may be installed on the terminal devices 101, 102, 103.
The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and supporting voice interaction, including but not limited to smartphones, tablet computers, e-book readers, laptop computers, desktop computers, and the like. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules providing distributed services) or as a single piece of software or software module. This is not particularly limited herein.
The server 105 may be a server that provides various services, such as a backend server that provides support for client applications installed on the terminal devices 101, 102, 103. The back-end server may obtain voice data of the user corresponding to the terminal device 101, 102, 103 for the keywords in the preset keyword set, and determine information to be pushed according to the voice data. After that, the information to be pushed may be pushed to the terminal devices 101, 102, 103.
It should be noted that the method for pushing information provided by the embodiment of the present disclosure is generally performed by the server 105, and accordingly, the apparatus for pushing information is generally disposed in the server 105.
The server 105 may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster formed by multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. This is not particularly limited herein.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method for pushing information in accordance with the present disclosure is shown. The method for pushing the information comprises the following steps:
Step 201: acquire voice data of a first target user corresponding to the keywords in a preset keyword set, to obtain a voice data set.
In the present embodiment, the execution body of the method for pushing information (e.g., the server 105 shown in fig. 1) may acquire the voice data set from the terminal device (e.g., the terminal devices 101, 102, 103 shown in fig. 1) corresponding to the first target user.
The first target user may be a user corresponding to any terminal device that is communicatively connected to the execution body. The execution body may push the preset keyword set to the terminal device corresponding to the first target user in advance, and that terminal device displays the received keywords in the preset keyword set. The first target user may then input voice data for each keyword in the preset keyword set through the terminal device used by the first target user.
The preset keyword set may be a keyword set composed of a plurality of keywords previously specified by a technician. The selection of the keywords can be determined according to the actual application requirements.
Step 202: for voice data in the voice data set, acquire at least one matching voice data of the keyword corresponding to the voice data, and determine the similarity between the voice data and the at least one matching voice data.
In this embodiment, for any keyword, the matching voice data corresponding to the keyword may refer to the voice data of the keyword that meets the preset condition. The preset condition can be set according to the actual application requirement. The matching speech data may be directly specified in advance by a technician.
As an example, for any keyword, some voice data with standard pronunciation and fluent delivery of the keyword may be selected as the matching voice data for that keyword. The reasoning is that if a user is interested in information related to a keyword, the user is usually familiar with its pronunciation and, in general, reads it out fluently.
Conversely, if the user is less interested in the information associated with the keyword, the user may be less familiar with the keyword, and the user's pronunciation of it may be more halting, or even wrong.
Moreover, according to different application requirements, different characteristics can be considered to analyze the voice data. For example, in some cases, users in different regions may have different accents. Thus, the voice data may also reflect location information of the corresponding user.
Based on this, by comparing the similarity between the voice data of the first target user for a keyword and the matching voice data of the keyword, the familiarity of the first target user for the keyword and the like can be reflected to a certain extent, so that whether the related information of the keyword is the content that the first target user may be interested in can be further estimated.
Optionally, for any keyword, at least one piece of speech data corresponding to the keyword may be obtained first, so as to obtain a speech data set corresponding to the keyword. Then, voice data meeting the preset conditions can be selected from the voice data set to serve as matched voice data, and technicians can also designate some voice data in the voice data set to serve as matched voice data.
At least one voice data corresponding to the keyword can be acquired from some data platforms, and can also be generated by some audio production software.
Optionally, for any keyword, at least one matching voice data corresponding to the keyword may be determined by:
step one, voice data of at least one keyword corresponding to the matched voice data and input by at least one second target user are obtained and used as candidate voice data, and a candidate voice data set is obtained. The terminal device corresponding to the second target user is pushed with target information, wherein the target information may be determined according to at least one keyword corresponding to the matching voice data.
The second target user may be a user corresponding to any terminal device that is communicatively connected to the execution body, and is a user who has historically entered voice data for the at least one keyword corresponding to the matching voice data. That is, the historical voice data for the at least one keyword corresponding to the matching voice data may be used as the candidate voice data.
Further, the second target user may also be a user of a terminal device that has received the target information determined by the execution body according to the at least one keyword corresponding to the matching voice data.
As an example, for the keyword "A", the voice data of the keyword "A" sent to the execution body by a terminal device that received the information pushed by the execution body according to the keyword "A" may be taken as candidate voice data.
The execution body may store in advance the candidate voice data sets corresponding to the respective keywords. The corresponding candidate voice data set can then be looked up directly according to the keyword.
It should be noted that, for convenience of description, different users are referred to as the first target user and the second target user, respectively. It will be understood by those skilled in the art that the terms "first" and "second" are not intended to be limiting.
Step two, acquiring user behavior data of a second target user corresponding to the candidate voice data for the target information for the candidate voice data in the candidate voice data set; and in response to determining that the user behavior data meets the preset conditions, selecting the candidate voice data as matched voice data.
In this step, the database corresponding to the execution body may store the user behavior data of the second target user with respect to the target information. Therefore, the user behavior data of the second target user for the target information can be acquired from that database.
The preset condition can be set according to the actual application requirement. For example, the preset condition may be that a staying time of the second target user on a page displaying the target information exceeds a preset threshold within a preset time period from when the terminal device corresponding to the second target user receives the target information. When the target information includes at least two pieces of push information, the preset condition may be that the number of pieces of push information clicked by the second target user is greater than a preset threshold.
As an example, continuing with the keyword "A", after determining the candidate voice data of the keyword "A", candidate voice data whose corresponding user browsed the information pushed according to the keyword "A" for longer than a preset duration threshold may be selected from the candidate voice data as matching voice data.
In other words, a user's degree of preference for information related to a keyword is estimated from that user's behavior toward the information previously pushed for the keyword. Voice data from users with a higher degree of preference for the keyword's related information can then be selected as matching voice data. This helps improve the accuracy of later estimating a user's preference for the keyword's related information from the similarity between that user's voice data for the keyword and the matching voice data.
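The candidate-filtering of steps one and two above can be sketched as follows. This is a hypothetical illustration, not part of the disclosure: the record fields (`user_id`, `dwell_seconds`, `clicks`) and both thresholds are assumptions, and the preset condition here simply accepts either a sufficient dwell time or enough clicks.

```python
# Hypothetical sketch: keep candidate voice data whose second target user's
# behavior toward the pushed target information meets a preset condition.
# Field names and thresholds are illustrative assumptions.

def select_matching_voice_data(candidates, behavior_by_user,
                               dwell_threshold=30.0, click_threshold=2):
    """candidates: list of dicts with a user_id; behavior_by_user: per-user stats."""
    matching = []
    for cand in candidates:
        behavior = behavior_by_user.get(cand["user_id"], {})
        dwell_ok = behavior.get("dwell_seconds", 0) > dwell_threshold
        clicks_ok = behavior.get("clicks", 0) > click_threshold
        if dwell_ok or clicks_ok:  # preset condition: either signal suffices
            matching.append(cand)
    return matching

candidates = [
    {"user_id": "u1", "audio": "a.wav"},
    {"user_id": "u2", "audio": "b.wav"},
]
behavior = {"u1": {"dwell_seconds": 45.0, "clicks": 1},
            "u2": {"dwell_seconds": 5.0, "clicks": 0}}
print(select_matching_voice_data(candidates, behavior))  # only u1 qualifies
```

A real system would read the behavior records from the database mentioned in the text rather than an in-memory dictionary.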
For each keyword in the preset keyword set, at least one corresponding matching voice data can be preset. The matching voice data may be stored locally on the execution body or in a database corresponding to the execution body, and may be acquired from either location. Of course, the matching voice data corresponding to the keyword may also be acquired from a third-party data platform.
For any voice data, the similarity between the voice data and at least one corresponding matching voice data can be calculated in different manners according to application scenarios.
Optionally, the similarity between the voice data and each matching voice data in the at least one matching voice data may first be determined, to obtain a similarity set. Then, either the maximum value or the average value of the similarities in the obtained set may be taken as the similarity between the voice data and the corresponding at least one matching voice data.
Optionally, after the similarity set is obtained, a weight value corresponding to each matching voice data in the at least one matching voice data may be obtained first. And then determining the weighted average of the similarity in the obtained similarity set as the similarity of the voice data and at least one corresponding matching voice data. The weighted value corresponding to each matching voice data may be pre-specified by a technician, or may be determined according to an attribute value of a target attribute (e.g., environmental noise) of each matching voice data.
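The aggregation options just described (maximum, plain mean, and weighted mean over the similarity set) can be sketched as below; the function name, the strategy labels, and the example weights are illustrative assumptions, not terminology from the disclosure.

```python
# Illustrative sketch of the three aggregation strategies over a similarity
# set: maximum, plain mean, and weighted mean. Weights could, for example,
# be derived from an attribute such as environmental noise of each matching
# voice clip (an assumption here).

def aggregate_similarity(similarities, weights=None, strategy="max"):
    if strategy == "max":
        return max(similarities)
    if strategy == "mean":
        return sum(similarities) / len(similarities)
    if strategy == "weighted":
        total = sum(weights)
        return sum(s * w for s, w in zip(similarities, weights)) / total
    raise ValueError(f"unknown strategy: {strategy}")

sims = [0.9, 0.6, 0.3]
print(aggregate_similarity(sims))                               # 0.9
print(aggregate_similarity(sims, strategy="mean"))              # 0.6
print(aggregate_similarity(sims, [0.5, 0.3, 0.2], "weighted"))  # 0.69
```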
The similarity of any two pieces of voice data can be calculated using open-source voice similarity calculation methods. For example, a deep-learning-based speech similarity matching algorithm may be employed to determine the similarity of two pieces of voice data.
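As a hedged illustration of this similarity step, the sketch below compares two clips by the cosine similarity of their average magnitude spectra. This is a deliberately simple classical stand-in, not the deep-learning matcher the text mentions; the frame size and the synthetic "utterances" are assumptions.

```python
import numpy as np

def spectral_similarity(x, y, frame=256):
    """Cosine similarity between the average magnitude spectra of two signals."""
    def avg_spectrum(sig):
        sig = np.asarray(sig, dtype=float)
        n = len(sig) // frame * frame          # drop the ragged tail
        frames = sig[:n].reshape(-1, frame)
        return np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    a, b = avg_spectrum(x), avg_spectrum(y)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

t = np.linspace(0, 1, 8000, endpoint=False)    # 1 s at 8 kHz
tone = np.sin(2 * np.pi * 440 * t)             # one "utterance"
same = np.sin(2 * np.pi * 440 * t + 0.5)       # same pitch, shifted phase
other = np.sin(2 * np.pi * 1200 * t)           # a different pitch
print(spectral_similarity(tone, same) > spectral_similarity(tone, other))  # True
```

A production system would compare learned embeddings or MFCC sequences under dynamic time warping instead; only the interface (two clips in, one score out) carries over.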
Step 203: select a target number of voice data from the voice data set in descending order of the corresponding similarity.
In this embodiment, the target number may be preset by a technician, or may be determined according to a preset condition. For example, the preset condition may include that the target number is equal to thirty percent of the total number of voice data included in the voice data set, or the like.
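Step 203 can be sketched as follows; the 30% ratio is the preset condition mentioned above, and the helper name and mock data are illustrative assumptions.

```python
# Sketch of step 203: sort the voice data set by similarity in descending
# order and keep the top target-number entries, with the target number
# taken here as 30% of the set (one possible preset condition).

def select_top_voice_data(voice_data, ratio=0.3):
    """voice_data: list of (keyword, similarity) pairs."""
    target = max(1, int(len(voice_data) * ratio))
    return sorted(voice_data, key=lambda kv: kv[1], reverse=True)[:target]

data = [("Cosplay", 0.41), ("Python", 0.93), ("chess", 0.65),
        ("travel", 0.12), ("cooking", 0.58), ("tennis", 0.30),
        ("opera", 0.77), ("hiking", 0.25), ("poker", 0.51), ("jazz", 0.66)]
print(select_top_voice_data(data))  # top 3 of 10, highest similarity first
```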
Step 204: determine information to be pushed according to the keyword corresponding to the selected voice data, and push the determined information to be pushed to the terminal device corresponding to the first target user.
In this embodiment, after determining the keyword, various different manners may be flexibly adopted to determine the information to be pushed. For example, corresponding push information may be preset for each keyword. At this time, the push information corresponding to the keyword can be directly acquired as the information to be pushed.
For another example, the search may be performed in the target database based on the keyword, and the information to be pushed may be selected from the search result according to the actual requirement. For example, the latest retrieval result may be selected as the information to be pushed. The target database may be a pre-designated database, or may refer to a database meeting a certain condition.
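The two manners just described (a preset per-keyword mapping, and retrieval from a target database with the latest result chosen) might look like the sketch below; all names and mock records are assumptions for illustration.

```python
# Hypothetical sketch of step 204's two manners of determining the
# information to be pushed. All data here is mock.

PRESET_PUSH = {"Python": "Intro to Python course"}   # manner 1: preset mapping

ARTICLES = [  # manner 2's "target database": (keyword tag, publish date, title)
    ("Python", "2023-01-10", "Typing tips"),
    ("Python", "2024-05-02", "Async patterns"),
    ("Cosplay", "2024-03-01", "Convention guide"),
]

def info_to_push(keyword):
    if keyword in PRESET_PUSH:                 # manner 1: direct lookup
        return PRESET_PUSH[keyword]
    hits = [a for a in ARTICLES if a[0] == keyword]
    if hits:                                   # manner 2: latest retrieval result
        return max(hits, key=lambda a: a[1])[2]
    return None

print(info_to_push("Python"))   # preset entry wins
print(info_to_push("Cosplay"))  # falls back to the most recent retrieval hit
```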
With continued reference to fig. 3, fig. 3 is a schematic diagram 300 of an application scenario of the method for pushing information according to the present embodiment. In the application scenario of fig. 3, when a user first logs in to a client application installed on the terminal device 301, the client application may display the keywords in the preset keyword set to the user on the terminal device 301. As shown in the figure, there are three keywords: "Cosplay" (costume play), "Python" (a computer programming language), and "Charulor" (an e-sports-related term).
Thereafter, the user can enter voice data for the three keywords through the terminal device 301. As shown in the figure, "Cosplay" corresponds to voice data 302, "Python" corresponds to voice data 303, and "Charulor" corresponds to voice data 304. The client application installed on the terminal device 301 can then transmit the voice data 302, the voice data 303, and the voice data 304 to the execution body described above.
Then, the execution body may respectively acquire at least one matching voice data corresponding to each of the three voice data, and determine the similarity between each voice data and its corresponding at least one matching voice data.
Specifically, taking the voice data 304 as an example, two matching voice data 306 corresponding to the voice data 304 may be acquired from the database 305 corresponding to the execution body. Then, the similarity 307 between the voice data 304 and the two matching voice data 306 can be calculated.
Similarly, the similarity 308 between the voice data 302 and its at least one matching voice data is calculated, and the similarity 309 between the voice data 303 and its at least one matching voice data is calculated.
Then, the three obtained similarities may be sorted in descending order, and the maximum, the similarity 309, is selected. Information related to the keyword "Python" corresponding to the similarity 309 may then be selected as the information to be pushed, and pushed to the terminal device 301.
In this application scenario, the keyword "Cosplay" relates to anime and comics culture, the keyword "Python" relates to programming, and the keyword "Charulor" relates to e-sports. The user's familiarity with, or preference for, the three keywords is analyzed according to the similarity between the user's pronunciation of each keyword and the matching voice data corresponding to that keyword.
Taking this application scenario as an example, the similarity between the voice data 303 of the keyword "Python" and the matching voice data of that keyword is the largest, so the user may be considered interested in programming-related information. Based on this, programming-related information may be pushed to the user.
The similarities between the voice data of the keywords "Cosplay" and "Charulor" and the matching voice data corresponding to those two keywords are low, so the user can be considered to know little about anime- and e-sports-related information. Based on this, only a small amount of such information, or none at all, may be pushed.
The method provided by the above embodiment of the present disclosure estimates the user's degree of interest in information related to a keyword by receiving the user's voice data for the keyword and analyzing its similarity to at least one matching voice data corresponding to the keyword. On this basis, information related to the keywords the user is relatively interested in is selected as push information, which avoids pushing the user a large amount of information that the user is not interested in and will not click, and the unnecessary traffic consumption this would cause on both the user terminal and the server side.
With further reference to fig. 4, a flow 400 of yet another embodiment of a method for pushing information is shown. The flow 400 of the method for pushing information comprises the following steps:
Step 401: acquire voice data of a first target user corresponding to the keywords in a preset keyword set, to obtain a voice data set.
Step 402: for voice data in the voice data set, obtain at least one matching voice data of the keyword corresponding to the voice data, and determine the similarity between the voice data and the at least one matching voice data.
Step 403: select a target number of voice data from the voice data set in descending order of the corresponding similarity.
The specific implementation process of the steps 401, 402, and 403 may refer to the related descriptions of the steps 201, 202, and 203 in the corresponding embodiment of fig. 2, and will not be described herein again.
Step 404: determine information to be pushed according to the historical push information of the users corresponding to the at least one matching voice data of the keyword corresponding to the selected voice data, and push the determined information to be pushed to the terminal device corresponding to the first target user.
In this embodiment, after selecting the voice data from the voice data set, the execution body may select push information, as the information to be pushed, from the historical push information of the users corresponding to the matching voice data of the selected voice data. For example, some pieces of push information with a higher click-through rate may be selected as the information to be pushed. Of course, information related to the historical push information may also be retrieved and used as the information to be pushed.
In other words, after the analysis determines that the current user is likely to be interested in the keywords of the selected voice data, push information that interested other users who are also interested in those keywords can be selected from those users' historical push information and pushed to the current user.
Optionally, various existing collaborative filtering algorithms may be adopted to determine, according to the historical push information, the information to be pushed to the terminal device corresponding to the first target user.
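A minimal sketch of step 404's ranking idea, assuming the historical push records carry a clicked flag: pool the records of the users behind the matching voice data and rank items by click-through rate. A production system could instead use a full collaborative filtering model, as noted above; the record shape and item names here are illustrative.

```python
from collections import defaultdict

# Hypothetical sketch: rank the historical push items of similar users by
# click-through rate and take the best ones as information to be pushed.

def rank_historical_pushes(history, top_n=2):
    """history: list of (item, clicked: bool) push records from similar users."""
    shown = defaultdict(int)
    clicked = defaultdict(int)
    for item, was_clicked in history:
        shown[item] += 1
        clicked[item] += int(was_clicked)
    ctr = {item: clicked[item] / shown[item] for item in shown}
    return sorted(ctr, key=ctr.get, reverse=True)[:top_n]

history = [("py-course", True), ("py-course", True), ("py-news", False),
           ("py-news", True), ("ide-review", False)]
print(rank_historical_pushes(history))  # highest click-through rate first
```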
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, in the flow 400 of the method for pushing information in this embodiment, after the keywords whose matching voice data have a relatively high similarity to the voice data of the target user are selected, the information to be pushed to the terminal device corresponding to the target user may further be determined according to the historical push information of the users corresponding to the matching voice data. This makes the determined information to be pushed conform as closely as possible to the preferences of the target user, that is, it increases the degree of matching between the pushed information and the user's preferences. It also broadens the coverage of the information that can be pushed beyond information directly related to the selected keywords.
With further reference to fig. 5, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of an apparatus for pushing information, which corresponds to the method embodiment shown in fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 5, the apparatus 500 for pushing information provided by the present embodiment includes an obtaining unit 501, a determining unit 502, a selecting unit 503, and a pushing unit 504. The obtaining unit 501 is configured to obtain voice data of a first target user corresponding to the keywords in a preset keyword set, to obtain a voice data set. The determining unit 502 is configured to, for voice data in the voice data set, obtain at least one matching voice data of the keyword corresponding to the voice data, and determine the similarity between the voice data and the at least one matching voice data. The selecting unit 503 is configured to select a target number of voice data from the voice data set in descending order of the corresponding similarity. The pushing unit 504 is configured to determine information to be pushed according to the keyword corresponding to the selected voice data, and push the determined information to be pushed to the terminal device corresponding to the first target user.
In the present embodiment, in the apparatus 500 for pushing information: the specific processing of the obtaining unit 501, the determining unit 502, the selecting unit 503 and the pushing unit 504 and the technical effects thereof can refer to the related descriptions of step 201, step 202, step 203 and step 204 in the corresponding embodiment of fig. 2, which are not described herein again.
In some optional implementations of the present embodiment, the determining unit 502 is further configured to: determining the similarity between the voice data and the matched voice data in at least one matched voice data respectively to obtain a similarity set; and determining the similarity between the voice data and at least one matching voice data according to the obtained similarity set.
In some optional implementations of the present embodiment, the determining unit 502 is further configured to: acquiring weighted values corresponding to the matched voice data in at least one matched voice data; and determining the weighted average of the similarity in the obtained similarity set as the similarity of the voice data and at least one matching voice data.
In some optional implementations of this embodiment, the at least one matching voice data is determined by: acquiring, as candidate voice data, voice data entered by at least one second target user for the at least one keyword corresponding to the matching voice data, to obtain a candidate voice data set, wherein target information has been pushed to the terminal device corresponding to the second target user, the target information being determined according to the at least one keyword corresponding to the matching voice data; for candidate voice data in the candidate voice data set, acquiring user behavior data of the second target user corresponding to the candidate voice data with respect to the target information; and in response to determining that the user behavior data meets a preset condition, selecting the candidate voice data as matching voice data.
In some optional implementations of the present embodiment, the pushing unit 504 is further configured to: determine the information to be pushed according to the historical push information of the users corresponding to the at least one matching voice data of the keyword corresponding to the selected voice data.
According to the apparatus provided by this embodiment of the present disclosure, the obtaining unit obtains voice data of a first target user corresponding to the keywords in a preset keyword set, to obtain a voice data set; the determining unit, for voice data in the voice data set, acquires at least one matching voice data of the keyword corresponding to the voice data and determines the similarity between the voice data and the at least one matching voice data; the selecting unit selects a target number of voice data from the voice data set in descending order of the corresponding similarity; and the pushing unit determines information to be pushed according to the keyword corresponding to the selected voice data and pushes it to the terminal device corresponding to the first target user. The user's preferences are thus analyzed according to the similarity between the user's voice data for a keyword and the at least one matching voice data corresponding to that keyword, and information can be pushed to the user according to the analysis result, avoiding pushing the user much information that the user is not interested in and will not click, and the unnecessary traffic consumption this would cause on both the user terminal and the server side.
Referring now to FIG. 6, a schematic diagram of an electronic device (e.g., the server of FIG. 1) 600 suitable for use in implementing embodiments of the present disclosure is shown. The server shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 6, the electronic device 600 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 602 or a program loaded from a storage means 608 into a random access memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic device 600 are also stored. The processing means 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 6 may represent one device or may represent multiple devices as desired.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including, but not limited to, electrical wires, optical cables, RF (radio frequency), or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive voice data that is input by a first target user and corresponds to keywords in a preset keyword set, to obtain a voice data set; for each piece of voice data in the voice data set, acquire at least one piece of matching voice data for the keyword corresponding to that voice data; determine a similarity between the voice data and the at least one piece of matching voice data; select a target number of pieces of voice data from the voice data set in descending order of the corresponding similarities; and determine information to be pushed according to the keywords corresponding to the selected voice data, and push the determined information to the terminal device corresponding to the first target user.
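The pushing flow enumerated above can be sketched as follows. This is a minimal, self-contained illustration only: the reference tables, the string-based `similarity` function, and every identifier below are hypothetical stand-ins introduced for this sketch (a real system would compare acoustic features of the voice data, and nothing here is part of the disclosure itself).

```python
from difflib import SequenceMatcher

# Hypothetical matching voice data for each keyword in the preset keyword set
# (toy strings standing in for stored voice recordings).
MATCHING_VOICE_DATA = {
    "weather": ["weather", "whether"],
    "news": ["news", "newz"],
    "music": ["musik"],
}

# Hypothetical mapping from a keyword to the information to be pushed.
PUSH_INFO = {
    "weather": "weather forecast",
    "news": "news digest",
    "music": "playlist",
}


def similarity(a, b):
    # Toy stand-in: string similarity instead of acoustic similarity.
    return SequenceMatcher(None, a, b).ratio()


def select_push_info(voice_data_set, target_number):
    """voice_data_set: (keyword, voice_data) pairs input by the first target
    user, one pair per keyword in the preset keyword set."""
    scored = []
    for keyword, voice in voice_data_set:
        # Acquire at least one piece of matching voice data for the keyword,
        # then score the user's voice data against its best match.
        matches = MATCHING_VOICE_DATA[keyword]
        score = max(similarity(voice, m) for m in matches)
        scored.append((score, keyword))
    # Select the target number of voice data in descending similarity order.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    # Determine the information to push from the corresponding keywords.
    return [PUSH_INFO[kw] for _, kw in scored[:target_number]]


user_voice = [("weather", "weather"), ("news", "nuews"), ("music", "muzic")]
print(select_push_info(user_voice, 2))
```

Here the exact pronunciation of "weather" scores highest, so its associated information is pushed first; in practice the similarity measure, matching data store, and push targets would all be supplied by the serving system.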
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor comprising a receiving unit, a determining unit, a selecting unit, and a pushing unit. The names of these units do not, in some cases, constitute a limitation on the units themselves; for example, the receiving unit may also be described as "a unit for receiving voice data that is input by a first target user and corresponds to keywords in a preset keyword set, to obtain a voice data set".
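The four-unit decomposition above can be modeled, purely as an illustrative sketch, as a processor object composed of interchangeable callables. All class, field, and function names below, and the toy lambdas in the usage example, are hypothetical and introduced only for this sketch.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class PushingApparatus:
    # Each unit is a callable; the names mirror the units described above.
    receiving_unit: Callable    # gathers the user's voice data set
    determining_unit: Callable  # determines similarities for the set
    selecting_unit: Callable    # selects the target number of voice data
    pushing_unit: Callable      # pushes information to the terminal device

    def run(self, user, target_number):
        voice_set = self.receiving_unit(user)
        scored = self.determining_unit(voice_set)
        selected = self.selecting_unit(scored, target_number)
        return self.pushing_unit(selected)


# Toy wiring: each unit is replaced by a trivial stand-in.
app = PushingApparatus(
    receiving_unit=lambda user: [("weather", "weather"), ("news", "nuews")],
    determining_unit=lambda vs: [("weather", 0.9), ("news", 0.5)],
    selecting_unit=lambda scored, n: sorted(
        scored, key=lambda pair: pair[1], reverse=True
    )[:n],
    pushing_unit=lambda selected: [keyword for keyword, _ in selected],
)
print(app.run("user-1", 1))
```

Because the units are plain callables, any of them can be swapped for a software or hardware implementation without changing the surrounding flow, which is the point of the unit-based description.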
The foregoing description is merely a description of preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept; for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.