CN112199587A - Searching method, searching device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112199587A
CN112199587A
Authority
CN
China
Prior art keywords
search result
result
target
search
target feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011056692.3A
Other languages
Chinese (zh)
Inventor
刘根华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Original Assignee
Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Pateo Electronic Equipment Manufacturing Co Ltd filed Critical Shanghai Pateo Electronic Equipment Manufacturing Co Ltd
Priority to CN202011056692.3A priority Critical patent/CN112199587A/en
Publication of CN112199587A publication Critical patent/CN112199587A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a search method, a search apparatus, an electronic device, and a storage medium. The search method includes: acquiring voice information of a user; acquiring a first search result of the local end and a second search result of a mobile terminal according to the voice information, the mobile terminal being in communication connection with the electronic device; and ranking the first search result or the second search result, and determining a target search result according to the ranking result. With the method and the device, the searched resources are more comprehensive, so the accuracy of the final search result is improved and user requirements are better met.

Description

Searching method, searching device, electronic equipment and storage medium
Technical Field
The present application relates to the field of terminal device technologies, and in particular, to a search method, an apparatus, an electronic device, and a storage medium.
Background
With rising living standards, car ownership has increased steadily nationwide and regionally, and the automobile has become one of the preferred modes of travel. Meanwhile, advances in electronic, automation, and artificial intelligence technologies have greatly improved automobiles in many respects: for example, automated driving can effectively reduce the driver's burden, and speech recognition enables more personalized services for users. Currently, electronic devices can provide search services through speech recognition to meet users' resource acquisition needs, for example searching for music or electronic books; however, the resource libraries an electronic device can access are very limited, so the resources it can search are not comprehensive and the final search result is not accurate.
Disclosure of Invention
In view of the above problems, the present application provides a searching method, an apparatus, an electronic device, and a storage medium, which are beneficial to improving the comprehensiveness of the electronic device in searching resources, so as to improve the accuracy of the final search result.
In order to achieve the above object, a first aspect of the embodiments of the present application provides a search method applied to an electronic device, where the method includes:
acquiring voice information of a user;
acquiring a first search result of the local terminal and a second search result of the mobile terminal according to the voice information; the mobile terminal is in communication connection with the electronic equipment;
and ranking the first search result or the second search result, and determining a target search result according to the ranking result.
In an implementation manner of the first aspect, the obtaining a first search result of the local terminal and a second search result of the mobile terminal according to the voice information includes:
carrying out voice recognition on the voice information to obtain a voice recognition result;
searching according to the voice recognition result to obtain the first search result; and,
sending the voice recognition result to the mobile terminal based on the communication connection, so that the mobile terminal carries out searching according to the voice recognition result to obtain a second search result;
and receiving the second search result returned by the mobile terminal.
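As a loose illustration only, the four steps above might be sketched as follows; the patent specifies no implementation, so every class, function, and parameter name here is hypothetical:

```python
# Hypothetical sketch of the coordinated search: the electronic device
# recognizes the speech once, searches its local index, and forwards the
# recognition result to the connected mobile terminal.

class LocalIndex:
    """Stand-in for a searchable resource library on one device."""
    def __init__(self, resources):
        self.resources = resources

    def search(self, query):
        return [r for r in self.resources if query.lower() in r.lower()]

class MobileLink:
    """Stand-in for the Bluetooth/Wi-Fi/USB connection to the mobile terminal."""
    def __init__(self, remote_resources):
        self.remote = LocalIndex(remote_resources)
        self._pending = None

    def send(self, query):
        self._pending = query            # search request carrying the query

    def receive(self):
        return self.remote.search(self._pending)

def recognize_speech(voice_info):
    # Placeholder: a real system would run ASR on the audio signal.
    return voice_info.strip()

def coordinated_search(voice_info, local_index, mobile_link):
    query = recognize_speech(voice_info)   # speech -> voice recognition result
    first = local_index.search(query)      # first search result (local end)
    mobile_link.send(query)                # forward the recognition result
    second = mobile_link.receive()         # second search result (mobile terminal)
    return first, second
```

The point of the sketch is only the data flow: one recognition pass on the electronic device, two searches, and the second result set returned over the existing connection.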
In another implementation manner of the first aspect, after determining the target search result, the method further includes:
executing the target search result at the local end under the condition that the target search result belongs to the first search result;
and executing the target search result on the mobile terminal based on the communication connection under the condition that the target search result belongs to the second search result.
In another implementation manner of the first aspect, the ranking the first search result or the second search result, and determining a target search result according to a ranking result includes:
under the condition that the first search result is empty and the second search result is not empty, obtaining a first satisfaction degree of each search result in the second search result, ranking each search result in the second search result according to the first satisfaction degree, and determining the target search result from the ranked second search result; wherein the first satisfaction degree is used for indicating the matching degree of each search result in the second search results and the voice recognition result;
under the condition that the first search result is not empty and the second search result is empty, acquiring a second satisfaction degree of each search result in the first search result; ranking each search result in the first search results according to the second satisfaction degree, and determining the target search result from the ranked first search results; wherein the second satisfaction degree is used for indicating the matching degree of each search result in the first search results and the voice recognition result.
In another embodiment of the first aspect, the method further comprises:
under the condition that the first search result and the second search result are not empty, combining the first search result and the second search result to obtain a third search result;
acquiring a third satisfaction degree of each search result in the third search result;
ranking each search result in the third search results according to the third satisfaction degree to determine the target search result; wherein the third satisfaction degree is used for indicating the matching degree of each search result in the third search results and the voice recognition result.
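Taken together, the three cases above (only the mobile terminal found results, only the local end found results, or both did) can be sketched roughly as follows. The satisfaction function is a stand-in: the patent's preset formula appears only as an image, so any concrete scoring function here is an assumption.

```python
# Hypothetical sketch of selecting the target search result: rank whichever
# result set is non-empty; when both are non-empty, merge them (keeping
# duplicates once) and rank the union. `satisfaction` stands in for the
# patent's preset satisfaction formula.

def pick_target(first, second, satisfaction):
    if not first and not second:
        return None                        # nothing found anywhere
    if not first:
        pool = second                      # only the mobile terminal found results
    elif not second:
        pool = first                       # only the local end found results
    else:
        # third search result: union of both, duplicates kept only once
        pool = list(dict.fromkeys(first + second))
    return max(pool, key=satisfaction)     # highest satisfaction wins
```

`dict.fromkeys` preserves insertion order while dropping repeated entries, which matches the "identical results kept once" merge described above.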
In another implementation manner of the first aspect, the obtaining the first satisfaction degree of each search result in the second search result includes:
acquiring the similarity between each search result in the second search results and the voice recognition result;
dividing the second search result into a first result set, a second result set and a third result set according to the similarity and a preset value;
and calculating the first satisfaction degree of each search result in the second search result by adopting a preset formula based on the first result set, the second result set and the third result set.
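The three-way split above can be sketched directly; the similarity function and the preset value T are illustrative choices, and the satisfaction calculation itself is omitted because the patent's formula is given only as an image:

```python
# Hypothetical sketch of dividing the second search results into the first,
# second, and third result sets by comparing each result's similarity with a
# preset value T (similarity > T, == T, < T respectively).

def partition_by_similarity(results, similarity, T):
    theta1, theta2, theta3 = [], [], []
    for r in results:
        s = similarity(r)
        if s > T:
            theta1.append(r)       # first result set
        elif s == T:
            theta2.append(r)       # second result set
        else:
            theta3.append(r)       # third result set
    return theta1, theta2, theta3
```

As the description notes, T may also be an interval rather than a single value, in which case the three branches would compare against the interval's bounds instead.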
In another embodiment of the first aspect, the voice information comprises a human voice and noise; the voice recognition of the voice information to obtain a voice recognition result includes:
inputting the voice information into a voice activity detection model for voice extraction to obtain N segments of voice information; N is an integer greater than 1;
acquiring the feature sequences of the N segments of voice information to obtain N segments of feature sequences;
generating a target feature sequence according to the N segments of feature sequences;
respectively intercepting P segments of target feature subsequences and R target features from the target feature sequence; P and R are integers greater than 1;
calculating the similarity between each target feature in the P segments of target feature subsequences and the R target features to obtain R similarities of each target feature;
acquiring an updated feature subsequence of each target feature subsequence in the P target feature subsequences based on the R similarities of each target feature;
updating the target feature sequence by adopting the updated feature subsequence to obtain an updated target feature sequence;
matching the updated target feature sequence with feature sequences corresponding to a plurality of texts stored in a text database to obtain the voice recognition result; the text database is used for storing the plurality of texts and the feature sequence corresponding to each text.
In another implementation manner of the first aspect, the obtaining an updated feature subsequence of each target feature subsequence in the P target feature subsequences based on the R similarities of each target feature includes:
normalizing the R similarities of each target feature to obtain R weights of each target feature;
calculating R output features of each target feature based on each target feature and R weights of each target feature;
and summing the R output features of each target feature to obtain the updated feature of each target feature, and forming the updated feature subsequence by the updated feature of each target feature.
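The update described above resembles an attention-style weighted sum. A numeric sketch follows, with two explicit assumptions the patent does not make: cosine similarity as the similarity measure and softmax as the normalization.

```python
# Hypothetical sketch of the feature update: for each feature in a target
# feature subsequence, compute its R similarities to the R global target
# features (cosine, an assumed choice), normalize them into R weights
# (softmax, an assumed choice), form R weighted output features, and sum
# them into the updated feature.
import numpy as np

def update_subsequence(subseq, target_feats):
    # subseq: (length, d) one target feature subsequence
    # target_feats: (R, d) the R target features cut from the full sequence
    updated = []
    for f in subseq:
        sims = target_feats @ f / (
            np.linalg.norm(target_feats, axis=1) * np.linalg.norm(f) + 1e-9
        )                                             # R similarities
        weights = np.exp(sims) / np.exp(sims).sum()   # normalize -> R weights
        outputs = weights[:, None] * target_feats     # R output features
        updated.append(outputs.sum(axis=0))           # summed -> updated feature
    return np.stack(updated)
```

Each updated feature is thus a convex combination of the R target features, pulling every position toward the globally representative features before the sequence is matched against the text database.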
A second aspect of the embodiments of the present application provides a search apparatus, including:
the voice acquisition module is used for acquiring voice information of a user;
the search result acquisition module is used for acquiring a first search result of the local terminal and a second search result of the mobile terminal according to the voice information; the mobile terminal is in communication connection with the electronic equipment;
and the search result selection module is used for ranking the first search result or the second search result and determining a target search result according to the ranking result.
A third aspect of embodiments of the present application provides an electronic device, which includes an input device, an output device, and a processor, and is adapted to implement one or more instructions; and a computer storage medium storing one or more instructions adapted to be loaded by the processor and to perform the steps of:
acquiring voice information of a user;
acquiring a first search result of the local terminal and a second search result of the mobile terminal according to the voice information; the mobile terminal is in communication connection with the electronic equipment;
and ranking the first search result or the second search result, and determining a target search result according to the ranking result.
A fourth aspect of embodiments of the present application provides a computer storage medium having one or more instructions stored thereon, the one or more instructions adapted to be loaded by a processor and to perform the following steps:
acquiring voice information of a user;
acquiring a first search result of the local terminal and a second search result of the mobile terminal according to the voice information; the mobile terminal is in communication connection with the electronic equipment;
and ranking the first search result or the second search result, and determining a target search result according to the ranking result.
The above scheme of the present application has at least the following beneficial effects: compared with the prior art, the electronic device in the embodiments of the present application acquires the voice information of a user; acquires a first search result of the local end and a second search result of the mobile terminal according to the voice information, the mobile terminal being in communication connection with the electronic device; and ranks the first search result or the second search result and determines a target search result according to the ranking result. The electronic device thus searches collaboratively with the mobile terminal: the candidate results are not limited to the first search result of the electronic device alone or the second search result of the mobile terminal alone, but combine the search results of both, making the searched resources more comprehensive, improving the accuracy of the final search result, and better meeting user requirements.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of an application environment provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a searching method according to an embodiment of the present application;
fig. 3A is an exemplary diagram of a search method provided in an embodiment of the present application;
FIG. 3B is an exemplary diagram of another search method provided by an embodiment of the present application;
fig. 4A is an exemplary diagram of voice information detection provided in an embodiment of the present application;
fig. 4B is an exemplary diagram of an audio frame in N segments of voice information according to an embodiment of the present application;
fig. 5 is a schematic flowchart of another search method provided in the embodiment of the present application;
fig. 6 is a schematic structural diagram of a search apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of another search apparatus provided in the embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "comprising" and "having," and any variations thereof, as appearing in the specification, claims and drawings of this application, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. Furthermore, the terms "first," "second," and "third," etc. are used to distinguish between different objects and are not used to describe a particular order.
The embodiment of the present application provides a search method, which can be implemented based on an application environment shown in fig. 1, where as shown in fig. 1, the application environment includes an electronic device and at least one mobile terminal that establishes a communication connection with the electronic device, where the electronic device includes at least a communication module, a processor, a display module, and a voice acquisition module, the mobile terminal also includes the communication module, the processor, the display module, and the voice acquisition module, and both the communication modules of the electronic device and the mobile terminal are provided with a data protocol interface, and establish a communication connection based on the data protocol interface.
Specifically, when a user within a preset range utters voice information for searching, the voice acquisition modules of both the electronic device and the mobile terminal can collect the voice information, and the processors of the two devices can each perform voice recognition on the voice information delivered by their voice acquisition modules to obtain a voice recognition result. The processor of the electronic device searches locally or at a third party based on its own voice recognition result to obtain a search result; the processor of the mobile terminal likewise searches locally or at a third party based on its own voice recognition result to obtain a search result, and sends that result to the electronic device. The processor of the electronic device then ranks its own search result together with the search result of the mobile terminal, and finally selects the better search result to be executed at the local end or at the mobile terminal. In some scenarios, the electronic device alone may acquire the user's voice information, perform voice recognition through its processor, and send the voice recognition result to the mobile terminal through the communication module, in which case the mobile terminal performs no voice recognition of its own. Since the electronic device can obtain its own search result as well as the search result of at least one mobile terminal, the search is more comprehensive and the finally selected search result better meets the user's requirements.
Based on the application environment shown in fig. 1, the following describes in detail a search method provided in an embodiment of the present application with reference to other drawings.
Referring to fig. 2, fig. 2 is a schematic flowchart of a searching method provided in an embodiment of the present application, where the method is applied to an electronic device, and the electronic device may be a vehicle-mounted terminal, a smart speaker, a wearable device, and the like, as shown in fig. 2, including steps S21-S23:
s21, acquiring the voice information of the user;
s22, acquiring a first search result of the local terminal and a second search result of the mobile terminal according to the voice information; the mobile terminal is in communication connection with the electronic equipment;
in this embodiment, the user refers to a person within a certain range of the electronic device, and the voice information refers to a voice that is sent by the user and used for searching for a specific resource, for example: the method comprises the steps of searching voices of music, movies and electronic books, wherein a first search result is a search result of the electronic equipment side, and a second search result is a search result of at least one mobile terminal. Optionally, as shown in fig. 3A, after acquiring the voice information, the electronic device may directly perform voice recognition on the voice information to obtain a voice recognition result, perform a search according to the voice recognition result to obtain a first search result, while acquiring the voice recognition result, the electronic device sends a search request to the mobile terminal based on the communication connection with the mobile terminal, where the search request carries the voice recognition result, and after receiving the search request, the mobile terminal parses the search request to obtain a voice recognition result, then performs a search according to the voice recognition result to obtain a second search result, and then returns the second search result to the electronic device based on the communication connection, and the electronic device performs sorting and selecting operations based on the first search result and the second search result. Optionally, as shown in fig. 
3B, the electronic device and the mobile terminal both have a voice recognition function and are both in an awake state, after obtaining voice information, the electronic device and the mobile terminal respectively perform voice recognition, the electronic device searches based on a voice recognition result of the electronic device to obtain a first search result, the mobile terminal searches based on a voice recognition result of the mobile terminal to obtain a second search result, the mobile terminal sends the second search result to the electronic device based on a communication connection, and the electronic device performs sorting and selecting operations based on the first search result and the second search result. The search addresses of the electronic device and the mobile terminal are different, for example: the user opens three songs that want to search for Wangffei, the electronic device can search in the local area, the song library E and the website F, the mobile terminal can search in the local area, the song library E, the website H and the website G, and the search address of the electronic device and the search address of the mobile terminal can have intersection or not. The communication connection between the electronic device and the mobile terminal includes, but is not limited to, bluetooth, Wi-Fi, and USB (Universal Serial Bus) connection.
S23, sorting the first search result or the second search result, and determining a target search result according to the sorting result.
In a specific embodiment of the present application, after acquiring a first search result and a second search result, an electronic device ranks the first search result or the second search result, and selects a target search result according to the ranking result, where the target search result refers to an optimal search result that can better meet user requirements and can be selected by the electronic device, for example: the speech recognition result is "play a certain star (song title)", and the target search result may be the original sound of a certain singing.
In a possible implementation manner, under the condition that the first search result is empty and the second search result is not empty, obtaining a first satisfaction degree of each search result in the second search result, sorting each search result in the second search result according to the first satisfaction degree, and determining the target search result from the sorted second search result; wherein the first satisfaction degree is used for indicating the matching degree of each search result in the second search results and the voice recognition result.
Here, the first search result being empty indicates that the electronic device has found no related resource, while the second search result not being empty indicates that the mobile terminal has; in this case the target search result can only be selected from the second search results. Specifically, the electronic device calculates the similarity between each search result in the second search results and its own voice recognition result, and divides the second search results into a first result set Θ1, a second result set Θ2, and a third result set Θ3 according to the similarity and a preset value T. For example: the search results whose similarity is greater than the preset value T are placed in the same set, giving the first result set Θ1; the search results whose similarity equals the preset value T are placed in the same set, giving the second result set Θ2; and the search results whose similarity is less than the preset value T are placed in the same set, giving the third result set Θ3. Based on Θ1, Θ2, and Θ3, a preset formula is applied:

(preset formula, rendered only as an image in the original: Figure BDA0002710204700000081)

to calculate the first satisfaction of each search result in the second search results, where S(P, Q) denotes the first satisfaction of each search result in the second search results, α and β denote preset constants (α is usually 2 and β is usually 1), P denotes each search result in the second search results, Q denotes the voice recognition result, θP(Θi) denotes the closeness (i.e., degree of match) between each search result in the second search results and the i-th divided result set Θi, and θQ(Θi) denotes the closeness between the voice recognition result and the i-th divided result set Θi, with i = 1, 2, 3. The electronic device ranks the second search results by first satisfaction from high to low and takes the search result with the highest first satisfaction as the target search result. Note that the preset value here may be a specific value or a preset interval; the similarity is compared with the preset value in order to divide the second search results into several result sets, and the calculation of the first satisfaction is completed on the basis of the result sets so divided.
In a possible implementation manner, under the condition that the first search result is not empty and the second search result is empty, obtaining a second satisfaction degree of each search result in the first search result; sequencing each search result in the first search results according to the second satisfaction degree, and determining the target search result from the sequenced first search results; wherein the second satisfaction degree is used for indicating the matching degree of each search result in the first search results and the voice recognition result.
Similarly to the case above in which the first search result is empty: if the second search result is empty and the first search result is not, the target search result can only be selected from the first search results. The second satisfaction of each search result in the first search results is calculated with the same preset formula, the first search results are ranked according to the second satisfaction, and the search result with the highest second satisfaction in the first search results is taken as the target search result.
In a possible implementation, in the case that neither the first search result nor the second search result is empty, the first search result is merged with the second search result, for example: differing search results are all retained, while search results identical in the first and second search results are retained only once, to obtain a third search result; a third satisfaction of each search result in the third search result is acquired; each search result in the third search results is ranked according to the third satisfaction to determine the target search result, for example: the search result with the highest third satisfaction in the third search results is taken as the target search result; wherein the third satisfaction is used for indicating the matching degree of each search result in the third search results and the voice recognition result. In this embodiment, when the search results of the local end and the mobile terminal are both non-empty, the first and second search results are merged and the target search result is selected from the merged third search result, so the search is more comprehensive and the searched resources are richer.
In one possible embodiment, the method further comprises:
and under the condition that neither the first search result nor the second search result is empty, determining the target search result from the first search result or the second search result according to the preset priorities of the electronic device and the mobile terminal. Specifically, in the case that the priority of the electronic device is higher than that of the mobile terminal, the second satisfaction of each search result in the first search result is acquired; each search result in the first search results is ranked according to the second satisfaction, and the target search result is determined from the ranked first search results. In the case that the priority of the mobile terminal is higher than that of the electronic device, the first satisfaction of each search result in the second search results is acquired, each search result in the second search results is ranked according to the first satisfaction, and the target search result is determined from the ranked second search results. In this embodiment, the target search result is selected from the first search result of the local end or the second search result of the mobile terminal according to preset priority, so a search result can be selected for execution according to the user's preference, which helps improve user experience.
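The priority rule above reduces to choosing which side's results to rank. A minimal sketch, with `satisfaction` again standing in for the patent's (image-only) scoring formula:

```python
# Hypothetical sketch of priority-based selection: when both ends return
# results, the preset priority decides whose result set is ranked, and the
# highest-satisfaction entry in that set becomes the target search result.

def pick_by_priority(first, second, satisfaction, local_has_priority):
    # Both result lists are assumed non-empty here, per the description.
    pool = first if local_has_priority else second
    return max(pool, key=satisfaction)
```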
In one possible implementation, after determining the target search result, the method further includes:
executing the target search result at the local end under the condition that the target search result belongs to the first search result;
and executing the target search result on the mobile terminal based on the communication connection under the condition that the target search result belongs to the second search result.
In the embodiment of the present application, after determining the target search result, the electronic device may decide which end executes it according to whether the target search result belongs to the first search result or the second search result. For example, suppose the target search result is finally determined to be Faye Wong's original recording of the song "Red Bean": if the song comes from the local terminal's search result, it is played directly at the local terminal; if it comes from the mobile terminal's search result, a play request is sent to the mobile terminal based on the communication connection so that the song is played on the mobile terminal.
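The dispatch described above can be sketched minimally; `play_locally` and `send_play_request` are hypothetical hooks standing in for local playback and the communication-connection request.

```python
def execute_target(target, first, second, play_locally, send_play_request):
    if target in first:
        play_locally(target)         # target came from the local terminal
    elif target in second:
        send_play_request(target)    # forward to the mobile terminal
```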
In one possible embodiment, the voice information includes a human voice and noise; the voice recognition of the voice information to obtain a voice recognition result includes:
inputting the voice information into a voice activity detection model for voice extraction to obtain N sections of voice information; n is an integer greater than 1;
acquiring the characteristic sequences of the N segments of voice information to obtain N segments of characteristic sequences;
generating a target characteristic sequence according to the N sections of characteristic sequences;
respectively intercepting P sections of target feature subsequences and R target features from the target feature sequences; p and R are integers greater than 1;
calculating the similarity between each target feature in the P-segment target feature subsequence and the R target features to obtain R similarity of each target feature;
acquiring an updated feature subsequence of each target feature subsequence in the P target feature subsequences based on the R similarity of each target feature;
updating the target characteristic sequence by adopting the updated characteristic sub-sequence to obtain an updated target characteristic sequence;
matching the updated target feature sequence with feature sequences corresponding to a plurality of texts stored in a text database to obtain the voice recognition result; the text database is used for storing the plurality of texts and the characteristic sequence corresponding to each text.
In the embodiment of the present application, for the voice information shown in fig. 4A, a Voice Activity Detection (VAD) model is used to detect the voice information and extract N segments of human voice information (3 segments in the figure: voice 2.1, voice 2.2, and voice 2.3). Each segment is input into a pre-trained neural network model, such as a bidirectional long short-term memory network, for feature extraction, yielding N feature sequences, as shown in fig. 4B. For example, if voice 2.1 contains 50 audio frames, its feature sequence is (a1, a2, a3, …, a50); if voice 2.2 contains 70 audio frames, its feature sequence is (a51, a52, …, a120); and if voice 2.3 contains 60 audio frames, its feature sequence is (a121, a122, a123, …, a180). The N feature sequences are spliced into the target feature sequence (a1, a2, a3, …, a179, a180). P target feature subsequences and R target features are then extracted from the target feature sequence. Because two adjacent feature sequences among the N may contain identical or similar features near their boundary, each extracted subsequence borrows several features from its neighbors. For example, the first of the P target feature subsequences is (a1, a2, …, a50, a51, a52, a53); the second, being a middle segment, borrows features from both the preceding and the following feature sequence, giving (a48, a49, a50, …, a121, a122, a123); and the last is (a118, a119, a120, …, a180). How many features are borrowed can be set according to the length of each segment of human voice information. The R target features are the features around the splice points: (a48, a49, a50, a51, a52, a53, a118, a119, a120, a121, a122, a123). A similarity algorithm computes the similarity between each target feature in the P target feature subsequences and the R target features, so that each target feature obtains R similarities. Processing based on the R similarities yields an updated feature subsequence for each of the P target feature subsequences; the P updated feature subsequences are spliced into the updated target feature sequence, which is matched against the feature sequences corresponding to the texts stored in a text database, and the text with the highest matching degree is the final voice recognition result. In this embodiment, the target feature sequence obtained by splicing the N feature sequences may be unsmooth at the splice points; the above processing removes the influence of the unsmooth splices, making the voice recognition result more accurate.
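The subsequence extraction with borrowed boundary features can be reproduced numerically. The sketch below uses the worked example above (segments of 50, 70, and 60 features, i.e. a1–a180) and assumes an overlap width of 3, which the text leaves configurable; features are represented by their 1-based indices.

```python
def split_with_overlap(seg_lens, overlap=3):
    """Return the P overlapping target feature subsequences and the
    R target features around the splice points, as index lists."""
    bounds, start = [], 1
    for ln in seg_lens:
        bounds.append((start, start + ln - 1))
        start += ln
    subsequences = []
    for i, (s, e) in enumerate(bounds):
        lo = s - overlap if i > 0 else s                 # borrow from previous segment
        hi = e + overlap if i < len(bounds) - 1 else e   # borrow from next segment
        subsequences.append(list(range(lo, hi + 1)))
    seam_features = []
    for (_, e) in bounds[:-1]:                           # R features straddle each seam
        seam_features += list(range(e - overlap + 1, e + overlap + 1))
    return subsequences, seam_features
```

With `seg_lens=[50, 70, 60]` this reproduces the subsequences (a1…a53), (a48…a123), (a118…a180) and the twelve R target features a48–a53 and a118–a123 from the example.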
In a possible implementation manner, the obtaining an updated feature subsequence of each target feature subsequence in the P target feature subsequences based on the R similarities of each target feature includes:
normalizing the R similarity of each target feature to obtain R weights of each target feature;
calculating R output features of each target feature based on each target feature and R weights of each target feature;
and summing the R output features of each target feature to obtain the updated feature of each target feature, and forming the updated feature subsequence by the updated feature of each target feature.
In the embodiment of the application, an existing normalization function is used to normalize the R similarities of each target feature, yielding R weights for each target feature; each target feature is multiplied by its corresponding R weights to obtain R output features, which are summed to obtain the updated feature of that target feature; the updated features form an updated feature subsequence, and the updated feature subsequences can be spliced into the updated target feature sequence. In this embodiment, the target feature subsequences are updated by normalization and weighted summation, so the finally obtained updated target feature sequence is more coherent.
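A minimal numeric sketch of this update follows, reading the normalization-and-weighted-summation step as an attention-style update over the R target features — an interpretation, since the text is terse. Softmax stands in for the unspecified "existing normalization function".

```python
import math

def updated_feature(seam_features, similarities):
    """seam_features: values of the R target features;
    similarities: the R similarity scores of one target feature."""
    exp = [math.exp(s) for s in similarities]
    total = sum(exp)
    weights = [e / total for e in exp]                         # R weights, summing to 1
    outputs = [f * w for f, w in zip(seam_features, weights)]  # R output features
    return sum(outputs)                                        # summed -> updated feature
```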
It can be seen that, in the embodiment of the present application, the electronic device acquires the voice information of the user; acquires a first search result of the local terminal and a second search result of the mobile terminal according to the voice information, the mobile terminal being in communication connection with the electronic device; and ranks the first search result or the second search result and determines a target search result according to the ranking result. In this way, the electronic device searches collaboratively with the mobile terminal: the target is not limited to either the first search result of the electronic device or the second search result of the mobile terminal alone but is drawn from both, so the searched resources are more comprehensive, the accuracy of the final search result is improved, and user requirements are better met.
Referring to fig. 5, fig. 5 is a flowchart illustrating another searching method provided by the embodiment of the present application, which is applied to an electronic device, where the electronic device establishes a communication connection with at least one mobile terminal, as shown in fig. 5, including steps S51-S56:
s51, acquiring the voice information of the user;
s52, carrying out voice recognition on the voice information to obtain a voice recognition result;
s53, searching according to the voice recognition result to obtain the first search result;
s54, sending the voice recognition result to the mobile terminal based on the communication connection, so that the mobile terminal searches according to the voice recognition result to obtain a second search result;
s55, receiving the second search result returned by the mobile terminal;
s56, sorting the first search result or the second search result, and determining a target search result according to the sorting result.
In the scenario of collaborative search between the electronic device and the mobile terminal, steps S51-S56 use only the electronic device to recognize the voice information and take its voice recognition result as the search condition. On one hand, this reduces the consumption of the mobile terminal; on the other hand, the searched resources remain comprehensive, which improves the accuracy of the final search result. The specific implementation of the method shown in fig. 5 has already been described in the embodiment shown in fig. 2 and is not repeated here.
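Steps S51-S56 can be compacted into one flow sketch; the recognition and search hooks below are hypothetical parameters, since the patent leaves their implementations open.

```python
def search_flow(audio, recognize, local_search, remote_search, pick_target):
    text = recognize(audio)            # S51-S52: only the head unit recognizes
    first = local_search(text)         # S53: local (first) search
    second = remote_search(text)       # S54-S55: mobile searches the same text
    return pick_target(first, second)  # S56: rank and choose the target
```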
Based on the description of the above embodiment of the search method, please refer to fig. 6, fig. 6 is a schematic structural diagram of a search apparatus provided in the embodiment of the present application, and as shown in fig. 6, the apparatus includes:
the voice acquisition module 61 is used for acquiring voice information of a user;
a search result obtaining module 62, configured to obtain a first search result of the local terminal and a second search result of the mobile terminal according to the voice information; the mobile terminal is in communication connection with the electronic equipment;
and a search result selection module 63, configured to rank the first search result or the second search result, and determine a target search result according to the rank result.
In a possible implementation manner, in terms of acquiring the first search result of the local terminal and the second search result of the mobile terminal according to the voice information, the search result acquisition module 62 is specifically configured to:
carrying out voice recognition on the voice information to obtain a voice recognition result;
searching according to the voice recognition result to obtain the first search result; and,
sending the voice recognition result to the mobile terminal based on the communication connection, so that the mobile terminal carries out searching according to the voice recognition result to obtain a second search result;
and receiving the second search result returned by the mobile terminal.
In one possible embodiment, as shown in fig. 7, the apparatus further comprises an execution module 64; the execution module 64 is configured to:
executing the target search result at the local end under the condition that the target search result belongs to the first search result;
and executing the target search result on the mobile terminal based on the communication connection under the condition that the target search result belongs to the second search result.
In a possible implementation manner, in the aspect of sorting the first search result or the second search result and determining the target search result according to the sorting result, the search result selection module 63 is specifically configured to:
under the conditions that the first search result is empty and the second search result is not empty, obtaining a first satisfaction degree of each search result in the second search result, sequencing each search result in the second search result according to the first satisfaction degree, and determining the target search result from the sequenced second search result; wherein the first satisfaction degree is used for indicating the matching degree of each search result in the second search results and the voice recognition result;
under the condition that the first search result is not empty and the second search result is empty, acquiring a second satisfaction degree of each search result in the first search result; sequencing each search result in the first search results according to the second satisfaction degree, and determining the target search result from the sequenced first search results; wherein the second satisfaction degree is used for indicating the matching degree of each search result in the first search results and the voice recognition result;
in one possible implementation, the search result selection module 63 is further configured to:
under the condition that the first search result and the second search result are not empty, combining the first search result and the second search result to obtain a third search result;
acquiring a third satisfaction degree of each search result in the third search result;
sequencing each search result in the third search results according to the third satisfaction degree to determine the target search result; wherein the third satisfaction degree is used for indicating the matching degree of each search result in the third search results and the voice recognition result.
In a possible implementation manner, in terms of obtaining the first satisfaction degree of each search result in the second search result, the search result selection module 63 is specifically configured to:
acquiring the similarity between each search result in the second search results and the voice recognition result;
dividing the second search result into a first result set, a second result set and a third result set according to the similarity and a preset value;
and calculating the first satisfaction degree of each search result in the second search result by adopting a preset formula based on the first result set, the second result set and the third result set.
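The partition step might look like the following sketch. The two thresholds and the "preset formula" are not disclosed in the text, so the threshold values and the set-dependent bonus terms here are pure illustration.

```python
def partition_and_score(results, similarity, hi=0.8, lo=0.5):
    """Split results into three sets by similarity thresholds and score
    each result with an illustrative formula (base similarity + bonus)."""
    sets = {"first": [], "second": [], "third": []}
    scores = {}
    for r in results:
        s = similarity(r)
        if s >= hi:                       # high similarity -> first result set
            sets["first"].append(r)
            scores[r] = s + 0.2
        elif s >= lo:                     # medium similarity -> second set
            sets["second"].append(r)
            scores[r] = s + 0.1
        else:                             # low similarity -> third set
            sets["third"].append(r)
            scores[r] = s
    return sets, scores
```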
In one possible embodiment, the voice information includes a human voice and noise; in the aspect of performing speech recognition on the speech information to obtain a speech recognition result, the search result obtaining module 62 is specifically configured to:
inputting the voice information into a voice activity detection model for voice extraction to obtain N sections of voice information; n is an integer greater than 1;
acquiring the characteristic sequences of the N segments of voice information to obtain N segments of characteristic sequences;
generating a target characteristic sequence according to the N sections of characteristic sequences;
respectively intercepting P sections of target feature subsequences and R target features from the target feature sequences; p and R are integers greater than 1;
calculating the similarity between each target feature in the P-segment target feature subsequence and the R target features to obtain R similarity of each target feature;
acquiring an updated feature subsequence of each target feature subsequence in the P target feature subsequences based on the R similarity of each target feature;
updating the target characteristic sequence by adopting the updated characteristic sub-sequence to obtain an updated target characteristic sequence;
matching the updated target feature sequence with feature sequences corresponding to a plurality of texts stored in a text database to obtain the voice recognition result; the text database is used for storing the plurality of texts and the characteristic sequence corresponding to each text.
In a possible implementation manner, in obtaining the updated feature subsequence of each target feature subsequence in the P target feature subsequences based on the R similarities of each target feature, the search result obtaining module 62 is specifically configured to:
normalizing the R similarity of each target feature to obtain R weights of each target feature;
calculating R output features of each target feature based on each target feature and R weights of each target feature;
and summing the R output features of each target feature to obtain the updated feature of each target feature, and forming the updated feature subsequence by the updated feature of each target feature.
According to an embodiment of the present application, the units of the search apparatus shown in fig. 6 or fig. 7 may be combined, respectively or entirely, into one or several other units, or one or more of them may be further split into functionally smaller units, which can achieve the same operation without affecting the technical effect of the embodiment of the present application. The units are divided based on logical functions; in practical applications, the function of one unit may be realized by multiple units, or the functions of multiple units may be realized by one unit. In other embodiments of the present application, the search apparatus may also include other units, and these functions may be implemented with the assistance of other units or through the cooperation of multiple units.
According to another embodiment of the present application, the search apparatus shown in fig. 6 or fig. 7 may be constructed, and the search method of the embodiment of the present application implemented, by running a computer program (including program code) capable of executing the steps of the corresponding method shown in fig. 2 or fig. 5 on a general-purpose computing device, such as a computer, that includes a processing element such as a Central Processing Unit (CPU), a random access storage medium (RAM), a read-only storage medium (ROM), and a storage element. The computer program may be recorded on, for example, a computer-readable recording medium, and loaded into and executed by the above computing device via the recording medium.
Based on the description of the method embodiment and the device embodiment, the embodiment of the application further provides an electronic device. Referring to fig. 8, the electronic device includes at least a processor 81, an input device 82, an output device 83, and a computer storage medium 84. The processor 81, input device 82, output device 83, and computer storage medium 84 within the electronic device may be connected by a bus or other means.
The computer storage medium 84 may be stored in a memory of the electronic device and is used for storing a computer program comprising program instructions; the processor 81 is used for executing the program instructions stored by the computer storage medium 84. The processor 81 (or CPU) is the computing core and control core of the electronic device; it is adapted to implement one or more instructions, and in particular to load and execute the one or more instructions so as to implement the corresponding method flow or function.
In one embodiment, the processor 81 of the electronic device provided in the embodiment of the present application may be configured to perform a series of search processes:
acquiring voice information of a user;
acquiring a first search result of the local terminal and a second search result of the mobile terminal according to the voice information; the mobile terminal is in communication connection with the electronic equipment;
and sequencing the first search result or the second search result, and determining a target search result according to the sequencing result.
In another embodiment, the processor 81 executes the acquiring of the first search result of the home terminal and the second search result of the mobile terminal according to the voice information, including:
carrying out voice recognition on the voice information to obtain a voice recognition result;
searching according to the voice recognition result to obtain the first search result; and,
sending the voice recognition result to the mobile terminal based on the communication connection, so that the mobile terminal carries out searching according to the voice recognition result to obtain a second search result;
and receiving the second search result returned by the mobile terminal.
In yet another embodiment, the processor 81 is further configured to perform:
executing the target search result at the local end under the condition that the target search result belongs to the first search result;
and executing the target search result on the mobile terminal based on the communication connection under the condition that the target search result belongs to the second search result.
In another embodiment, the processor 81 performs the sorting of the first search result or the second search result, and determines a target search result according to the sorting result, including:
under the conditions that the first search result is empty and the second search result is not empty, obtaining a first satisfaction degree of each search result in the second search result, sequencing each search result in the second search result according to the first satisfaction degree, and determining the target search result from the sequenced second search result; wherein the first satisfaction degree is used for indicating the matching degree of each search result in the second search results and the voice recognition result;
under the condition that the first search result is not empty and the second search result is empty, acquiring a second satisfaction degree of each search result in the first search result; sequencing each search result in the first search results according to the second satisfaction degree, and determining the target search result from the sequenced first search results; wherein the second satisfaction degree is used for indicating the matching degree of each search result in the first search results and the voice recognition result.
In yet another embodiment, the processor 81 is further configured to:
under the condition that the first search result and the second search result are not empty, combining the first search result and the second search result to obtain a third search result;
acquiring a third satisfaction degree of each search result in the third search result;
sequencing each search result in the third search results according to the third satisfaction degree to determine the target search result; wherein the third satisfaction degree is used for indicating the matching degree of each search result in the third search results and the voice recognition result.
In another embodiment, the obtaining the first satisfaction degree of each of the second search results by the processor 81 includes:
acquiring the similarity between each search result in the second search results and the voice recognition result;
dividing the second search result into a first result set, a second result set and a third result set according to the similarity and a preset value;
and calculating the first satisfaction degree of each search result in the second search result by adopting a preset formula based on the first result set, the second result set and the third result set.
In yet another embodiment, the voice information includes a human voice and noise, and the processor 81 performs the voice recognition of the voice information to obtain a voice recognition result, including:
inputting the voice information into a voice activity detection model for voice extraction to obtain N sections of voice information; n is an integer greater than 1;
acquiring the characteristic sequences of the N segments of voice information to obtain N segments of characteristic sequences;
generating a target characteristic sequence according to the N sections of characteristic sequences;
respectively intercepting P sections of target feature subsequences and R target features from the target feature sequences; p and R are integers greater than 1;
calculating the similarity between each target feature in the P-segment target feature subsequence and the R target features to obtain R similarity of each target feature;
acquiring an updated feature subsequence of each target feature subsequence in the P target feature subsequences based on the R similarity of each target feature;
updating the target characteristic sequence by adopting the updated characteristic sub-sequence to obtain an updated target characteristic sequence;
matching the updated target feature sequence with feature sequences corresponding to a plurality of texts stored in a text database to obtain the voice recognition result; the text database is used for storing the plurality of texts and the characteristic sequence corresponding to each text.
In another embodiment, the processor 81 executes the obtaining of the updated feature subsequence of each target feature subsequence in the P target feature subsequences based on the R similarities of each target feature, including:
normalizing the R similarity of each target feature to obtain R weights of each target feature;
calculating R output features of each target feature based on each target feature and R weights of each target feature;
and summing the R output features of each target feature to obtain the updated feature of each target feature, and forming the updated feature subsequence by the updated feature of each target feature.
By way of example, the electronic device may include, but is not limited to, a processor 81, an input device 82, an output device 83, and a computer storage medium 84. It will be appreciated by those skilled in the art that the schematic diagram is merely an example of an electronic device and does not limit the electronic device, which may include more or fewer components than those shown, a combination of some components, or different components.
It should be noted that, since the processor 81 of the electronic device executes the computer program to implement the steps in the searching method, the embodiments of the searching method are all applicable to the electronic device, and all can achieve the same or similar beneficial effects.
An embodiment of the present application further provides a computer storage medium (Memory), which is a Memory device in an electronic device and is used to store programs and data. It is understood that the computer storage medium herein may include a built-in storage medium in the terminal, and may also include an extended storage medium supported by the terminal. The computer storage medium provides a storage space that stores an operating system of the terminal. Also stored in this memory space are one or more instructions, which may be one or more computer programs (including program code), suitable for loading and execution by processor 81. The computer storage medium may be a high-speed RAM memory, or may be a non-volatile memory (non-volatile memory), such as at least one disk memory; alternatively, it may be at least one computer storage medium located remotely from the processor 81. In one embodiment, one or more instructions stored in a computer storage medium may be loaded and executed by processor 81 to perform the corresponding steps described above with respect to the search method.
Illustratively, the computer program of the computer storage medium includes computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, and the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
It should be noted that, since the computer program of the computer storage medium is executed by the processor to implement the steps in the above-mentioned searching method, all the embodiments of the searching method are applicable to the computer storage medium, and can achieve the same or similar beneficial effects.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (11)

1. A search method applied to an electronic device, the method comprising:
acquiring voice information of a user;
acquiring a first search result of the local terminal and a second search result of the mobile terminal according to the voice information; the mobile terminal is in communication connection with the electronic equipment;
and sequencing the first search result or the second search result, and determining a target search result according to the sequencing result.
2. The method according to claim 1, wherein the obtaining a first search result of the local terminal and a second search result of the mobile terminal according to the voice information comprises:
carrying out voice recognition on the voice information to obtain a voice recognition result;
searching according to the voice recognition result to obtain the first search result; and,
sending the voice recognition result to the mobile terminal based on the communication connection, so that the mobile terminal carries out searching according to the voice recognition result to obtain a second search result;
and receiving the second search result returned by the mobile terminal.
3. The method of claim 1, wherein after determining the target search result, the method further comprises:
executing the target search result at the local end under the condition that the target search result belongs to the first search result;
and executing the target search result on the mobile terminal based on the communication connection under the condition that the target search result belongs to the second search result.
4. The method of claim 2, wherein the ranking the first search result or the second search result and determining the target search result according to the ranking result comprises:
under the conditions that the first search result is empty and the second search result is not empty, obtaining a first satisfaction degree of each search result in the second search result, sequencing each search result in the second search result according to the first satisfaction degree, and determining the target search result from the sequenced second search result; wherein the first satisfaction degree is used for indicating the matching degree of each search result in the second search results and the voice recognition result;
under the condition that the first search result is not empty and the second search result is empty, acquiring a second satisfaction degree of each search result in the first search result; sequencing each search result in the first search results according to the second satisfaction degree, and determining the target search result from the sequenced first search results; wherein the second satisfaction degree is used for indicating the matching degree of each search result in the first search results and the voice recognition result.
5. The method of claim 2, further comprising:
when neither the first search result nor the second search result is empty, merging the first search result and the second search result to obtain a third search result;
obtaining a third satisfaction degree of each search result in the third search result; and
ranking the search results in the third search result according to the third satisfaction degree to determine the target search result, wherein the third satisfaction degree indicates the matching degree between each search result in the third search result and the voice recognition result.
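In plain terms, claims 4 and 5 reduce to: collect whichever result lists are non-empty (merging both into the "third search result" when neither is empty), score each result against the recognition text, rank, and pick the top item. A minimal Python sketch of that flow, using `difflib` string similarity as a stand-in for the unspecified satisfaction-degree formula (the function names here are illustrative, not from the patent):

```python
from difflib import SequenceMatcher


def satisfaction(result: str, query: str) -> float:
    # Stand-in "matching degree" between one search result and the
    # speech-recognition result; the patent's actual formula is unspecified.
    return SequenceMatcher(None, result, query).ratio()


def pick_target(first, second, query):
    # Claims 4-5: merge the non-empty lists (the merged list plays the
    # role of the "third search result"), rank by satisfaction, and
    # return the best match; None when both lists are empty.
    merged = (first or []) + (second or [])
    if not merged:
        return None
    return max(merged, key=lambda r: satisfaction(r, query))


print(pick_target(["play some jazz"], ["play music", "pause playback"],
                  "play music"))  # the exact match ranks first
```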
6. The method of claim 4, wherein obtaining the first satisfaction degree of each search result in the second search result comprises:
obtaining the similarity between each search result in the second search result and the voice recognition result;
dividing the second search result into a first result set, a second result set, and a third result set according to the similarity and a preset value; and
calculating the first satisfaction degree of each search result in the second search result by using a preset formula based on the first result set, the second result set, and the third result set.
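The bucketing step of claim 6 can be sketched as follows. Note the thresholds and the choice of similarity measure are illustrative assumptions: the claim says only "a preset value", and the "preset formula" for the final satisfaction degree is not disclosed.

```python
from difflib import SequenceMatcher


def partition_results(results, query, low=0.4, high=0.8):
    # Claim 6 sketch: split the second search result into three result
    # sets by each result's similarity to the recognition text.
    # The 0.4/0.8 thresholds are hypothetical stand-ins for the
    # patent's "preset value".
    first_set, second_set, third_set = [], [], []
    for r in results:
        sim = SequenceMatcher(None, r, query).ratio()
        if sim >= high:
            first_set.append(r)
        elif sim >= low:
            second_set.append(r)
        else:
            third_set.append(r)
    return first_set, second_set, third_set
```

A per-set satisfaction degree could then be assigned by the undisclosed formula, e.g. a base score per set refined by the raw similarity.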
7. The method of claim 2, wherein the voice information comprises human voice and noise, and performing voice recognition on the voice information to obtain a voice recognition result comprises:
inputting the voice information into a voice activity detection model for voice extraction to obtain N segments of voice information, where N is an integer greater than 1;
obtaining a feature sequence of each of the N segments of voice information to obtain N feature sequences;
generating a target feature sequence from the N feature sequences;
extracting P target feature subsequences and R target features from the target feature sequence, where P and R are integers greater than 1;
calculating the similarity between each target feature in the P target feature subsequences and each of the R target features to obtain R similarities for each target feature;
obtaining an updated feature subsequence for each of the P target feature subsequences based on the R similarities of each target feature;
updating the target feature sequence with the updated feature subsequences to obtain an updated target feature sequence; and
matching the updated target feature sequence against the feature sequences corresponding to a plurality of texts stored in a text database to obtain the voice recognition result, wherein the text database stores the plurality of texts and the feature sequence corresponding to each text.
8. The method of claim 7, wherein obtaining an updated feature subsequence for each of the P target feature subsequences based on the R similarities of each target feature comprises:
normalizing the R similarities of each target feature to obtain R weights for each target feature;
calculating R output features for each target feature based on the target feature and its R weights; and
summing the R output features of each target feature to obtain an updated feature for each target feature, the updated features forming the updated feature subsequence.
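The feature update in claims 7–8 follows the shape of a self-attention step: each target feature's similarities to R reference features are normalized into weights, and the weighted features are summed into an updated feature. A sketch under two explicit assumptions not stated in the patent, namely that similarity is a dot product and that normalization is softmax:

```python
import math


def updated_feature(target, references):
    # Similarity of the target feature to each of the R reference
    # features (dot product assumed; the patent does not name a measure).
    sims = [sum(t * r for t, r in zip(target, ref)) for ref in references]
    # Normalize the R similarities into R weights (softmax assumed;
    # the claim says only "normalizing").
    exps = [math.exp(s) for s in sims]
    total = sum(exps)
    weights = [e / total for e in exps]
    # R output features = weight x reference feature; summing them
    # yields the updated feature of claim 8.
    dim = len(target)
    return [sum(w * ref[i] for w, ref in zip(weights, references))
            for i in range(dim)]


# The updated feature leans toward the reference most similar to the target.
print(updated_feature([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))
```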
9. A search apparatus, comprising:
a voice acquisition module configured to acquire voice information of a user;
a search result acquisition module configured to acquire a first search result of the local end and a second search result of a mobile terminal according to the voice information, wherein the mobile terminal is in communication connection with the electronic device; and
a search result selection module configured to rank the first search result or the second search result and determine a target search result according to the ranking result.
10. An electronic device, comprising an input device and an output device, and further comprising:
a processor adapted to implement one or more instructions; and
a computer storage medium having one or more instructions stored thereon, the one or more instructions being adapted to be loaded by the processor to perform the method of any one of claims 1-8.
11. A computer storage medium having one or more instructions stored thereon, the one or more instructions being adapted to be loaded by a processor to perform the method of any one of claims 1-8.
CN202011056692.3A 2020-09-29 2020-09-29 Searching method, searching device, electronic equipment and storage medium Pending CN112199587A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011056692.3A CN112199587A (en) 2020-09-29 2020-09-29 Searching method, searching device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112199587A true CN112199587A (en) 2021-01-08

Family

ID=74008290

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011056692.3A Pending CN112199587A (en) 2020-09-29 2020-09-29 Searching method, searching device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112199587A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103123625A (en) * 2011-11-18 2013-05-29 联想(北京)有限公司 Search method, search unit and search device
CN103246708A (en) * 2013-04-16 2013-08-14 康佳集团股份有限公司 Multi-screen interactive search method and system based on intelligent terminals
CN107357875A (en) * 2017-07-04 2017-11-17 北京奇艺世纪科技有限公司 A kind of voice search method, device and electronic equipment
CN107967333A (en) * 2017-11-28 2018-04-27 广东小天才科技有限公司 Voice search method, voice searching device and electronic equipment
CN108682415A (en) * 2018-05-23 2018-10-19 广州视源电子科技股份有限公司 voice search method, device and system
CN111611372A (en) * 2019-02-25 2020-09-01 北京嘀嘀无限科技发展有限公司 Search result sorting method and device and music searching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 208, Building 4, No. 1411 Yecheng Road, Jiading Industrial Zone, Jiading District, Shanghai 201821

Applicant after: Botai vehicle networking technology (Shanghai) Co.,Ltd.

Address before: Room 208, building 4, 1411 Yecheng Road, Jiading Industrial Zone, Jiading District, Shanghai, 201800

Applicant before: SHANGHAI PATEO ELECTRONIC EQUIPMENT MANUFACTURING Co.,Ltd.