CN107015979B - Data processing method and device and intelligent terminal - Google Patents

Data processing method and device and intelligent terminal

Info

Publication number
CN107015979B
CN107015979B (application CN201610055824.8A)
Authority
CN
China
Prior art keywords
data
query
displaying
display
tag
Prior art date
Legal status
Active
Application number
CN201610055824.8A
Other languages
Chinese (zh)
Other versions
CN107015979A (en)
Inventor
马金
Current Assignee
Banma Zhixing Network Hongkong Co Ltd
Original Assignee
Banma Zhixing Network Hongkong Co Ltd
Application filed by Banma Zhixing Network Hongkong Co Ltd
Priority to CN201610055824.8A
Publication of CN107015979A
Application granted
Publication of CN107015979B
Legal status: Active
Anticipated expiration

Classifications

    • G06F 16/9024 — Information retrieval; indexing and data structures therefor; graphs; linked lists
    • G06F 16/90324 — Information retrieval; querying; query formulation using system suggestions
    • G06F 16/90332 — Information retrieval; querying; natural language query formulation or dialogue systems
    • G06F 40/30 — Handling natural language data; semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Embodiments of the present application provide a data processing method, a data processing device, and an intelligent terminal that improve query efficiency. The method comprises the following steps: receiving audio data on a current display interface and determining query keywords according to the audio data; displaying data tags corresponding to the query keywords; and executing a response operation when a trigger on a data tag is received. The method and device can automatically match accurate query keywords for the user, improving query efficiency.

Description

Data processing method and device and intelligent terminal
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a data processing method, a data processing apparatus, and an intelligent terminal.
Background
With the development of technology, users can search the network for all kinds of resources they need, such as multimedia resources like audio and video, entertainment resources like news, live content, and games, shopping resources, and the like.
Because network resources are so abundant, the network can meet the needs of many different users, but this abundance also makes querying inconvenient: a user often has to supply accurate query terms to find the desired content quickly. If the query terms are inaccurate, much time is likely to be wasted searching for data.
Therefore, one technical problem urgently needing to be solved by those skilled in the art is to provide a data processing method, a data processing device, and an intelligent terminal that improve query efficiency.
Disclosure of Invention
The technical problem to be solved by the embodiments of the present application is to provide a data processing method to improve query efficiency.
Correspondingly, the embodiments of the present application also provide a data processing device and an intelligent terminal to ensure the implementation and application of the above method.
In order to solve the above problem, an embodiment of the present application discloses a data processing method, including: receiving audio data on a current display interface, and determining query keywords according to the audio data; displaying a data tag corresponding to the query keyword; and executing a response operation when a trigger on the data tag is received.
Optionally, the current display interface includes interface elements and function buttons, wherein the interface elements comprise a target element and the function buttons comprise a voice button.
Optionally, the receiving audio data on the current display interface, and determining a query keyword according to the audio data includes: receiving input audio data according to the triggering of the voice button; identifying the audio data and determining text data; and performing semantic recognition on the text data, and determining corresponding query keywords.
Optionally, performing semantic recognition on the text data and determining a corresponding query keyword includes: performing semantic recognition on the text data to determine semantic keywords; and determining at least one associated query keyword according to the semantic keywords.
Optionally, before displaying the data tag corresponding to the query keyword, the method further includes: generating a data tag by using the query keywords and the target elements.
Optionally, the generating the data tag by using the query keyword and the target element includes: and associating each query keyword with a target element respectively, and configuring the query keywords on the associated target elements to generate a data tag.
Optionally, when there is more than one target element, the more than one target element is arranged according to a preset format.
Optionally, displaying the data tag includes: and replacing each target element arranged in a preset format with the data label, and expanding and displaying each data label.
Optionally, the expanding and displaying each data tag includes: randomly configuring the display size of each data label; and diffusing each data label to the current display interface according to the display size for displaying.
Optionally, when the display size exceeds a size threshold, the query keyword is displayed on the data tag.
Optionally, the method further includes: and adjusting the display of each data label in the current display interface according to the input of the user.
Optionally, executing a response operation when the trigger on the data tag is received includes: when the trigger on the data tag is received, acquiring the query keyword corresponding to the data tag; and executing a search according to the query keyword and displaying the corresponding search results.
Optionally, the method further includes: performing search on the semantic keywords to determine search results; and displaying the search result while displaying the data tag.
Optionally, the method further includes: and when the search result is judged to be viewed, gathering the data label to one side of the current display interface for displaying.
Optionally, the method further includes: and updating the data label according to the newly added audio data.
The embodiment of the present application further discloses a data processing apparatus, including: the keyword determining module is used for receiving audio data on a current display interface and determining query keywords according to the audio data; the label display module is used for displaying the data labels corresponding to the query keywords; and the response module is used for executing response operation when receiving the trigger of the data tag.
Optionally, the current display interface includes interface elements and function buttons, wherein the interface elements comprise a target element and the function buttons comprise a voice button.
Optionally, the keyword determination module includes: the audio receiving submodule is used for receiving input audio data according to the triggering of the voice button; the voice recognition submodule is used for recognizing the audio data and determining text data; and the semantic recognition submodule is used for performing semantic recognition on the text data and determining corresponding query keywords.
Optionally, the semantic recognition sub-module is configured to perform semantic recognition on the text data to determine semantic keywords, and to determine at least one associated query keyword according to the semantic keywords.
Optionally, the tag display module is further configured to generate a data tag by using the query keyword and the target element.
Optionally, the tag display module includes: and the generation sub-module is used for associating each query keyword with the target element respectively, and then configuring the query keywords on the associated target elements to generate the data tags.
Optionally, when there is more than one target element, the more than one target element is arranged according to a preset format.
Optionally, the tag display module includes: and the display submodule is used for replacing each target element arranged in a preset format by the data label and expanding and displaying each data label.
Optionally, the display sub-module is configured to randomly configure the display size of each data tag; and diffusing each data label to the current display interface according to the display size for displaying.
Optionally, when the display size exceeds a size threshold, the query keyword is displayed on the data tag.
Optionally, the display sub-module is further configured to adjust display of each data tag in the current display interface according to user input.
Optionally, the response module is configured to obtain the query keyword corresponding to the data tag when a trigger on the data tag is received, and to execute a search according to the query keyword and display the corresponding search results.
Optionally, the tag display module is further configured to perform search on the semantic keyword to determine a search result; and displaying the search result while displaying the data tag.
Optionally, the response module is further configured to gather the data tag to one side of the current display interface for display when determining to view the search result.
Optionally, the tag display module is further configured to update the data tag according to the newly added audio data.
The embodiment of the present application also discloses an intelligent terminal, the intelligent terminal comprising: a memory, a display, a processor, and an input unit, wherein the input unit includes a touch screen, and the processor is configured to perform the method according to the embodiments of the present application.
Compared with the prior art, the embodiment of the application has the following advantages:
in the embodiments of the present application, audio data is received on the current display interface and the query keywords corresponding to the audio data are determined, so that the query keywords required for the query can be automatically matched for the user; the data tags corresponding to the query keywords are then displayed, so that when a trigger on a data tag is received, a response operation such as a query is executed. Accurate query keywords can thus be automatically matched for the user, and query efficiency is improved.
Drawings
FIG. 1 is a flow chart of the steps of an embodiment of a data processing method of the present application;
FIG. 2 is a flow chart of steps of another data processing method embodiment of the present application;
FIG. 3 is a first schematic diagram of a display interface in an embodiment of the present application;
FIG. 4 is a second schematic diagram of a display interface in an embodiment of the present application;
FIG. 5 is a third schematic diagram of a display interface in an embodiment of the present application;
FIG. 6 is a block diagram of an embodiment of a data processing apparatus according to the present application;
FIG. 7 is a block diagram of another data processing apparatus embodiment of the present application;
fig. 8 is a block diagram of an embodiment of an intelligent terminal according to the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
One of the core concepts of the embodiments of the present application is to provide a data processing method, a data processing device, and an intelligent terminal that improve query efficiency. Audio data is received on the current display interface and the query keywords corresponding to the audio data are determined, so that the query keywords required by the user can be automatically matched; the data tags corresponding to the query keywords are then displayed, so that when a trigger on a data tag is received, a response operation such as a query is executed. Accurate data tags can thus be automatically matched for the user, and query efficiency is improved.
In this embodiment, the data processing method can be applied to an intelligent terminal, where the intelligent terminal refers to a terminal device with a multimedia function, and the device supports audio, video, data and other functions. In this embodiment, the intelligent terminal has a touch screen, and includes an intelligent mobile terminal such as a smart phone, a tablet computer, and an intelligent wearable device, and may also be a smart television, a personal computer, and other devices having a touch screen.
Example one
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a data processing method according to the present application is shown, which may specifically include the following steps:
Step 102, receiving audio data on a current display interface, and determining query keywords according to the audio data.
In this embodiment, the intelligent terminal displays a display interface capable of interacting with the user, in particular through voice: after receiving audio data input by the user, corresponding interaction information can be fed back, and the interaction information may include various contents such as text, voice, pictures, and data entries. The current display interface thus includes interface elements and function buttons. The function buttons include a voice button and other buttons such as a writing button and a shooting button; contents such as an information input box and virtual keys are displayed after the writing button is triggered, and the camera is called to take a picture when the shooting button is triggered. The interface elements are elements displayed in the interface, such as display elements provided for aesthetic purposes, and include target elements.
Therefore, when the user performs voice input by triggering the voice button or in another way, the input audio data can be received accordingly; the audio data is then recognized, and query keywords that meet the user's needs are determined. This embodiment is based on mass data: after the query intention behind the audio data is recognized, query keywords required by the user can be matched in the mass data based on that intention, so that query keywords with both accuracy and breadth are provided to the user.
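As a rough sketch of this step, the following Python fragment traces the flow just described (speech recognition, semantic recognition of the text, then keyword matching against mass data). The helper callables `recognize_speech`, `extract_semantic_keywords`, and `match_query_keywords` are hypothetical placeholders for whatever recognition and matching services the terminal uses; they are not APIs defined by the patent.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class KeywordResult:
    semantic_keywords: List[str]  # keywords recognized from the user's utterance
    query_keywords: List[str]     # associated keywords matched against mass data


def determine_query_keywords(
    audio_data: bytes,
    recognize_speech: Callable[[bytes], str],
    extract_semantic_keywords: Callable[[str], List[str]],
    match_query_keywords: Callable[[List[str]], List[str]],
) -> KeywordResult:
    """Audio data -> text data -> semantic keywords -> associated query keywords."""
    text = recognize_speech(audio_data)          # identify the audio data, determine text data
    semantic = extract_semantic_keywords(text)   # semantic recognition of the text data
    query = match_query_keywords(semantic)       # match at least one associated query keyword
    return KeywordResult(semantic_keywords=semantic, query_keywords=query)
```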
Step 104, displaying the data tag corresponding to the query keyword.
Step 106, executing a response operation when a trigger on the data tag is received.
The data tag corresponding to each query keyword may then be determined and displayed. For example, the text content of the data tag (e.g. the query keyword) may be displayed directly in the current interface, or data tags with a circular, square, or other display appearance may be displayed in the current interface. The data tags corresponding to the query keywords may thus be displayed in a variety of different manners, which is not limited in this embodiment of the application.
For example, one display manner is to associate a query keyword with a target element among the interface elements and generate a corresponding data tag; that is, an existing interface element is associated with the query keyword, so that the existing element in the interface is updated into a data tag and the data tag is displayed in the interface. The data tags are used to respond to the query keywords: when a query keyword is to be presented, a data tag replaces a target element in the current display interface, so that the data tag corresponding to the query keyword is displayed directly.
A data tag can serve as a data entry: the user can find, among the displayed data tags, the one that meets his or her needs and then trigger it, so that when the trigger on the data tag is received, a response operation such as a query is executed.
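One possible way to picture a data tag as such a data entry is the minimal sketch below; the callback-based structure and all names are illustrative assumptions rather than the patent's implementation.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class DataTag:
    """A data entry displayed in the interface; triggering it runs a response operation."""
    query_keyword: str
    respond: Callable[[str], None]  # response operation, e.g. running a search

    def trigger(self) -> None:
        # Called when the user taps the tag on the current display interface.
        self.respond(self.query_keyword)


# Illustrative use: triggering the tag executes a search for its keyword.
tag = DataTag("dress", respond=lambda keyword: print(f"searching for: {keyword}"))
tag.trigger()  # -> searching for: dress
```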
In summary, audio data is received on the current display interface and the query keywords corresponding to the audio data are determined, so that the query keywords required for the query can be automatically matched for the user; the data tags corresponding to the query keywords are then displayed, so that when a trigger on a data tag is received, a response operation such as a query is executed. Accurate data tags can thus be automatically matched for the user, and query efficiency is improved.
Example two
Referring to fig. 2, a flowchart illustrating steps of another embodiment of a data processing method according to the present application is shown, which may specifically include the following steps:
step 202, displaying a display interface, wherein target elements in the display interface are arranged according to a preset format.
When using the intelligent terminal, the user can start an application or a control to perform the required operation, and the application or control displays its own display interface. The display interface comprises interface elements and function buttons; the interface elements comprise target elements, and when there is more than one target element, the target elements are arranged according to a preset format.
As shown in the schematic diagram of the display interface in fig. 3, the display interface includes function buttons such as a write button, a voice button, and a shooting button arranged in sequence, and also includes circular target elements arranged in a circle. The display interface supports user operations in various forms such as text, voice, and pictures, and can prompt the user, for example by displaying the text "What can I help you with?"; of course, a corresponding prompt voice may also be played.
Step 204, receiving input audio data according to the triggering of the voice button.
Step 206, identifying the audio data and determining text data.
Step 208, performing semantic recognition on the text data, and determining corresponding query keywords.
In the display interface, a user can input voice through triggering the voice button, and correspondingly, input audio data can be received according to the triggering of the voice button. And then, the audio data is identified, for example, the audio data is subjected to operations such as feature extraction and matching, and the corresponding text data is identified.
Semantic recognition is then performed on the text data, that is, the main intention of the text data is recognized, and the query keywords related to the user's intention are determined. Performing semantic recognition on the text data and determining the corresponding query keywords includes: performing semantic recognition on the text data to determine semantic keywords; and determining at least one query keyword according to the semantic keywords, where each query keyword corresponds to one data tag. Semantic recognition of the text data may, for example, involve word segmentation and similar processing and matching against semantic models, syntax models, and the like, to recognize the corresponding semantic keywords, i.e. keywords that fit the semantics of the text data; at least one query keyword is then further matched to the semantic keywords, where a query keyword is a keyword related to the semantics, such as data describing the semantic keywords.
For example, if the text data corresponding to the user's audio data is recognized as "I want to buy a lady dress", the matched semantic keywords are "lady" and "dress", and based on these semantic keywords a number of query keywords, and thus data tags, such as "star identity", "tailing", "advanced customization" and "lotus leaf edge" can be matched.
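A toy lookup table mirroring this example can make the semantic-keyword-to-query-keyword association concrete; a real system would match against mass data rather than a hard-coded dictionary, so the table below is purely illustrative.

```python
from typing import Dict, List, Tuple

# Toy association table mirroring the "lady dress" example above (illustrative only).
ASSOCIATIONS: Dict[Tuple[str, ...], List[str]] = {
    ("lady", "dress"): ["star identity", "tailing", "advanced customization", "lotus leaf edge"],
}


def match_query_keywords(semantic_keywords: List[str]) -> List[str]:
    """Return the query keywords associated with the recognized semantic keywords."""
    return ASSOCIATIONS.get(tuple(semantic_keywords), [])


print(match_query_keywords(["lady", "dress"]))
# ['star identity', 'tailing', 'advanced customization', 'lotus leaf edge']
```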
Step 210, associating each query keyword with a target element, and configuring the query keywords on the associated target elements to generate data tags.
Step 212, replacing each target element arranged in the preset format with a data tag, and expanding and displaying each data tag.
After the data tags are determined, they may be displayed to show the query keywords to the user. In this embodiment, each query keyword may be associated with a target element, that is, each query keyword is assigned a corresponding associated target element, and the query keyword is then configured on that target element to generate the corresponding data tag; in other words, the data tag is a data entry that displays the query keyword on the target element. Each target element arranged according to the preset format is then replaced by its data tag, so that the target elements arranged according to the preset format are expanded in the current display interface, i.e. the data tags are expanded and displayed.
From the user's perspective, the target elements arranged in the preset format in the display interface are expanded, and only after expansion are the query keywords displayed on them; that is, what are actually expanded are the target elements, so that data tags carrying query keywords are distributed across the display interface.
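The tag-generation and replacement step might be sketched as follows, assuming a simple `TargetElement` structure for the elements arranged in the preset format; the class and field names are assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TargetElement:
    """A decorative interface element arranged in a preset format (e.g. around a circle)."""
    element_id: int
    shape: str = "circle"


@dataclass
class KeywordTag:
    """A target element with a query keyword configured on it, i.e. a data tag."""
    element: TargetElement
    query_keyword: str


def generate_data_tags(query_keywords: List[str],
                       target_elements: List[TargetElement]) -> List[KeywordTag]:
    """Associate each query keyword with one target element and attach the keyword to it."""
    return [
        KeywordTag(element=element, query_keyword=keyword)
        for keyword, element in zip(query_keywords, target_elements)
    ]


# The generated tags replace the original target elements in the display interface.
elements = [TargetElement(element_id=i) for i in range(4)]
tags = generate_data_tags(
    ["star identity", "tailing", "advanced customization", "lotus leaf edge"], elements
)
```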
In this embodiment, the expanding and displaying each data label includes: randomly configuring the display size of each data label; and diffusing each data label to the current display interface according to the display size for displaying.
For a more attractive display, the display size of each data tag may be configured randomly: for example, if the data tags are circles, the diameters of different data tags may be configured randomly; if the data tags are polygons, size parameters such as their diagonals may be configured randomly, producing data tags of various display sizes. In addition, in this embodiment, the shape of a data tag may match the shape of its target element, and the shapes of different target elements in the display interface may be the same or different, including circles, triangles, pentagons, and other shapes. The display size of a data tag may also be configured according to other rules, such as the search popularity of the query keyword corresponding to the semantic keywords.
After the display size of each data tag is configured, the data tags of the various sizes can be randomly diffused into the current display interface for display; that is, as the data tags are expanded, they are randomly distributed in the display interface, so that randomly distributed data tags of various sizes are displayed in the display interface.
In an alternative embodiment of the present application, the query keyword is displayed on a data tag when its display size exceeds a size threshold. Because there are a plurality of query keywords and a plurality of correspondingly generated data tags, some data tags can be configured to display their query keywords while others do not, and whether a query keyword is displayed can be determined based on the display size of the data tag. This embodiment therefore configures a size threshold used to determine whether the query keyword on a data tag is displayed.
Therefore, after the display size of each data tag is randomly configured, whether the display size exceeds the size threshold can be determined: if it does not, the data tag is shown in the display interface without its query keyword; if it does, the data tag is shown in the display interface with the query keyword displayed on it.
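A minimal sketch of the random size configuration with a keyword-display threshold is shown below; the concrete size range and threshold values are invented for illustration.

```python
import random
from dataclasses import dataclass


@dataclass
class TagDisplay:
    query_keyword: str
    display_size: float = 0.0
    show_keyword: bool = False


def configure_display(tags, min_size=20.0, max_size=80.0, size_threshold=50.0):
    """Randomly size each tag; only tags above the threshold display their keyword."""
    for tag in tags:
        tag.display_size = random.uniform(min_size, max_size)  # e.g. diameter of a circular tag
        tag.show_keyword = tag.display_size > size_threshold   # keyword shown only on large tags
    return tags


tags = configure_display([TagDisplay("star identity"), TagDisplay("tailing"),
                          TagDisplay("advanced customization"), TagDisplay("lotus leaf edge")])
```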
In the embodiment of the present application, after displaying the data tag, the user may determine whether there is a required query keyword, and if there is no required query keyword, the user may continue to input audio data to describe the content of the required query (see step 230). Alternatively, the data tag in the current display interface is adjusted by sliding or the like, so as to adjust the query keyword (see step 240).
Step 230 is further included: updating the data tags according to newly added audio data. Throughout the query process, the user can input audio data at any time to supplement the description of the query content. After newly added audio data is received, recognition similar to that described above is performed to identify the semantic keywords, the query keywords are then matched, updated data tags are generated from the query keywords and the target elements, and the data tags in the display interface are updated.
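This update amounts to re-running the recognition pipeline on the new audio and regenerating the tags, roughly as in the sketch below, which reuses the hypothetical helpers from the earlier sketches.

```python
def update_data_tags(new_audio: bytes, target_elements, determine_keywords, generate_data_tags):
    """Rebuild the data tags after the user adds supplementary audio.

    `determine_keywords` is assumed to behave like determine_query_keywords() in the earlier
    sketch, and `generate_data_tags` like the tag-generation sketch; the newly generated tags
    replace those currently shown in the display interface.
    """
    result = determine_keywords(new_audio)
    return generate_data_tags(result.query_keywords, target_elements)
```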
Step 240 is further included: adjusting the display of each data tag in the current display interface according to user input. When the data tags currently showing query keywords do not contain the content the user needs, the user can act on the current display interface by sliding, shaking, and similar input; this user input is obtained and data such as the display sizes of the data tags are adjusted based on it. For example, the display size of some data tags currently showing query keywords becomes smaller so that those keywords are no longer displayed, while the display size of some data tags not showing query keywords becomes larger so that their keywords are displayed. The query keywords in the display interface are thereby adjusted, a variety of query keywords are provided to the user, user needs are met, and query efficiency is improved.
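One way to read this adjustment is as swapping which tags sit above the keyword-display threshold on each slide or shake; a sketch, assuming tag objects shaped like `TagDisplay` in the earlier sketch:

```python
import random


def on_user_adjust(tags, size_threshold=50.0):
    """Handle a slide/shake gesture by swapping which tags show their keywords.

    Some tags currently showing a keyword shrink below the threshold and hide it,
    while some hidden tags grow above the threshold and reveal theirs.
    """
    visible = [t for t in tags if t.show_keyword]
    hidden = [t for t in tags if not t.show_keyword]
    for tag in random.sample(visible, k=min(2, len(visible))):  # shrink a couple of visible tags
        tag.display_size = size_threshold * 0.6
        tag.show_keyword = False
    for tag in random.sample(hidden, k=min(2, len(hidden))):    # grow a couple of hidden tags
        tag.display_size = size_threshold * 1.4
        tag.show_keyword = True
    return tags
```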
In another optional embodiment of the present application, data tags may also be deleted, for example by clicking, dragging, and the like. For example, long-pressing a data tag can display an extended window in which options such as search and delete are shown, so that the corresponding operation can be performed on the data tag. As shown in fig. 4, a "trash can" icon may be displayed on the upper portion of the display interface; after a useless data tag is selected, it can be dragged onto the "trash can" icon to delete it. The remaining data tags may be adjusted automatically after a data tag is deleted.
Step 214, performing a search according to the keywords, and displaying the search result.
In this embodiment, the keywords include query keywords and semantic keywords, and when performing search, the search may be performed in combination with the query keywords and the semantic keywords to determine a search result. Wherein the search includes an automatic search and a search based on user feedback.
In an optional embodiment of the present application, when receiving a trigger to the data tag, performing a response operation includes: when the trigger of the data label is received, acquiring a query keyword corresponding to the data label; and executing search according to the query key words and displaying corresponding search results.
After the data tags are displayed in the display interface, the user can click or otherwise trigger a data tag of interest, thereby executing a search, i.e. a search based on user feedback. Therefore, when the trigger on a data tag is received, the query keyword corresponding to that data tag is obtained, a combined search is performed using the query keyword and the semantic keywords, the corresponding search results are obtained, and the search results are displayed in the display interface. For example, if the triggered query keyword is "star-like style", it is combined with the semantic keywords "lady" and "dress", and a "lady dress in star-like style" can be queried to obtain the corresponding search results. When the search results are displayed, the data tags may be closed, or the search results may be displayed while the data tags remain displayed, for example by configuring the transparency of the data tags and floating them over the search results; this is not limited in this embodiment.
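A sketch of this feedback-driven response is shown below; `run_search` stands in for whatever search backend is used, which the description does not specify.

```python
from typing import Callable, List


def respond_to_tag_trigger(
    triggered_keyword: str,
    semantic_keywords: List[str],
    run_search: Callable[[str], List[dict]],
) -> List[dict]:
    """Combine the triggered query keyword with the semantic keywords and search."""
    combined_query = " ".join(semantic_keywords + [triggered_keyword])  # e.g. "lady dress star-like style"
    return run_search(combined_query)


# Illustrative call with a stubbed search backend:
results = respond_to_tag_trigger(
    "star-like style", ["lady", "dress"],
    run_search=lambda q: [{"title": f"result for: {q}"}],
)
print(results)
```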
In another optional embodiment of the present application, the method further comprises: executing search according to the semantic keywords and determining a search result; and displaying the search result while displaying the data tag.
In practice, in order to improve query efficiency, some search results can be displayed while the data tags carrying the query keywords are displayed, so that both query keywords and search results are provided to the user; this is the automatic search. Therefore, after the semantic keywords are identified, a search can be performed using the semantic keywords and the corresponding search results determined, so that the search results are displayed at the same time as the data tags.
As shown in the display interface diagram of fig. 4, a data tag is displayed on the upper half of the display interface, where a part of the data tag includes a query keyword, and a target element may also be currently displayed. And the lower half part of the display interface displays the search results.
Step 216, gathering the data tags to one side of the current display interface for display when it is determined that the search results are being viewed.
After the search results are displayed, the user may browse them, for example by sliding upward to view the results. It can therefore be determined whether the search results are currently being viewed; if so, to reduce interference with viewing, the data tags can be gathered together, i.e. gathered to one side of the current display interface for display. They may be gathered to any of the upper, lower, left, or right side, and their display size may be reduced below the preset size so that the query keywords are no longer displayed, minimizing interference with viewing the search results.
As shown in FIG. 5, when the search result is viewed, the data tags are gathered to the upper part of the display interface for displaying, so that the interference on the viewing of the search result is reduced.
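The gathered state can be modelled as a small state change on each tag, roughly as below; the side, docked size, and field names are assumptions.

```python
def gather_tags(tags, side="top", docked_size=12.0):
    """Collapse all data tags to one side of the interface while results are viewed.

    The docked size is kept below the keyword-display threshold so no keywords
    are shown, minimizing interference with the search results.
    """
    for tag in tags:
        tag.display_size = docked_size
        tag.show_keyword = False
        tag.docked_side = side  # e.g. "top", "bottom", "left", or "right"
    return tags
```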
After the data tags are gathered and displayed so that the user can conveniently view the search results, if the user wants to view search results for other keywords, the position of the gathered data tags can be triggered. After the trigger is received, the data tags are unfolded again; the unfolding and display process is similar to that described above, i.e. data tags displaying query keywords are obtained by adjusting the display sizes of the data tags, which are then randomly distributed in the display interface.
According to the embodiment of the application, better query experience is provided for the user on the basis of mass data, and the method and the device can be applied to various query scenes, such as shopping query, news query, application query and the like. Taking shopping query as an example, an application or a control with an intelligent shopping guide function can be provided for a user, and based on mass data, the user can interact with the application or the control in a display interface, namely interact through texts, audios, pictures and the like, for example, direct communication through natural language indicates query intention. The application or the control can recognize user semantics based on interaction, namely, the query intention of the user is obtained, so that a corresponding data tag is generated and displayed in the display interface.
In this embodiment, target elements arranged in a certain shape may be displayed in the display interface of the application or control, so that after the query keywords are generated, data tags are generated from the query keywords and the target elements, and the target elements are replaced by the data tags, which are thereby displayed and distributed in the display interface. For display, the data tags may be expanded in the display interface like particles. The user can view more query keywords by adjusting the data tags in various ways such as sliding and shaking.
In this embodiment, when the user wants to accurately express the search intention, the user may further input the voice data again, so that the data tag is determined to be displayed based on the newly added audio data by continuing to match the query keyword. Therefore, after the query keyword is updated, the data label can be updated, and the data label displayed in the display interface is correspondingly adjusted.
After the search result is displayed in the interface, the user may view the search result in various ways such as sliding, and at this time, in order to reduce the influence on the search result, the data tag may be displayed in a gathered manner on one side of the display interface, for example, suspended at the top. After the data tag is triggered again, the data tag can be unfolded for display.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the embodiments are not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the embodiments. Further, those skilled in the art will also appreciate that the embodiments described in the specification are presently preferred and that no particular act is required of the embodiments of the application.
EXAMPLE III
On the basis of the above embodiments, the present embodiment also provides a data processing apparatus.
Referring to fig. 6, a block diagram of a data processing apparatus according to an embodiment of the present application is shown, which may specifically include the following modules:
the keyword determining module 602 is configured to receive audio data in a current display interface, and determine a query keyword according to the audio data.
And a tag display module 604, configured to display a data tag corresponding to the query keyword.
A response module 606, configured to execute a response operation when the trigger for the data tag is received.
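The three modules can be pictured as a thin composition, roughly as in the sketch below; all names are illustrative and not the apparatus's actual implementation.

```python
class DataProcessingApparatus:
    """Composes the keyword determination, tag display, and response modules."""

    def __init__(self, keyword_module, tag_module, response_module):
        self.keyword_module = keyword_module    # audio data -> query keywords
        self.tag_module = tag_module            # query keywords -> displayed data tags
        self.response_module = response_module  # tag trigger -> response operation (e.g. search)

    def on_audio(self, audio_data: bytes) -> None:
        keywords = self.keyword_module.determine(audio_data)
        self.tag_module.display(keywords)

    def on_tag_trigger(self, tag) -> None:
        self.response_module.respond(tag)
```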
In summary, audio data is received on the current display interface and the query keywords corresponding to the audio data are determined, so that the query keywords required for the query can be automatically matched for the user; the data tags corresponding to the query keywords are then displayed, so that when a trigger on a data tag is received, a response operation such as a query is executed. Accurate data tags can thus be automatically matched for the user, and query efficiency is improved.
Referring to fig. 7, a block diagram of another data processing apparatus according to another embodiment of the present application is shown, which may specifically include the following modules:
and a keyword determining module 702, configured to receive audio data on the current display interface, and determine a query keyword according to the audio data.
And a tag display module 704, configured to display a data tag corresponding to the query keyword.
A response module 706 configured to execute a response operation when the trigger for the data tag is received.
Wherein the current display interface comprises interface elements and function buttons, the interface elements comprising a target element and the function buttons comprising a voice button.
The keyword determination module 702 includes:
and the audio receiving submodule 7022 is configured to receive input audio data according to the triggering of the voice button.
And the voice recognition sub-module 7024 is configured to recognize the audio data and determine text data.
And the semantic recognition sub-module 7026 is configured to perform semantic recognition on the text data to determine corresponding query keywords.
The semantic recognition sub-module 7026 is configured to perform semantic recognition on the text data to determine semantic keywords, and to determine at least one associated query keyword according to the semantic keywords.
The tag display module 704 is further configured to generate a data tag by using the query keyword and the target element.
The tag display module 704 includes:
the generating sub-module 7042 is configured to associate each query keyword with a target element, and configure the query keyword on the associated target element to generate a data tag.
Wherein, when there is more than one target element, the more than one target elements are arranged according to a preset format.
And the display sub-module 7044 is configured to replace each target element arranged in the preset format with the data tag, and expand and display each data tag.
The display sub-module 7044 is configured to randomly configure the display size of each data tag; and diffusing each data label to the current display interface according to the display size for displaying.
Wherein the query keyword is displayed on the data tag when the display size exceeds a size threshold.
The display sub-module 7044 is further configured to adjust display of each data tag in the current display interface according to user input.
The response module 706 is configured to obtain a query keyword corresponding to the data tag when the trigger for the data tag is received; and executing search according to the query key words and displaying corresponding search results.
The tag display module 704 is further configured to perform a search on the semantic keyword to determine a search result; and displaying the search result while displaying the data tag.
The response module 706 is further configured to gather the data labels to one side of the current display interface for display when determining to view the search result.
The tag display module 704 is further configured to update the data tag according to the newly added audio data.
According to the embodiment of the application, better query experience is provided for the user on the basis of mass data, and the method and the device can be applied to various query scenes, such as shopping query, news query, application query and the like. Taking shopping query as an example, an application or a control with an intelligent shopping guide function can be provided for a user, and based on mass data, the user can interact with the application or the control in a display interface, namely interact through texts, audios, pictures and the like, for example, direct communication through natural language indicates query intention. The application or the control can recognize user semantics based on interaction, namely, the query intention of the user is obtained, so that a corresponding data tag is generated and displayed in the display interface.
In this embodiment, target elements arranged in a certain shape may be displayed in the display interface of the application or control, so that after the query keywords are generated, data tags are generated from the query keywords and the target elements, and the target elements are replaced by the data tags, which are thereby displayed and distributed in the display interface. For display, the data tags may be expanded in the display interface like particles. The user can view more query keywords by adjusting the data tags in various ways such as sliding and shaking.
In this embodiment, when the user wants to accurately express the search intention, the user may further input the voice data again, so that the data tag is determined to be displayed based on the newly added audio data by continuing to match the query keyword. Therefore, after the query keyword is updated, the data label can be updated, and the data label displayed in the display interface is correspondingly adjusted.
After the search result is displayed in the interface, the user may view the search result in various ways such as sliding, and at this time, in order to reduce the influence on the search result, the data tag may be displayed in a gathered manner on one side of the display interface, for example, suspended at the top. After the data tag is triggered again, the data tag can be unfolded for display.
Example four
On the basis of the above embodiment, the embodiment also discloses an intelligent terminal.
Referring to fig. 8, a structural block diagram of an embodiment of an intelligent terminal according to the present application is shown, which may specifically include the following modules:
this intelligent terminal 800 includes: memory 810, display 820, processor 830, and input unit 840.
The input unit 840 may be used to receive numeric or character information input by a user and a control signal. Specifically, in the embodiment of the present invention, the input unit 840 may include a touch screen 841, which may collect a touch operation of a user (for example, a user's operation on the touch screen 841 by using a finger, a stylus pen, or any other suitable object or accessory) thereon or nearby, and drive the corresponding connection device according to a preset program. Of course, the input unit 840 may include other input devices such as a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a mouse, etc., in addition to the touch screen 841.
The display 820 includes a display panel, which may optionally be configured in the form of a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) display. The touch screen may cover the display panel to form a touch display screen; when the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 830 for corresponding processing.
In an embodiment of the present invention, by calling software programs, and/or modules, and/or data stored in the memory 810, the processor 830 is configured to: receive audio data on the current display interface and determine query keywords according to the audio data; generate data tags from the query keywords and the target elements and display the data tags; and execute a response operation when a trigger on a data tag is received.
Optionally, the current display interface includes interface elements and function buttons, wherein the interface elements comprise a target element and the function buttons comprise a voice button.
Optionally, the receiving audio data on the current display interface and determining a query keyword according to the audio data includes: receiving input audio data according to the triggering of the voice button; identifying the audio data and determining text data; and performing semantic recognition on the text data, and determining corresponding query keywords.
Optionally, performing semantic recognition on the text data and determining a corresponding query keyword includes: performing semantic recognition on the text data to determine semantic keywords; and determining at least one associated query keyword according to the semantic keywords.
Optionally, before displaying the data tag corresponding to the query keyword, the method further includes: and generating a data tag by adopting the query key words and the target elements.
Optionally, the generating the data tag by using the query keyword and the target element includes: and associating each query keyword with a target element respectively, and configuring the query keywords on the associated target elements to generate a data tag.
Optionally, when there is more than one target element, the more than one target element is arranged according to a preset format.
Optionally, displaying the data tag includes: and replacing each target element arranged in a preset format with the data label, and expanding and displaying each data label.
Optionally, the expanding and displaying each data tag includes: randomly configuring the display size of each data label; and diffusing each data label to the current display interface according to the display size for displaying.
Optionally, when the display size exceeds a size threshold, the query keyword is displayed on the data tag.
Optionally, the method further includes: and adjusting the display of each data label in the current display interface according to the input of the user.
Optionally, when receiving the trigger to the data tag, the performing a response operation includes: when the trigger of the data label is received, acquiring a query keyword corresponding to the data label; and executing search according to the query key words and displaying corresponding search results.
Optionally, the method further includes: performing search on the semantic keywords to determine search results; and displaying the search result while displaying the data tag.
Optionally, the method further includes: and when the search result is judged to be viewed, gathering the data label to one side of the current display interface for displaying.
Optionally, the method further includes: and updating the data label according to the newly added audio data.
According to the embodiment of the application, better query experience is provided for the user on the basis of mass data, and the method and the device can be applied to various query scenes, such as shopping query, news query, application query and the like. Taking shopping query as an example, an application or a control with an intelligent shopping guide function can be provided for a user, and based on mass data, the user can interact with the application or the control in a display interface, namely interact through texts, audios, pictures and the like, for example, direct communication through natural language indicates query intention. The application or the control can recognize user semantics based on interaction, namely, the query intention of the user is obtained, so that a corresponding data tag is generated and displayed in the display interface.
In this embodiment, target elements arranged in a certain shape may be displayed in the display interface of the application or control, so that after the query keywords are generated, data tags are generated from the query keywords and the target elements, and the target elements are replaced by the data tags, which are thereby displayed and distributed in the display interface. For display, the data tags may be expanded in the display interface like particles. The user can view more query keywords by adjusting the data tags in various ways such as sliding and shaking.
In this embodiment, when the user wants to accurately express the search intention, the user may further input the voice data again, so that the data tag is determined to be displayed based on the newly added audio data by continuing to match the query keyword. Therefore, after the query keyword is updated, the data label can be updated, and the data label displayed in the display interface is correspondingly adjusted.
After the search result is displayed in the interface, the user may view the search result in various ways such as sliding, and at this time, in order to reduce the influence on the search result, the data tag may be displayed in a gathered manner on one side of the display interface, for example, suspended at the top. After the data tag is triggered again, the data tag can be unfolded for display.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
In a typical configuration, the computer device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media, including permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The data processing method, data processing device, and intelligent terminal provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementation of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as a limitation on the present application.

Claims (29)

1. A data processing method, comprising:
displaying a display interface, wherein the display interface comprises interface elements, and the interface elements comprise target elements arranged according to a preset format;
receiving audio data on the current display interface, and determining query keywords according to the audio data;
generating a data tag using a query keyword and a target element, and displaying the data tag corresponding to the query keyword in place of the target element;
and performing a response operation when a trigger on the data tag is received.
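
As a non-limiting sketch of the flow in claim 1 (Python; all names are hypothetical and the recognition step is stubbed out), the idea is to turn the audio into query keywords, bind each keyword to a target element to form a data tag, and react when a tag is triggered:

    from dataclasses import dataclass

    @dataclass
    class DataTag:
        query_keyword: str    # keyword carried by the tag
        target_element: str   # interface element the tag replaces

    def determine_query_keywords(audio_data: bytes) -> list[str]:
        # Stub standing in for the speech + semantic recognition of claims 3-4.
        return ["weather today", "weather forecast", "umbrella"]

    def handle_voice_query(target_elements: list[str], audio_data: bytes) -> list[DataTag]:
        keywords = determine_query_keywords(audio_data)
        # One data tag per keyword, each shown in place of one target element.
        return [DataTag(kw, el) for kw, el in zip(keywords, target_elements)]

    def respond_to_trigger(tag: DataTag) -> None:
        # Response operation: here simply report the search that would be run.
        print("searching for:", tag.query_keyword)

Calling handle_voice_query(["card 1", "card 2", "card 3"], b"") would yield three tags, one per card, ready to be rendered in place of them.
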
2. The method of claim 1, wherein the current display interface further comprises: a function button comprising a voice button.
3. The method of claim 2, wherein receiving audio data on the current display interface and determining query keywords according to the audio data comprises:
receiving input audio data according to a trigger on the voice button;
recognizing the audio data to determine text data;
and performing semantic recognition on the text data to determine corresponding query keywords.
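
A sketch of the two-stage pipeline in claim 3; speech_to_text stands in for whatever speech-recognition engine the terminal uses, and the stop-word filter is only a toy stand-in for semantic recognition:

    def speech_to_text(audio_data: bytes) -> str:
        # Placeholder for an actual speech-recognition backend.
        return "what is the weather like in Beijing tomorrow"

    def semantic_recognition(text: str) -> list[str]:
        # Toy semantic step: keep only content-bearing words as query keywords.
        stop_words = {"what", "is", "the", "like", "in", "a", "an"}
        return [word for word in text.split() if word.lower() not in stop_words]

    def query_keywords_from_audio(audio_data: bytes) -> list[str]:
        text = speech_to_text(audio_data)     # recognize the audio, obtain text data
        return semantic_recognition(text)     # semantic recognition on the text data
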
4. The method of claim 3, wherein performing semantic recognition on the text data and determining corresponding query keywords comprises:
performing semantic recognition on the text data to determine semantic keywords;
and determining at least one associated query keyword according to the semantic keywords.
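
Claim 4 expands a semantic keyword into one or more associated query keywords. A purely illustrative reading with a hard-coded association table (in practice this could be a knowledge base or a query-suggestion service):

    # Hypothetical association table from semantic keywords to query keywords.
    ASSOCIATIONS = {
        "weather": ["weather today", "weather forecast", "air quality"],
        "movie": ["movie tickets", "cinemas nearby", "new releases"],
    }

    def associated_query_keywords(semantic_keywords: list[str]) -> list[str]:
        expanded: list[str] = []
        for keyword in semantic_keywords:
            # Fall back to the semantic keyword itself if nothing is associated.
            expanded.extend(ASSOCIATIONS.get(keyword, [keyword]))
        return expanded
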
5. The method of claim 1, wherein generating the data tag using the query keyword and the target element comprises:
associating each query keyword with a respective target element, and configuring each query keyword on its associated target element to generate a data tag.
6. The method of claim 5, wherein, when there is more than one target element, the target elements are arranged according to the preset format.
7. The method of claim 6, wherein displaying the data tag corresponding to the query keyword in place of the target element comprises:
replacing each target element arranged in the preset format with a data tag, and displaying each data tag in an expanded manner.
8. The method of claim 7, wherein displaying each data tag in an expanded manner comprises:
randomly configuring a display size for each data tag;
and spreading each data tag across the current display interface according to its display size for display.
9. The method of claim 8, wherein a query keyword is displayed on the data tag when the display size exceeds a size threshold.
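
Claims 8-9 describe spreading the tags with random display sizes and printing the keyword text only on tags large enough to carry it. A sketch, with the size range and threshold chosen arbitrarily:

    import random

    SIZE_THRESHOLD = 40  # hypothetical threshold, in arbitrary display units

    def layout_tags(query_keywords: list[str]) -> list[dict]:
        rendered = []
        for keyword in query_keywords:
            size = random.randint(20, 80)   # claim 8: random display size
            rendered.append({
                "keyword": keyword,
                "size": size,
                "show_keyword": size > SIZE_THRESHOLD,  # claim 9: text only if large enough
            })
        return rendered
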
10. The method of claim 7, further comprising:
and adjusting the display of each data tag in the current display interface according to user input.
11. The method of claim 1, wherein performing a response operation when a trigger on the data tag is received comprises:
when the trigger on the data tag is received, acquiring the query keyword corresponding to the data tag;
and performing a search according to the query keyword and displaying the corresponding search result.
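
The response operation of claim 11, sketched with a stand-in search backend; only the trigger, keyword lookup, search, and display chain is taken from the claim:

    def search_backend(query: str) -> list[str]:
        # Placeholder for the terminal's actual search capability.
        return [f"result {i} for '{query}'" for i in range(1, 4)]

    def respond_to_tag_trigger(tag_keyword: str) -> list[str]:
        # Obtain the keyword bound to the triggered tag, search with it,
        # and return the results for the display layer to show.
        return search_backend(tag_keyword)
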
12. The method of claim 4, further comprising:
performing a search on the semantic keywords to determine a search result;
and displaying the search result while displaying the data tag.
13. The method of claim 11 or 12, further comprising:
and when it is determined that the search result is being viewed, gathering the data tags to one side of the current display interface for display.
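
Claims 12-13 have the search result shown alongside the tags, with the tags gathered to one side once the user is viewing the result. A toy state holder, where "gathering to one side" is reduced to a docked flag:

    class TagPanel:
        def __init__(self, query_keywords: list[str]):
            self.query_keywords = query_keywords
            self.docked = False            # False: tags spread over the interface

        def show_with_result(self, search_result: str) -> None:
            # Claim 12: display the search result while the tags are displayed.
            print("result:", search_result)
            print("tags:", ", ".join(self.query_keywords))

        def on_result_viewed(self) -> None:
            # Claim 13: gather the tags to one side of the current interface.
            self.docked = True
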
14. The method of claim 1, further comprising:
and updating the data tags according to newly added audio data.
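
Claim 14 can be read as re-running the keyword pipeline on the newly added audio and merging the outcome into the existing tag set; a sketch with the recognition step stubbed again:

    def keywords_from_new_audio(audio_data: bytes) -> list[str]:
        # Stub standing in for the speech + semantic recognition pipeline.
        return ["restaurants nearby"]

    def update_tags(existing_keywords: list[str], new_audio: bytes) -> list[str]:
        # Extend the tag set with keywords from the new audio,
        # preserving order and dropping duplicates.
        merged = existing_keywords + keywords_from_new_audio(new_audio)
        return list(dict.fromkeys(merged))
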
15. A data processing apparatus, comprising:
the keyword determining module is used for displaying a display interface, receiving audio data on the current display interface, and determining query keywords according to the audio data, wherein the current display interface comprises interface elements, and the interface elements comprise target elements arranged according to a preset format;
the tag display module is used for generating a data tag using the query keyword and the target element, and displaying the data tag corresponding to the query keyword in place of the target element;
and the response module is used for performing a response operation when a trigger on the data tag is received.
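
The apparatus claims restate the method as three cooperating modules. A skeletal arrangement in which the class names follow the claim's module names and everything inside them is a placeholder:

    class KeywordDeterminingModule:
        def determine(self, audio_data: bytes) -> list[str]:
            # Speech recognition followed by semantic recognition (claims 17-18).
            return ["weather forecast"]

    class TagDisplayModule:
        def display(self, keywords: list[str], target_elements: list[str]) -> list[tuple[str, str]]:
            # One data tag per (keyword, target element) pair, shown in place
            # of the target elements (claims 19-21).
            return list(zip(keywords, target_elements))

    class ResponseModule:
        def respond(self, tag_keyword: str) -> None:
            # Search for the keyword bound to the triggered tag (claim 25).
            print("searching for:", tag_keyword)
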
16. The apparatus of claim 15, wherein the current display interface further comprises: a function button comprising a voice button.
17. The apparatus of claim 16, wherein the keyword determination module comprises:
the audio receiving submodule is used for receiving input audio data according to a trigger on the voice button;
the voice recognition submodule is used for recognizing the audio data to determine text data;
and the semantic recognition submodule is used for performing semantic recognition on the text data to determine corresponding query keywords.
18. The apparatus of claim 17,
the semantic recognition submodule is used for performing semantic recognition on the text data to determine semantic keywords, and for determining at least one associated query keyword according to the semantic keywords.
19. The apparatus of claim 15, wherein the tag display module comprises:
a generation submodule used for associating each query keyword with a respective target element, and configuring each query keyword on its associated target element to generate a data tag.
20. The apparatus of claim 19, wherein when there is more than one target element, the more than one target element is arranged in a preset format.
21. The apparatus of claim 20, wherein the tag display module comprises:
a display submodule used for replacing each target element arranged in the preset format with a data tag, and displaying each data tag in an expanded manner.
22. The apparatus of claim 21,
the display submodule is used for randomly configuring a display size for each data tag, and for spreading each data tag across the current display interface according to its display size for display.
23. The apparatus of claim 22, wherein a query keyword is displayed on the data tag when the display size exceeds a size threshold.
24. The apparatus of claim 21,
and the display submodule is further used for adjusting the display of each data tag in the current display interface according to user input.
25. The apparatus of claim 15,
the response module is used for acquiring the query keyword corresponding to the data tag when a trigger on the data tag is received, and for performing a search according to the query keyword and displaying the corresponding search result.
26. The apparatus of claim 18,
the tag display module is further used for performing a search on the semantic keywords to determine a search result, and for displaying the search result while displaying the data tag.
27. The apparatus of claim 25 or 26,
and the response module is further used for gathering the data tags to one side of the current display interface for display when it is determined that the search result is being viewed.
28. The apparatus of claim 15,
and the tag display module is further used for updating the data tags according to newly added audio data.
29. An intelligent terminal, wherein the intelligent terminal comprises: a memory, a display, a processor, and an input unit, wherein the input unit comprises a touch screen;
and the processor is configured to perform the method of any one of claims 1 to 14.
CN201610055824.8A 2016-01-27 2016-01-27 Data processing method and device and intelligent terminal Active CN107015979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610055824.8A CN107015979B (en) 2016-01-27 2016-01-27 Data processing method and device and intelligent terminal

Publications (2)

Publication Number Publication Date
CN107015979A CN107015979A (en) 2017-08-04
CN107015979B true CN107015979B (en) 2021-04-06

Family

ID=59439245

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610055824.8A Active CN107015979B (en) 2016-01-27 2016-01-27 Data processing method and device and intelligent terminal

Country Status (1)

Country Link
CN (1) CN107015979B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109407916A (en) * 2018-08-27 2019-03-01 华为技术有限公司 Method, terminal, user images display interface and the storage medium of data search
CN110020411B (en) * 2019-03-29 2020-10-09 上海掌门科技有限公司 Image-text content generation method and equipment
CN110221747B (en) * 2019-05-21 2022-02-18 掌阅科技股份有限公司 Presentation method of e-book reading page, computing device and computer storage medium
CN110534113B (en) * 2019-08-26 2021-08-24 深圳追一科技有限公司 Audio data desensitization method, device, equipment and storage medium
CN111556371A (en) * 2020-05-20 2020-08-18 维沃移动通信有限公司 Note recording method and electronic equipment
CN112307294B (en) * 2020-11-02 2024-06-25 北京搜狗科技发展有限公司 Data processing method and device
CN116189400A (en) * 2022-12-30 2023-05-30 深圳云天励飞技术股份有限公司 Information pushing method, device, computer equipment and storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101071375A (en) * 2007-05-22 2007-11-14 腾讯科技(深圳)有限公司 Interface development system and control combing method
CN101404035A (en) * 2008-11-21 2009-04-08 北京得意音通技术有限责任公司 Information search method based on text or voice
US8903793B2 (en) * 2009-12-15 2014-12-02 At&T Intellectual Property I, L.P. System and method for speech-based incremental search
US8515984B2 (en) * 2010-11-16 2013-08-20 Microsoft Corporation Extensible search term suggestion engine
CN103268345A (en) * 2013-05-27 2013-08-28 慈文传媒集团股份有限公司 Method and device for retrieving film and television data
US9405855B2 (en) * 2014-03-27 2016-08-02 Sap Ag Processing diff-queries on property graphs
CN104102723B (en) * 2014-07-21 2017-07-25 百度在线网络技术(北京)有限公司 Search for content providing and search engine
CN104281699B (en) * 2014-10-15 2017-11-17 百度在线网络技术(北京)有限公司 Method and device is recommended in search

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103339597A (en) * 2010-10-30 2013-10-02 Blekko, Inc. Transforming search engine queries
CN103577524A (en) * 2012-07-30 2014-02-12 SAP AG Business object representations and detail boxes display
CN104781813A (en) * 2012-11-12 2015-07-15 Facebook, Inc. Grammar model for structured search queries

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Several Methods for Beautifying Java Graphical User Interfaces; Ouyang Guixiu; Computer Knowledge and Technology (《电脑知识与技术》); Aug. 31, 2015; Vol. 11, No. 24; pp. 54-55 *
A Parallel Solution and Implementation for Trade Map Label Drawing; Wang Shuwei et al.; Computer Science (《计算机科学》); Mar. 31, 2009; Vol. 36, No. 3; pp. 273-276 *

Also Published As

Publication number Publication date
CN107015979A (en) 2017-08-04

Similar Documents

Publication Publication Date Title
CN107015979B (en) Data processing method and device and intelligent terminal
US11221819B2 (en) Extendable architecture for augmented reality system
US20170024226A1 (en) Information processing method and electronic device
CN108334371B (en) Method and device for editing object
US10402470B2 (en) Effecting multi-step operations in an application in response to direct manipulation of a selected object
TW201923547A (en) Processing method, device, apparatus, and machine-readable medium
KR102270953B1 (en) Method for display screen in electronic device and the device thereof
CN104281656B (en) The method and apparatus of label information are added in the application
WO2016082598A1 (en) Method, apparatus, and device for rapidly searching for application program
US20140245205A1 (en) Keyboard navigation of user interface
US20160179899A1 (en) Method of providing content and electronic apparatus performing the method
RU2643437C2 (en) Method and apparatus for selecting information
TW201923630A (en) Processing method, device, apparatus, and machine-readable medium
CN105589852B (en) A kind of method and apparatus of information recommendation
CN109388309B (en) Menu display method, device, terminal and storage medium
CN109144285A (en) A kind of input method and device
US10650814B2 (en) Interactive question-answering apparatus and method thereof
Lavid Ben Lulu et al. Functionality-based clustering using short textual description: Helping users to find apps installed on their mobile device
WO2023087934A1 (en) Voice control method, apparatus, device, and computer storage medium
CN109683760B (en) Recent content display method, device, terminal and storage medium
CN105183763A (en) Background realization method and apparatus for search result page
CN114936000A (en) Vehicle-mounted machine interaction method, system, medium and equipment based on picture framework
Łobaziewicz The design of B2B system user interface for mobile systems
US11100180B2 (en) Interaction method and interaction device for search result
US20140181672A1 (en) Information processing method and electronic apparatus

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1240367

Country of ref document: HK

TA01 Transfer of patent application right

Effective date of registration: 20201223

Address after: Room 603, 6/F, Roche Plaza, 788 Cheung Sha Wan Road, Kowloon, China

Applicant after: Zebra smart travel network (Hong Kong) Limited

Address before: P.O. Box 847, Fourth Floor, Capital Building, Grand Cayman, Cayman Islands

Applicant before: Alibaba Group Holding Ltd.

GR01 Patent grant