Detailed Description
In order to make the aforementioned objects, features, and advantages of the present application more comprehensible, the present application is described in further detail below with reference to the accompanying drawings and specific embodiments.
One of the core concepts of the embodiments of the present application is to provide a data processing method, device, and intelligent terminal that improve query efficiency. The method comprises: receiving audio data on a current display interface; determining query keywords corresponding to the audio data; automatically matching the query keywords the user requires; and displaying data tags corresponding to the query keywords, so that a response operation, such as executing a query, is performed when a trigger on a data tag is received. In this way, accurate data tags can be matched automatically for the user, and query efficiency is improved.
In this embodiment, the data processing method can be applied to an intelligent terminal, where the intelligent terminal refers to a terminal device with a multimedia function, and the device supports audio, video, data and other functions. In this embodiment, the intelligent terminal has a touch screen, and includes an intelligent mobile terminal such as a smart phone, a tablet computer, and an intelligent wearable device, and may also be a smart television, a personal computer, and other devices having a touch screen.
Example one
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a data processing method according to the present application is shown, which may specifically include the following steps:
Step 102, receiving audio data on a current display interface, and determining query keywords according to the audio data.
In this embodiment, the intelligent terminal presents a display interface capable of voice interaction with the user: after receiving audio data input by the user, it can feed back corresponding interaction information, which may include various contents such as text, voice, pictures, and data entries. The current display interface thus includes interface elements and function buttons. The function buttons include a voice button and other buttons, such as a writing button and a shooting button; contents such as an information input box and virtual keys can be displayed after the writing button is triggered, and a camera can be called to perform shooting when the shooting button is triggered. The interface elements include elements displayed in the interface, such as display elements provided for aesthetic purposes, and the interface elements include target elements.
Therefore, when the user performs voice input by triggering the voice button or in another manner, the input audio data can be received accordingly; the audio data is then recognized, and query keywords meeting the user's requirements are determined. This embodiment is based on mass data: after the user's query intention is recognized from the audio data, the query keywords the user requires can be matched in the mass data based on that intention, thereby providing the user with query keywords that are both accurate and broad.
Step 104, displaying the data tags corresponding to the query keywords.
Step 106, executing a response operation when a trigger on a data tag is received.
The data tag corresponding to each query keyword may then be determined and displayed. For example, the text content of the data tag (e.g., the query keyword) may be displayed directly in the current interface, or data tags with display shapes such as circles or squares may be displayed in the current interface. The data tags corresponding to the query keywords may be displayed in a variety of ways, which is not limited in this embodiment of the application.
For example, one display mode is to associate a query keyword with a target element among the interface elements and generate a corresponding data tag; that is, an existing interface element in the interface is associated with the query keyword, so that the existing element is updated into a data tag and displayed in the interface. The data tags are used for responding to queries on the query keywords: the data tags replace the target elements in the current display interface, so that the data tags corresponding to the query keywords are displayed directly.
A data tag can serve as a data entry: the user can look for a data tag meeting his or her needs and then trigger it, so that when the trigger on the data tag is received, a response operation, such as executing a query, is performed.
In summary, audio data is received on the current display interface and the query keywords corresponding to the audio data are determined, so that the query keywords the user requires can be matched automatically; the data tags corresponding to the query keywords are then displayed, so that a response operation, such as executing a query, is performed when a trigger on a data tag is received. Accurate data tags can thus be matched automatically for the user, and query efficiency is improved.
Example two
Referring to fig. 2, a flowchart illustrating steps of another embodiment of a data processing method according to the present application is shown, which may specifically include the following steps:
Step 202, displaying a display interface, wherein target elements in the display interface are arranged according to a preset format.
When using the intelligent terminal, the user can start an application or a control to perform a required operation, and the application or control displays its own display interface. The display interface comprises interface elements and function buttons; the interface elements comprise target elements, and when there is more than one target element, the target elements are arranged according to a preset format.
As shown in the schematic diagram of the display interface in fig. 3, the display interface includes function buttons such as a writing button, a voice button, and a shooting button arranged in sequence, and also includes circular target elements arranged in a circle. The display interface supports user operations in various forms such as text, voice, and pictures, and can give the user a prompt, such as displaying the text "What can I help you with?"; of course, a corresponding prompt voice may also be played.
Step 204, receiving input audio data according to the trigger on the voice button.
Step 206, identifying the audio data and determining text data.
Step 208, performing semantic recognition on the text data, and determining corresponding query keywords.
In the display interface, the user can input voice by triggering the voice button; correspondingly, the input audio data is received according to the trigger on the voice button. The audio data is then recognized, for example by performing operations such as feature extraction and matching, to identify the corresponding text data.
Semantic recognition is then performed on the text data, that is, the main intention of the text data is recognized, and the query keywords related to the user's intention are determined. Performing semantic recognition on the text data and determining the corresponding query keywords includes: performing semantic recognition on the text data to determine semantic keywords; and determining at least one query keyword according to the semantic keywords, where each query keyword corresponds to one data tag. Semantic recognition may, for example, perform word segmentation and other processing on the text data and match the result against a language model, a syntax model, and the like, to recognize the corresponding semantic keywords, that is, keywords matching the semantics of the text data. At least one query keyword is then further matched for the semantic keywords, where a query keyword is a keyword related to the semantics, such as data describing the semantic keywords.
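As an illustration of the recognition pipeline just described, the following sketch maps recognized text to semantic keywords and then to related query keywords. The keyword index, function names, and vocabulary are illustrative assumptions standing in for the mass-data matching; they are not part of the claimed method.

```python
# Hypothetical index from semantic keywords to related query keywords,
# standing in for matching against mass data.
QUERY_KEYWORD_INDEX = {
    "lady": ["star style", "haute couture"],
    "dress": ["star style", "train", "haute couture", "ruffled hem"],
}

def recognize_semantic_keywords(text):
    # Stand-in for word segmentation plus language/syntax-model matching:
    # keep tokens that appear in the index.
    return [token for token in text.lower().split() if token in QUERY_KEYWORD_INDEX]

def match_query_keywords(semantic_keywords):
    # Each semantic keyword contributes at least one related query keyword;
    # duplicates are collapsed while preserving first-seen order.
    seen, result = set(), []
    for kw in semantic_keywords:
        for qk in QUERY_KEYWORD_INDEX[kw]:
            if qk not in seen:
                seen.add(qk)
                result.append(qk)
    return result
```

Each query keyword returned here would then correspond to one data tag, as described above.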
For example, the text data corresponding to the user's audio data is recognized as "I want to buy a lady's dress", the matched semantic keywords are "lady" and "dress", and a plurality of query keywords such as "star style", "train", "haute couture", and "ruffled hem" can be matched based on the semantic keywords, each corresponding to a data tag.
Step 210, associating each query keyword with a target element, and configuring the query keywords on the associated target elements to generate data tags.
Step 212, replacing each target element arranged in the preset format with a data tag, and expanding and displaying each data tag.
After the data tags are determined, they can be displayed to show the query keywords to the user. In this embodiment, each query keyword may be associated with a target element; that is, each query keyword is assigned an associated target element, and the query keyword is then configured on that target element to generate the corresponding data tag. A data tag is thus a data entry that displays a query keyword on a target element. Each target element arranged according to the preset format is then replaced by its data tag, so that the target elements arranged according to the preset format are expanded in the current display interface; that is, each data tag is expanded and displayed.
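The association step above can be sketched as follows; the `TargetElement` and `DataTag` structures and the pairing rule are illustrative assumptions, since the embodiment does not prescribe a concrete data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TargetElement:
    shape: str       # e.g. "circle", matching the preset arrangement
    position: tuple  # (x, y) slot in the preset format

@dataclass
class DataTag:
    element: TargetElement  # the target element this tag replaces
    keyword: Optional[str]  # the query keyword configured on the element

def generate_data_tags(query_keywords, target_elements):
    # Associate each query keyword with one target element and configure the
    # keyword on it; leftover elements become keyword-less tags so the preset
    # arrangement stays intact.
    tags = []
    for i, element in enumerate(target_elements):
        keyword = query_keywords[i] if i < len(query_keywords) else None
        tags.append(DataTag(element=element, keyword=keyword))
    return tags
```

Replacing each element with its tag then amounts to rendering `tags` in place of `target_elements`.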
From the user's perspective, the target elements arranged in the preset format in the display interface are expanded, and the expanded target elements now display query keywords; that is, the target elements are effectively expanded into data tags carrying query keywords, distributed across the display interface.
In this embodiment, expanding and displaying each data tag includes: randomly configuring the display size of each data tag; and diffusing each data tag into the current display interface according to the display size for display.
For a more attractive display, the display size of each data tag may be configured randomly. For example, if the data tag is a circle, the diameters of different data tags may be configured randomly; if the data tag is a polygon, size data such as the diagonals of different data tags may be configured randomly, so as to generate data tags with various display sizes. In this embodiment, the shape of a data tag may be consistent with the shape of its target element, and the shapes of different target elements in the display interface may be the same or different, including various shapes such as circles, triangles, and pentagons. In addition, the display size of a data tag may also be configured according to other rules, such as the search popularity of the corresponding query keyword.
After the display size of each data tag is configured, the data tags of various sizes can be randomly diffused into the current display interface for display; that is, in the process of expanding the data tags, they are randomly distributed in the display interface. Randomly distributed data tags of various sizes are thus displayed in the display interface.
In an optional embodiment of the present application, the query keyword is displayed on a data tag when its display size exceeds a size threshold. Because there are a plurality of query keywords and a plurality of correspondingly generated data tags, part of the data tags can be configured to display their query keywords while the rest do not, and whether a query keyword is displayed can be determined based on the display size of its data tag. This embodiment therefore configures a size threshold used to determine whether a data tag displays its query keyword.
Therefore, after the display size of each data tag is randomly configured, whether the size exceeds the size threshold can be determined: if it does not, the data tag is displayed in the interface without its query keyword; if it does, the data tag is displayed with its query keyword shown on it.
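A minimal sketch of the random sizing and the size threshold, assuming arbitrary display units and a hypothetical threshold of 40; only tags whose size exceeds the threshold show their query keyword.

```python
import random

SIZE_THRESHOLD = 40  # illustrative threshold, in display units

def configure_tag_sizes(keywords, min_size=20, max_size=80, rng=None):
    # Randomly size one tag per keyword; a tag shows its query keyword
    # only when its display size exceeds the size threshold.
    rng = rng or random.Random()
    tags = []
    for keyword in keywords:
        size = rng.randint(min_size, max_size)
        tags.append({
            "keyword": keyword,
            "size": size,
            "show_keyword": size > SIZE_THRESHOLD,
        })
    return tags
```

Passing a seeded `random.Random` makes the layout reproducible for testing; in an interface the default generator would be used.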
In the embodiment of the present application, after the data tags are displayed, the user can determine whether the required query keyword is among them. If not, the user can continue to input audio data to further describe the required query content (see step 230), or adjust the data tags in the current display interface by sliding or the like, so as to adjust the displayed query keywords (see step 240).
Step 230, updating the data tags according to newly added audio data. Throughout the query process, the user can input audio data at any time to supplement the description of the query content. After newly added audio data is received, recognition similar to the above is performed to identify semantic keywords, query keywords are then matched, updated data tags are generated from the query keywords and target elements, and the data tags in the display interface are updated.
Step 240, adjusting the display of each data tag in the current display interface according to user input. When none of the data tags carrying query keywords in the current display interface matches the content the user requires, the user can provide input in the current display interface by sliding, shaking, and the like; this user input is obtained, and data such as the display sizes of the data tags are adjusted based on it. For example, the display size of some data tags that currently display query keywords is reduced so that their keywords are no longer displayed, while the display size of some data tags that do not display query keywords is increased so that their keywords become visible. The query keywords in the display interface are thereby adjusted, providing the user with a variety of query keywords, meeting user requirements, and improving query efficiency.
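The slide/shake adjustment can be sketched as follows, reusing an illustrative size threshold of 40: tags currently showing a keyword shrink below the threshold, and hidden tags grow above it, rotating which query keywords are visible. The tag structure and the fixed `delta` are assumptions for illustration.

```python
SIZE_THRESHOLD = 40  # illustrative threshold, in display units

def adjust_tags_on_gesture(tags, delta=15):
    # Shrink tags that currently show their keyword and grow tags that do
    # not, so a different subset of query keywords becomes visible.
    for tag in tags:
        tag["size"] += -delta if tag["show_keyword"] else delta
        tag["show_keyword"] = tag["size"] > SIZE_THRESHOLD
    return tags
```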
In another optional embodiment of the present application, data tags may also be deleted, for example by clicking, dragging, and the like. For example, long-pressing a data tag can display an extended window in which options such as search and delete are shown, so that the corresponding operation is performed on the data tag. As shown in fig. 4, a "trash can" icon may be displayed in the upper portion of the display interface; after a useless data tag is selected, it can be dragged to the "trash can" icon to be deleted. The remaining data tags may be adjusted automatically after a data tag is deleted.
Step 214, performing a search according to the keywords, and displaying the search result.
In this embodiment, the keywords include query keywords and semantic keywords; when a search is performed, the query keywords and the semantic keywords may be combined to determine the search result. The search includes an automatic search and a search based on user feedback.
In an optional embodiment of the present application, executing a response operation when a trigger on the data tag is received includes: acquiring the query keyword corresponding to the data tag when the trigger on the data tag is received; and executing a search according to the query keyword and displaying the corresponding search result.
After the data tags are displayed in the display interface, the user can click or otherwise trigger a data tag of interest, thereby executing a search, namely a search based on user feedback. When the trigger on the data tag is received, the query keyword corresponding to the data tag is obtained, a combined search is performed using the query keyword and the semantic keywords, and the corresponding search result is obtained and displayed in the display interface. For example, if the triggered query keyword is "star style", it is combined with the semantic keywords "lady" and "dress", and "lady's dress in star style" can be queried to obtain the corresponding search result. When the search result is displayed, the data tags may be closed, or the search result may be displayed together with the data tags, for example by configuring the transparency of the data tags and floating them above the search result; this embodiment does not limit the display manner.
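A sketch of the combined search triggered by a data tag, with a small in-memory catalog standing in for the mass-data search backend; the item structure and tag vocabulary are assumptions for illustration.

```python
def combined_search(triggered_keyword, semantic_keywords, catalog):
    # Combine the triggered tag's query keyword with the earlier semantic
    # keywords and return catalog items matching every term.
    terms = [triggered_keyword] + list(semantic_keywords)
    return [item for item in catalog
            if all(term in item["tags"] for term in terms)]
```

For example, triggering "star style" after the semantic keywords "lady" and "dress" narrows the catalog to items tagged with all three terms.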
In another optional embodiment of the present application, the method further comprises: executing a search according to the semantic keywords and determining a search result; and displaying the search result while displaying the data tags.
In practice, to improve query efficiency, part of the search results can be displayed while the data tags carrying query keywords are displayed for the user, so that both query keywords and search results are provided; this is the automatic search. After the semantic keywords are identified, a search can be performed using the semantic keywords to determine the corresponding search result, so that the search result is displayed at the same time as the data tags.
As shown in the display interface diagram of fig. 4, data tags are displayed in the upper half of the display interface, where some of the data tags include query keywords, and target elements may also be displayed. The lower half of the display interface displays the search results.
Step 216, gathering the data tags to one side of the current display interface for display when it is determined that the search results are being viewed.
After the search results are displayed, the user may browse them, for example by sliding up to view the search results. Whether the search results are currently being viewed can therefore be determined; if so, to reduce interference with viewing, the data tags are gathered together, that is, gathered to one side of the current display interface for display. They may be gathered to any of the upper, lower, left, or right side, and their display size may be reduced below a preset size so that the query keywords are no longer displayed, minimizing interference with viewing the search results.
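The gathering behavior can be sketched as follows; the collapsed size, spacing, and edge layout are illustrative assumptions.

```python
def gather_tags(tags, side="top", collapsed_size=10, spacing=12):
    # When search results are being viewed, collapse every tag below the
    # keyword-display threshold and line the tags up along one edge.
    for i, tag in enumerate(tags):
        tag["size"] = collapsed_size
        tag["show_keyword"] = False  # collapsed tags hide their keywords
        # Pin tags in a row along the chosen edge (top or left shown here).
        tag["position"] = (i * spacing, 0) if side == "top" else (0, i * spacing)
    return tags
```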
As shown in fig. 5, when the search results are viewed, the data tags are gathered to the upper part of the display interface for display, reducing interference with viewing the search results.
After the data tags are gathered for display so that the user can view the search results conveniently, if the user needs to view search results for other keywords, the position of the gathered data tags can be triggered. After the trigger is received, the data tags are expanded again; the expansion and display process is similar to the above, that is, the display sizes of the data tags are adjusted to obtain data tags displaying query keywords, which are then randomly distributed in the display interface.
According to the embodiment of the application, a better query experience is provided for the user on the basis of mass data, and the method can be applied to various query scenarios, such as shopping queries, news queries, and application queries. Taking shopping queries as an example, an application or control with an intelligent shopping-guide function can be provided for the user. Based on mass data, the user can interact with the application or control in the display interface through text, audio, pictures, and the like, for example by directly expressing the query intention in natural language. Based on this interaction, the application or control recognizes the user's semantics, that is, obtains the user's query intention, and accordingly generates the corresponding data tags and displays them in the display interface.
In this embodiment, target elements arranged in a certain shape may be displayed in the display interface of the application or control, so that after the query keywords are generated, data tags are generated from the query keywords and the target elements, and the target elements are replaced by the data tags, which are thereby displayed and distributed in the display interface. For display, the data tags may be expanded into the display interface like particles. The user can view more query keywords by adjusting the data tags in various ways, such as sliding and shaking.
In this embodiment, when the user wants to express the search intention more precisely, the user may input voice data again, so that query keywords continue to be matched based on the newly added audio data and the data tags to display are determined accordingly. After the query keywords are updated, the data tags can be updated, and the data tags displayed in the display interface are adjusted correspondingly.
After the search results are displayed in the interface, the user may view them in various ways such as sliding; at this time, to reduce interference with viewing the search results, the data tags may be gathered for display on one side of the display interface, for example floated at the top. After the gathered data tags are triggered again, they can be expanded for display.
It should be noted that, for simplicity of description, the method embodiments are described as a series of action combinations, but those skilled in the art will recognize that the embodiments are not limited by the order of the actions described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are preferred embodiments, and the actions involved are not necessarily required by the embodiments of the application.
Example three
On the basis of the above embodiments, the present embodiment also provides a data processing apparatus.
Referring to fig. 6, a block diagram of a data processing apparatus according to an embodiment of the present application is shown, which may specifically include the following modules:
the keyword determining module 602 is configured to receive audio data in a current display interface, and determine a query keyword according to the audio data.
A tag display module 604, configured to display a data tag corresponding to the query keyword.
A response module 606, configured to execute a response operation when the trigger for the data tag is received.
In summary, audio data is received on the current display interface and the query keywords corresponding to the audio data are determined, so that the query keywords the user requires can be matched automatically; the data tags corresponding to the query keywords are then displayed, so that a response operation, such as executing a query, is performed when a trigger on a data tag is received. Accurate data tags can thus be matched automatically for the user, and query efficiency is improved.
Referring to fig. 7, a block diagram of another data processing apparatus according to another embodiment of the present application is shown, which may specifically include the following modules:
A keyword determining module 702, configured to receive audio data on the current display interface, and determine a query keyword according to the audio data.
A tag display module 704, configured to display a data tag corresponding to the query keyword.
A response module 706 configured to execute a response operation when the trigger for the data tag is received.
Wherein the current display interface comprises interface elements and function buttons, the interface elements comprising a target element, and the function buttons comprising a voice button.
The keyword determination module 702 includes:
An audio receiving sub-module 7022, configured to receive input audio data according to the trigger on the voice button.
A voice recognition sub-module 7024, configured to recognize the audio data and determine text data.
A semantic recognition sub-module 7026, configured to perform semantic recognition on the text data to determine corresponding query keywords.
The semantic recognition sub-module 7026 is configured to perform semantic recognition on the text data to determine semantic keywords, and to determine at least one associated query keyword according to the semantic keywords.
The tag display module 704 is further configured to generate a data tag by using the query keyword and the target element.
The tag display module 704 includes:
the generating sub-module 7042 is configured to associate each query keyword with a target element, and configure the query keyword on the associated target element to generate a data tag.
Wherein, when there is more than one target element, the more than one target elements are arranged according to a preset format.
A display sub-module 7044, configured to replace each target element arranged in the preset format with a data tag, and to expand and display each data tag.
The display sub-module 7044 is configured to randomly configure the display size of each data tag; and diffusing each data label to the current display interface according to the display size for displaying.
Wherein the query keyword is displayed on the data tag when the display size exceeds a size threshold.
The display sub-module 7044 is further configured to adjust display of each data tag in the current display interface according to user input.
The response module 706 is configured to acquire the query keyword corresponding to the data tag when the trigger on the data tag is received, and to execute a search according to the query keyword and display the corresponding search result.
The tag display module 704 is further configured to perform a search on the semantic keyword to determine a search result; and displaying the search result while displaying the data tag.
The response module 706 is further configured to gather the data labels to one side of the current display interface for display when determining to view the search result.
The tag display module 704 is further configured to update the data tag according to the newly added audio data.
According to the embodiment of the application, a better query experience is provided for the user on the basis of mass data, and the device can be applied to various query scenarios, such as shopping queries, news queries, and application queries. Taking shopping queries as an example, an application or control with an intelligent shopping-guide function can be provided for the user. Based on mass data, the user can interact with the application or control in the display interface through text, audio, pictures, and the like, for example by directly expressing the query intention in natural language. Based on this interaction, the application or control recognizes the user's semantics, that is, obtains the user's query intention, and accordingly generates the corresponding data tags and displays them in the display interface.
In this embodiment, target elements arranged in a certain shape may be displayed in the display interface of the application or control, so that after the query keywords are generated, data tags are generated from the query keywords and the target elements, and the target elements are replaced by the data tags, which are thereby displayed and distributed in the display interface. For display, the data tags may be expanded into the display interface like particles. The user can view more query keywords by adjusting the data tags in various ways, such as sliding and shaking.
In this embodiment, when the user wants to express the search intention more precisely, the user may input voice data again, so that query keywords continue to be matched based on the newly added audio data and the data tags to display are determined accordingly. After the query keywords are updated, the data tags can be updated, and the data tags displayed in the display interface are adjusted correspondingly.
After the search results are displayed in the interface, the user may view them in various ways such as sliding; at this time, to reduce interference with viewing the search results, the data tags may be gathered for display on one side of the display interface, for example floated at the top. After the gathered data tags are triggered again, they can be expanded for display.
Example four
On the basis of the above embodiments, this embodiment further discloses an intelligent terminal.
Referring to fig. 8, a structural block diagram of an embodiment of an intelligent terminal according to the present application is shown, which may specifically include the following modules:
this intelligent terminal 800 includes: memory 810, display 820, processor 830, and input unit 840.
The input unit 840 may be used to receive numeric or character information input by a user and a control signal. Specifically, in the embodiment of the present invention, the input unit 840 may include a touch screen 841, which may collect a touch operation of a user (for example, a user's operation on the touch screen 841 by using a finger, a stylus pen, or any other suitable object or accessory) thereon or nearby, and drive the corresponding connection device according to a preset program. Of course, the input unit 840 may include other input devices such as a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a mouse, etc., in addition to the touch screen 841.
The display 820 includes a display panel; optionally, the display panel may be configured in the form of a Liquid Crystal Display (LCD) or an Organic Light-Emitting Diode (OLED) display. The touch screen may cover the display panel to form a touch display screen; when the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 830 for corresponding processing.
In an embodiment of the present invention, the processor 830 is configured to receive audio data in the current display interface by calling a software program, and/or a module, and/or data stored in the memory 810, and determine a query keyword according to the audio data; generating a data label by adopting the query keyword and the target element, and displaying the data label; and executing response operation when the trigger to the data tag is received.
Optionally, the current display interface includes interface elements and function buttons, where the interface elements include a target element and the function buttons include a voice button.
Optionally, receiving audio data on the current display interface and determining a query keyword according to the audio data includes: receiving input audio data according to the trigger on the voice button; recognizing the audio data and determining text data; and performing semantic recognition on the text data to determine the corresponding query keywords.
Optionally, performing semantic recognition on the text data and determining a corresponding query keyword includes: performing semantic recognition on the text data to determine a semantic keyword; and determining at least one associated query keyword according to the semantic keyword.
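The keyword-determination flow described above (receive audio, convert to text, recognize a semantic keyword, expand it into associated query keywords) can be illustrated with a minimal sketch. All names below are hypothetical: a real terminal would call its speech-recognition engine and a semantic model, both of which are stubbed here with simple lookups.

```python
def transcribe(audio_data: bytes) -> str:
    """Stand-in for the speech-recognition step (hypothetical).

    A real implementation would invoke the terminal's ASR engine;
    here the raw bytes are simply decoded as if they were the transcript.
    """
    return audio_data.decode("utf-8")


# Toy association table standing in for semantic recognition plus
# keyword expansion (semantic keyword -> associated query keywords).
ASSOCIATIONS = {
    "sneakers": ["running shoes", "trainers", "sports shoes"],
    "news": ["headlines", "breaking news"],
}


def determine_query_keywords(audio_data: bytes) -> list:
    """Receive audio data and determine at least one associated query keyword."""
    text = transcribe(audio_data)
    # Semantic recognition step: pick out the semantic keyword(s) from the text.
    semantic_keywords = [w for w in text.lower().split() if w in ASSOCIATIONS]
    # Expansion step: map each semantic keyword to its associated query keywords.
    query_keywords = []
    for kw in semantic_keywords:
        query_keywords.extend(ASSOCIATIONS[kw])
    return query_keywords


keywords = determine_query_keywords(b"show me sneakers")
```

In this sketch the semantic step is a dictionary lookup purely for illustration; the embodiment itself leaves the recognition model unspecified.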
Optionally, before displaying the data tag corresponding to the query keyword, the method further includes: generating a data tag using the query keyword and the target element.
Optionally, the generating a data tag using the query keyword and the target element includes: associating each query keyword with a target element, and configuring each query keyword on its associated target element to generate a data tag.
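The pairing of query keywords with target elements can be sketched as follows. The `TargetElement` and `DataTag` structures are hypothetical illustrations, not part of the embodiment; the sketch assumes one keyword per element.

```python
from dataclasses import dataclass


@dataclass
class TargetElement:
    """A display element in the interface, with a slot in the preset arrangement."""
    position: tuple  # (x, y) slot coordinates (hypothetical representation)


@dataclass
class DataTag:
    """A query keyword configured on its associated target element."""
    keyword: str
    element: TargetElement


def generate_data_tags(keywords, elements):
    """Associate each query keyword with one target element to form a data tag.

    Assumption: pairing is one-to-one in order; surplus keywords are dropped
    and surplus elements remain plain display elements.
    """
    return [DataTag(kw, el) for kw, el in zip(keywords, elements)]


elements = [TargetElement(position=(x, 0)) for x in range(3)]
tags = generate_data_tags(["shoes", "boots", "sandals"], elements)
```

The one-to-one pairing rule is an assumption for illustration; the embodiment only requires that each query keyword end up configured on an associated target element.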
Optionally, when there is more than one target element, the target elements are arranged according to a preset format.
Optionally, displaying the data tag includes: replacing each target element arranged in the preset format with a data tag, and expanding and displaying each data tag.
Optionally, the expanding and displaying each data tag includes: randomly configuring a display size for each data tag; and diffusing each data tag across the current display interface for display according to its display size.
Optionally, when the display size exceeds a size threshold, the query keyword is displayed on the data tag.
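The expansion step above (random display size, particle-like diffusion across the interface, keyword text only on tags above a size threshold) can be sketched as a simple layout routine. The dimensions, size range, and threshold value are all hypothetical choices for illustration.

```python
import random

SIZE_THRESHOLD = 40  # hypothetical threshold above which the keyword text is shown


def lay_out_tags(keywords, width=320, height=480, seed=0):
    """Assign each data tag a random display size and a random position,
    diffusing the tags across the current display interface like particles.

    The random generator is seeded only so this sketch is reproducible.
    """
    rng = random.Random(seed)
    layout = []
    for kw in keywords:
        size = rng.randint(20, 80)  # randomly configured display size
        layout.append({
            "keyword": kw,
            "size": size,
            # Keep the tag fully inside the interface bounds.
            "pos": (rng.randint(0, width - size), rng.randint(0, height - size)),
            # Only tags whose size exceeds the threshold carry visible text.
            "show_text": size > SIZE_THRESHOLD,
        })
    return layout


layout = lay_out_tags(["shoes", "boots", "sandals"])
```

A real interface would of course also avoid overlaps and animate the diffusion; the sketch only captures the randomized size/position and the threshold rule.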
Optionally, the method further includes: adjusting the display of each data tag in the current display interface according to user input.
Optionally, the executing a response operation when a trigger on the data tag is received includes: when the trigger on the data tag is received, obtaining the query keyword corresponding to the data tag; and executing a search according to the query keyword and displaying the corresponding search results.
Optionally, the method further includes: performing a search on the semantic keyword to determine a search result; and displaying the search result while displaying the data tag.
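The response operation on a triggered tag (look up its query keyword, execute a search, return results for display) can be sketched as below. The in-memory catalogue stands in for the mass-data backend, which the embodiment does not specify; all names are hypothetical.

```python
# Hypothetical in-memory catalogue standing in for the mass-data backend.
CATALOGUE = [
    "red running shoes",
    "leather boots",
    "summer sandals",
    "running socks",
]


def search(keyword: str) -> list:
    """Execute a search for the query keyword (simple substring match here)."""
    return [item for item in CATALOGUE if keyword in item]


def on_tag_triggered(tag: dict) -> list:
    """Response operation: fetch the tag's query keyword and execute the search."""
    keyword = tag["keyword"]
    return search(keyword)


results = on_tag_triggered({"keyword": "running"})
```

Substring matching is used only to keep the sketch self-contained; a production system would query an index or search service.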
Optionally, the method further includes: when it is determined that the search result is being viewed, gathering the data tags to one side of the current display interface for display.
Optionally, the method further includes: updating the data tags according to newly added audio data.
The embodiments of the present application provide the user with a better query experience on the basis of mass data, and can be applied to various query scenarios, such as shopping queries, news queries, and application queries. Taking a shopping query as an example, an application or control with an intelligent shopping-guide function may be provided to the user. Based on mass data, the user can interact with the application or control in the display interface through text, audio, pictures, and the like, for example, indicating a query intention directly through natural language. Based on this interaction, the application or control can recognize the user's semantics, that is, obtain the user's query intention, and thereby generate corresponding data tags and display them in the display interface.
In this embodiment, target elements arranged in a certain shape may be displayed in the display interface of the application or control, so that after query keywords are generated, data tags are generated using the query keywords and the target elements, and the target elements are replaced with the data tags for display in the display interface. When displayed, the data tags may expand across the display interface like particles. The user can view more query keywords by adjusting the data tags in various ways, such as sliding or shaking.
In this embodiment, when the user wants to express the search intention more accurately, the user may input audio data again, so that query keywords continue to be matched and the data tags to be displayed are determined based on the newly added audio data. Accordingly, after the query keywords are updated, the data tags can be updated, and the data tags displayed in the display interface are adjusted correspondingly.
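The tag-update behaviour described here can be sketched as a simple merge: keywords matched from newly added audio data are folded into the displayed set, so the interface can be adjusted to match. The merge policy (keep order, drop duplicates) is an assumption for illustration.

```python
def update_tags(existing, new_keywords):
    """Merge query keywords matched from newly added audio data into the
    currently displayed tag set, preserving order and dropping duplicates.

    This keep-and-extend policy is a hypothetical choice; an implementation
    might instead replace the old tags entirely.
    """
    merged = list(existing)
    for kw in new_keywords:
        if kw not in merged:
            merged.append(kw)
    return merged


# First utterance produced two tags; a follow-up utterance refines the intent.
updated = update_tags(["shoes", "boots"], ["boots", "sandals"])
```

After the merge, the display layer would re-run the tag layout so the interface reflects the updated keyword set.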
After the search results are displayed in the interface, the user may view them in various ways, such as sliding. At this time, to reduce interference with the search results, the data tags may be gathered and displayed on one side of the display interface, for example, floated at the top. When a gathered data tag is triggered again, the data tags can be expanded for display.
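The gather/expand behaviour amounts to a small state machine on the tag panel, sketched below. The class and state names are hypothetical; only the two-state toggle corresponds to the described behaviour.

```python
class TagPanel:
    """Tracks whether the data tags are expanded across the interface or
    gathered to one side (e.g. floated at the top) while results are viewed."""

    def __init__(self):
        self.state = "expanded"  # tags initially diffused over the interface

    def on_results_viewed(self):
        """User starts viewing search results: collapse tags to one side."""
        self.state = "gathered"

    def on_gathered_tags_triggered(self):
        """User triggers the gathered tags: unfold them for display again."""
        if self.state == "gathered":
            self.state = "expanded"


panel = TagPanel()
panel.on_results_viewed()
gathered_state = panel.state
panel.on_gathered_tags_triggered()
expanded_state = panel.state
```

A real interface would attach animations and hit-testing to these transitions; the sketch captures only the state logic.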
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one of skill in the art, embodiments of the present application may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
In a typical configuration, the computer device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium. Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the true scope of the embodiments of the application.
Finally, it should also be noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or terminal that comprises the element.
The data processing method, data processing device, and intelligent terminal provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, for those skilled in the art, there may be variations in the specific implementation and application scope according to the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.