CN105653154B - Method and equipment for setting label for resource in terminal

Method and equipment for setting label for resource in terminal

Info

Publication number
CN105653154B
CN105653154B
Authority
CN
China
Prior art keywords
resource
information
application
tag
setting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510992122.8A
Other languages
Chinese (zh)
Other versions
CN105653154A
Inventor
郭津
冯伟明
钟奇财
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Guangzhou Mobile R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Guangzhou Mobile R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Guangzhou Mobile R&D Center
Priority to CN201510992122.8A
Publication of CN105653154A
Application granted
Publication of CN105653154B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04817Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons

Abstract

A method and device for setting a label for a resource in a terminal are provided. The method includes: acquiring application association information related to at least one resource; and setting a label for the at least one resource based on the acquired application association information. With the method and device, labels can be set for resources more accurately and the user's manual input is reduced, thereby improving user experience.

Description

Method and equipment for setting label for resource in terminal
Technical Field
The present disclosure relates generally to resource processing, and more particularly, to a method and apparatus for setting a tag for a resource in a terminal.
Background
With the development of terminals in terms of processing performance, storage performance, and the like, various resources need to be processed at terminals; for example, resources often need to be transmitted from a terminal, or received resources need to be processed. In particular, when a terminal is combined with cloud storage technology, the number of resources to be processed increases greatly.
Various resources can be found through their tags, but the ways in which tags are generated are very limited. Specifically, only manual entry of tags, or setting tags for a resource based on information of the resource itself (e.g., its content or attributes), is currently supported. Manually inputting tags consumes the user's time and effort; in particular, when a large number of resources need to be tagged, inputting tags one by one substantially degrades the user experience. In addition, tags set for a resource based only on the resource's own information are often inaccurate and limited, can hardly meet user requirements, and the user often still needs to input tags manually.
Disclosure of Invention
Various exemplary embodiments of the present disclosure provide a method and device for setting a tag for a resource in a terminal, so as to address the problem that a user must manually input tags.
According to an aspect of exemplary embodiments of the present disclosure, there is provided a method of setting a tag for a resource in a terminal. The method comprises the following steps: (A) acquiring application association information related to at least one resource; and (B) setting a tag for the at least one resource based on the acquired application association information.
Optionally, in the method, step (a) may further include: acquiring resource content information of the at least one resource; and, in the step (B), a tag may be set for the at least one resource based on the acquired application association information and resource content information.
Optionally, in the method, in step (B), a combination of the obtained application association information and resource content information may be set as a tag of the at least one resource.
Optionally, in the method, step (B) may include: (b1) screening the obtained application association information based on the obtained resource content information, and/or screening the obtained resource content information based on the obtained application association information; and (b2) setting a tag for the at least one resource based on the screening result.
Optionally, in the method, in step (b1), the obtained application association information and/or resource content information may be filtered according to the relevance to the at least one resource.
Optionally, in the method, in step (b2), the screened application association information may be set as a tag of the at least one resource; or the screened resource content information may be set as a tag of the at least one resource; or a combination of the resource content information and the screened application association information may be set as a tag of the at least one resource; or a combination of the application association information and the screened resource content information may be set as a tag of the at least one resource; or a combination of the screened resource content information and the screened application association information may be set as a tag of the at least one resource.
Optionally, in the method, the application association information may include a plurality of pieces of text information, and in step (B), a tag may be set for the at least one resource based on at least one piece of text information having high relevance to the at least one resource among the plurality of pieces of text information.
Optionally, in the method, in step (B), a tag may be set for the at least one resource based on frequently used text information among the plurality of pieces of text information.
Optionally, in the method, in step (B), a candidate tag of the at least one resource may be generated based on the obtained application association information, and the candidate tag selected by the user may be set as the tag of the at least one resource.
Optionally, in the method, the application association information may include at least one of: context information of the at least one resource when present in the application, location information and/or time information of the at least one resource when transmitted in the application, sender information and/or recipient information of the at least one resource when transmitted in the application, name information of the application associated with the at least one resource, and/or title information of the hyperlink.
Optionally, in the method, the at least one resource may be at least one of: a resource that has been or will be collected or stored in the terminal, and a resource that has been or will be collected or stored in the cloud through the terminal.
Optionally, in the method, the resource content information may include at least one of: content data of the at least one resource, metadata of the at least one resource, attributes of the at least one resource, multimedia content data related to the at least one resource.
Optionally, in the method, the at least one resource may include at least one of: pictures, screenshots, video, audio, documents, hyperlinks.
According to another aspect of exemplary embodiments of the present disclosure, there is provided a method of setting a tag for a resource in a terminal. The method comprises the following steps: (A) acquiring application association information related to at least one resource; (B) generating a candidate tag for the at least one resource based on the obtained application association information.
Optionally, in the method, step (a) may further include: acquiring resource content information of the at least one resource; and, in step (B), a candidate tag for the at least one resource may be generated based on the acquired application association information and resource content information.
Optionally, the method further comprises: (C) the candidate tag for the at least one resource may be set to the tag for the at least one resource.
Optionally, the method further comprises: (D) the candidate tag selected by the user may be set as the tag of the at least one resource.
According to another aspect of exemplary embodiments of the present disclosure, there is provided an apparatus for setting a tag for a resource in a terminal. The apparatus comprises: an information acquisition unit configured to acquire application related information related to at least one resource; a tag setting unit configured to set a tag for the at least one resource based on the acquired application association information.
Optionally, in the apparatus, the information obtaining unit may be further configured to: acquiring resource content information of the at least one resource; and, the tag setting unit is configured to: and setting a label for the at least one resource based on the acquired application association information and resource content information.
Alternatively, in the apparatus, the tag setting unit may be configured to: and setting the combination of the acquired application association information and the resource content information as a label of the at least one resource.
Alternatively, in the apparatus, the label setting unit may include: an information filtering unit configured to: screening the obtained application associated information based on the obtained resource content information, and/or screening the obtained resource content information based on the obtained application associated information; a resource tag setting unit configured to: and setting a label for the at least one resource based on the screening result.
Optionally, in the apparatus, the information filtering unit may be further configured to: and screening the acquired application association information and/or resource content information according to the correlation with the at least one resource.
Optionally, in the device, the resource tag setting unit may be configured to: setting the screened application association information as a label of the at least one resource; or, setting the screened resource content information as a label of the at least one resource; or, the combination of the resource content information and the screened application association information is set as a label of the at least one resource; or, the combination of the application association information and the screened resource content information is set as a label of the at least one resource; or, the combination of the screened resource content information and the screened application association information is set as the label of the at least one resource.
Optionally, in the apparatus, the application association information may include a plurality of text information, and the tag setting unit may be configured to: and setting a label for the at least one resource based on at least one text message with high relevance to the at least one resource in the plurality of text messages.
Alternatively, in the apparatus, the tag setting unit may be configured to: setting a label for the at least one resource based on high frequency usage text information among the plurality of text information.
Alternatively, in the apparatus, the label setting unit may include: a candidate tag generation unit configured to: generating a candidate tag of the at least one resource based on the obtained application association information; a tag determination unit configured to: setting the candidate label selected by the user as the label of the at least one resource.
Optionally, in the device, the application association information may include at least one of: context information of the at least one resource when present in the application, location information and/or time information of the at least one resource when transmitted in the application, sender information and/or recipient information of the at least one resource when transmitted in the application, name information of the application associated with the at least one resource, and/or title information of the hyperlink.
Optionally, in the device, the at least one resource may be at least one of: a resource that has been or will be collected or stored in the terminal, and a resource that has been or will be collected or stored in the cloud through the terminal.
Optionally, in the device, the resource content information may include at least one of: content data of the at least one resource, metadata of the at least one resource, attributes of the at least one resource, multimedia content data related to the at least one resource.
Optionally, in the device, the at least one resource may include at least one of: pictures, screenshots, video, audio, documents, hyperlinks.
According to another aspect of exemplary embodiments of the present disclosure, there is provided an apparatus for setting a tag for a resource in a terminal. The apparatus comprises: an information acquisition unit configured to: acquiring application association information related to at least one resource; a candidate tag generation unit configured to: generating a candidate tag for the at least one resource based on the obtained application association information.
Optionally, in the apparatus, the information obtaining unit may be further configured to: acquiring resource content information of the at least one resource; and, the candidate tag generating unit may be configured to: and generating a candidate label of the at least one resource based on the acquired application association information and the resource content information.
Optionally, the apparatus may further comprise: a label setting unit configured to: setting the candidate label of the at least one resource as the label of the at least one resource.
Optionally, the apparatus may further comprise: a tag determination unit configured to: setting the candidate label selected by the user as the label of the at least one resource.
According to the method and device for setting a label for a resource in a terminal described above, labels can be set for resources more accurately and the user's manual input is reduced, thereby improving user experience.
Drawings
The foregoing and other aspects, features and advantages of certain exemplary embodiments of the present disclosure will become more apparent from the following description in conjunction with the accompanying drawings, in which:
fig. 1 is a block diagram illustrating an apparatus for setting a tag for a resource in a terminal according to an exemplary embodiment of the present disclosure;
fig. 2 is a block diagram illustrating a tag setting unit in an apparatus for setting a tag for a resource in a terminal according to an exemplary embodiment of the present disclosure;
fig. 3 is a block diagram illustrating a tag setting unit in an apparatus for setting a tag for a resource in a terminal according to another exemplary embodiment of the present disclosure;
fig. 4 is a flowchart illustrating a method of setting a tag for a resource in a terminal according to an exemplary embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a method of setting a tag for a resource in a terminal according to another exemplary embodiment of the present disclosure;
fig. 6 is a flowchart illustrating a method of setting a tag for a resource in a terminal according to another exemplary embodiment of the present disclosure;
fig. 7 is a flowchart illustrating a method of setting a tag for a resource in a terminal according to another exemplary embodiment of the present disclosure;
fig. 8 is a flowchart illustrating a method of setting a tag for a resource in a terminal based on context information when the resource appears in an application according to an exemplary embodiment of the present disclosure;
fig. 9A to 9C are diagrams illustrating an example of setting a tag for a resource in a terminal based on context information when the resource appears in an application according to an exemplary embodiment of the present disclosure;
fig. 10 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on time information when the picture is transmitted in an application according to an exemplary embodiment of the present disclosure;
fig. 11A to 11C are diagrams illustrating an example of setting a tag for a picture in a terminal based on time information when the picture is transmitted in an application according to an exemplary embodiment of the present disclosure;
fig. 12 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on sender information and receiver information when the picture is transmitted in an application according to an exemplary embodiment of the present disclosure;
fig. 13A to 13C are diagrams illustrating an example of setting a tag for a picture in a terminal based on sender information and receiver information when the picture is transmitted in an application according to an exemplary embodiment of the present disclosure;
fig. 14 is a flowchart illustrating a method of setting a tag for a screen shot in a terminal based on name information of an application associated with a resource according to an exemplary embodiment of the present disclosure;
fig. 15 is a diagram illustrating an example of setting a tag for a screen shot in a terminal based on name information of an application associated with a resource according to an exemplary embodiment of the present disclosure;
FIG. 16 is a flowchart illustrating a method of setting a tag for a screen shot in a terminal based on title information of a hyperlink associated with a resource according to an exemplary embodiment of the present disclosure;
fig. 17A to 17B are diagrams illustrating an example of setting a tag for a screen shot in a terminal based on title information of a hyperlink associated with a resource according to an exemplary embodiment of the present disclosure;
fig. 18 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on context information of the picture when the picture appears in an application and content data of the picture according to an exemplary embodiment of the present disclosure;
fig. 19A to 19D are diagrams illustrating an example of setting a tag for a picture in a terminal based on context information of the picture when the picture appears in an application and content data of the picture according to an exemplary embodiment of the present disclosure;
fig. 20 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on context information of the picture when the picture appears in an application and content data of the picture according to an exemplary embodiment of the present disclosure;
fig. 21A to 21D are diagrams illustrating an example of setting a tag for a picture in a terminal based on context information of the picture when the picture appears in an application and content data of the picture according to an exemplary embodiment of the present disclosure;
fig. 22 is a flowchart illustrating a method of setting a tag for a sound recording in a terminal based on context information of the sound recording when it appears in an application and content data of the sound recording according to an exemplary embodiment of the present disclosure;
fig. 23 is a diagram illustrating an example of setting a tag for a sound recording in a terminal based on context information of the sound recording when it appears in an application and content data of the sound recording according to an exemplary embodiment of the present disclosure;
fig. 24 is a flowchart illustrating a method of setting a tag for a video in a terminal based on context information of when the video appears in an application and subtitle file information of the video according to an exemplary embodiment of the present disclosure;
fig. 25 is a diagram illustrating an example of setting a tag for a video in a terminal based on context information of when the video appears in an application and subtitle file information of the video according to an exemplary embodiment of the present disclosure;
fig. 26 is a flowchart illustrating a method of setting a tag for a phonographic picture in a terminal based on context information of the phonographic picture when it appears in an application and subtitle file information of the phonographic picture according to an exemplary embodiment of the present disclosure;
fig. 27A to 27B are diagrams illustrating an example of setting a tag for a phonographic picture in a terminal based on context information of the phonographic picture when it appears in an application and subtitle file information of the phonographic picture according to an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. However, the present disclosure is not limited or restricted by these exemplary embodiments. Throughout the specification and drawings, the same reference numerals indicate elements that perform the same or similar functions. In the following description, detailed descriptions of known related functions and configurations may be omitted so as not to unnecessarily obscure the subject matter of the present disclosure. Also, the terms used herein are defined in view of the functions of the present disclosure and may vary according to the intention or practice of a user or operator.
It should be understood by those skilled in the art that new embodiments may be created in conjunction with the exemplary embodiments described in the specification. The principles and features of this disclosure may be employed in numerous different embodiments without departing from the scope of the disclosure.
Fig. 1 is a block diagram illustrating an apparatus for setting a tag for a resource in a terminal according to an exemplary embodiment of the present disclosure. Here, the terminal may indicate an electronic device capable of processing various resources, and may include a smart phone, a tablet computer, a laptop computer, a desktop computer, a multimedia player, and the like, as examples.
According to an exemplary embodiment of the present disclosure, the resource may indicate various contents capable of being processed in the terminal, and may include, by way of example, a picture, a screen shot, a video, an audio, a document, a hyperlink, and the like.
These resources may have been or will be collected or stored in the terminal, or they may have been or will be collected or stored in the cloud by the terminal.
Here, "storing" refers to the resource itself, while "collecting" refers to the address from which the resource can be acquired. It should be noted that these operations are not limited to being performed according to a user instruction; they may also be performed automatically while the terminal is running. For example, a resource or its address link may be acquired automatically during the operation of various applications (chat applications, e-mail applications, social applications, etc.) and saved in the corresponding application's file management directory.
As shown in fig. 1, an apparatus for setting a tag for a resource according to an exemplary embodiment of the present disclosure includes an information acquisition unit 100 and a tag setting unit 200.
As an example, these units may be implemented by a general-purpose hardware processor such as a digital signal processor, a field programmable gate array, or the like, or by a special-purpose hardware processor such as a dedicated chip, or entirely by a computer program in a software manner, for example, as respective modules installed in the relevant applications in the terminal.
Specifically, the information obtaining unit 100 is configured to obtain application association information related to at least one resource. Here, the application association information may indicate information about an application related to the resource, reflecting the association between the resource and the application that generates or processes it.
As an example, the application association information may include: context information of the at least one resource when present in the application, location information and/or time information of the at least one resource when transmitted in the application, sender information and/or recipient information of the at least one resource when transmitted in the application, name information of the application associated with the at least one resource, and/or title information of the hyperlink.
Here, regarding the context information of the at least one resource when it appears in an application, the appearance of a resource in an application means that the resource, or identification information of the resource (e.g., its name or a thumbnail), is presented on a user interface of the application so as to be seen by the user. The context information is not limited to what is presented on the user interface and may include, as an example, all of the context information loaded in the application. Accordingly, when a resource appears in an application, such context information may already be displayed on the current user interface, or, due to the limited screen size, it may require scrolling to be displayed. As an example, the context information may include: text information located within a preset range near the resource, a predetermined number of text messages (or voice messages convertible into text) sent or received before and after the resource, and messages that have been input in the application but not yet sent. That is, the context information may indicate text information in the application that is associated with the resource in time, space, or attributes. For example, in a chat application, the context information may be the text or voice messages sent or received around the point in time when the resource appears. For another example, in a review application (e.g., Douban Movie), the context information may be a synopsis, comments, or ratings located near the resource. As another example, in an application having a search bar, when the resource is a search result, the context information may be the search keyword entered in the search bar.
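Purely for illustration, the following minimal Python sketch shows one way such context information could be gathered from a chat transcript; the Message structure, the field names, and the fixed window of neighbouring messages are assumptions, not part of the disclosed embodiments.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Message:
    """One entry in a chat transcript (hypothetical structure)."""
    timestamp: float               # seconds since epoch
    text: str                      # message body ("" for non-text entries)
    resource_id: Optional[str]     # id of an attached picture/video, if any

def context_for_resource(messages: List[Message], resource_id: str,
                         window: int = 3) -> List[str]:
    """Collect text sent or received shortly before and after the resource.

    Returns the text of up to `window` messages on each side of the entry
    carrying the resource, approximating the context information above.
    """
    for i, msg in enumerate(messages):
        if msg.resource_id == resource_id:
            neighbours = messages[max(0, i - window): i + window + 1]
            return [m.text for m in neighbours if m.text]
    return []
```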
As for the location information and/or time information of the at least one resource when it is transmitted in an application, the location information may be obtained, for example, by GPS, AGPS, gpsOne, base-station positioning, wireless network positioning, or the like; alternatively, location information sent or shared in the same application within a period of time before and after the at least one resource is transmitted (for example, within 30 minutes before or after the point in time of transmission) may be used as the location information of the resource when transmitted in the application. Further, the time information of the at least one resource when transmitted in the application may indicate the point in time at which the at least one resource is transmitted (i.e., the point in time at which it is sent or received), which may be obtained in any suitable manner.
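As a sketch of the 30-minute rule mentioned above only (the timestamps, tuple format, and window size are illustrative assumptions):

```python
from typing import List, Optional, Tuple

def location_for_transmission(shared_locations: List[Tuple[float, str]],
                              send_time: float,
                              window_s: int = 1800) -> Optional[str]:
    """Pick the location shared in the same application closest to the
    transmission time, restricted to +/- 30 minutes (window_s seconds).

    shared_locations: (timestamp, location description) pairs.
    Returns None when nothing falls inside the window.
    """
    candidates = [(abs(ts - send_time), loc)
                  for ts, loc in shared_locations
                  if abs(ts - send_time) <= window_s]
    return min(candidates)[1] if candidates else None
```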
Regarding sender information and/or receiver information when the at least one resource is transmitted in the application, the sender information and the receiver information may include, as an example: sender name and recipient name (e.g., username, nickname, remark name, etc.), sender contact and recipient contact (e.g., telephone number, email address, etc.). As an example, the sender information and the receiver information may be acquired by other applications (e.g., a directory application, a cloud backup application) than an application of the transmission resource.
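A minimal sketch of resolving sender and receiver identifiers through another application's data, assuming a simple id-to-name mapping exported by a contacts or cloud backup application:

```python
from typing import Dict, Tuple

def resolve_party_names(sender_id: str, recipient_id: str,
                        contacts: Dict[str, str]) -> Tuple[str, str]:
    """Map raw identifiers (phone number, e-mail address, account id) to the
    names kept by another application such as a contacts directory.

    `contacts` is a hypothetical id -> remark-name mapping; unknown parties
    fall back to the raw identifier.
    """
    return (contacts.get(sender_id, sender_id),
            contacts.get(recipient_id, recipient_id))
```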
Regarding the name information of an application associated with the at least one resource, such an application may include, by way of example: an application used to acquire the resource (e.g., the application being used when a screen image serving as the resource is captured) and an application used to process the resource (e.g., the application used when the resource is sent, received, saved, collected, or edited). The name of such an application may serve as application association information of the at least one resource. In addition, a specific hyperlink may be opened while an application is running in order to access the corresponding page and obtain the at least one resource from the accessed page; in this case, the title information of the hyperlink may be used as application association information of the at least one resource.
The tag setting unit 200 is configured to set a tag for the at least one resource based on the obtained application association information.
As an example, the tag setting unit 200 may select the basis for setting the tag of the at least one resource directly from the acquired application association information. Specifically, in a case where the application association information includes a plurality of pieces of text information, the tag setting unit 200 may set a tag for the at least one resource based on at least one piece of text information that has high relevance to the at least one resource. According to an exemplary embodiment of the present disclosure, the text information may include at least one of: words, compound words, proper nouns, fixed collocations, idioms, and colloquialisms. As an example, the at least one piece of highly relevant text information may directly serve as a tag of the at least one resource. Here, the relevance between a piece of text information and the resource may be determined in various ways; for example, the tag setting unit 200 may set a tag for the at least one resource based on frequently used text information among the plurality of pieces of text information.
As an example, the tag setting unit 200 may also set the tag of the at least one resource by combining at least part or all of the acquired application association information with other resource-related information.
Specifically, the information obtaining unit 100 may be configured to obtain resource content information of the at least one resource in addition to the application related information, and the tag setting unit 200 may be configured to set a tag for the at least one resource based on the obtained application related information and resource content information.
Here, the resource content information may indicate information related to the content of the resource, reflecting the characteristics of the resource itself.
As an example, the resource content information may include: content data of the at least one resource, metadata of the at least one resource, attributes of the at least one resource, multimedia content data related to the at least one resource.
For example, the content data of the resource may include characters in a picture, screenshot, or video recognized using character recognition technology (e.g., optical character recognition (OCR)). Further, the multimedia content data related to the resource may include: text information converted from audio related to a picture or screen shot (e.g., a recording saved by a phonographic function), text information converted from a subtitle file of a video, the title of a subtitle file of a video, and the like. Accordingly, as an example, the resource content information may include a plurality of pieces of text information.
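For illustration, the character-recognition step could look roughly like the following sketch, assuming the open-source Tesseract engine and its pytesseract Python binding are available on the device; the embodiments themselves do not prescribe any particular OCR engine.

```python
from typing import List

from PIL import Image
import pytesseract  # requires a local Tesseract OCR installation

def text_from_picture(path: str, lang: str = "eng") -> List[str]:
    """Recognize characters in a picture or screenshot and return the
    non-empty lines as candidate resource content information."""
    raw = pytesseract.image_to_string(Image.open(path), lang=lang)
    return [line.strip() for line in raw.splitlines() if line.strip()]
```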
As an example, the tag setting unit 200 may be configured to set a combination of the acquired application association information and the resource content information as a tag of the at least one resource. Here, the tag setting unit 200 may acquire the application-related information and the resource content information that can effectively represent the resource according to a predetermined rule, and set a combination thereof as the tag of the resource.
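A minimal sketch of combining the two kinds of information into a tag list; the deduplication and the fixed tag budget stand in for the "predetermined rule" and are assumptions for illustration.

```python
from typing import List

def combine_as_tags(app_info: List[str], content_info: List[str],
                    max_tags: int = 5) -> List[str]:
    """Join the two information sources into one tag list, dropping
    duplicates and truncating to a fixed budget. The "predetermined rule"
    here is simply order of arrival, which is an assumption."""
    seen, tags = set(), []
    for term in app_info + content_info:
        if term not in seen:
            seen.add(term)
            tags.append(term)
    return tags[:max_tags]
```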
Further, the tag setting unit 200 may perform filtering between the acquired application-related information and the resource content information, and acquire a tag of the resource according to a result of the filtering. Hereinafter, a block diagram of the tag setting unit 200 according to an exemplary embodiment of the present disclosure will be described with reference to fig. 2.
As shown in fig. 2, the tag setting unit 200 may include an information filtering unit 210 and a resource tag setting unit 220.
Specifically, the information filtering unit 210 is configured to filter the obtained application association information based on the obtained resource content information, and/or filter the obtained resource content information based on the obtained application association information. As an example, the information filtering unit 210 may filter the obtained application association information and/or resource content information according to their relevance to the at least one resource, so as to retain the application association information and/or resource content information with higher relevance to the at least one resource. In addition, the information filtering unit 210 may also take the recognition degree of the screening result into account to obtain a tag that better represents the at least one resource.
The resource tag setting unit 220 is configured to set a tag for the at least one resource based on the screening result. Here, the resource tag setting unit 220 may use only the screened information as the tag of the at least one resource, and may also combine the screened information with other information as a screening basis as the tag of the at least one resource.
Specifically, the resource tag setting unit 220 may set the screened application related information as a tag of the at least one resource, or may set the screened resource content information as a tag of the at least one resource, or may set a combination of the resource content information and the screened application related information as a tag of the at least one resource, or may set a combination of the application related information and the screened resource content information as a tag of the at least one resource, or may set a combination of the screened resource content information and the screened application related information as a tag of the at least one resource.
Further, the tag setting unit 200 may first generate candidate tags for selection by the user, and determine a tag of the resource from the candidate tags according to an instruction of the user. Hereinafter, a block diagram of the tag setting unit 200 according to an exemplary embodiment of the present disclosure will be described with reference to fig. 3.
As shown in fig. 3, the tag setting unit 200 may include a candidate tag generation unit 230 and a tag determination unit 240.
Specifically, the candidate tag generating unit 230 is configured to generate a candidate tag of the at least one resource based on the obtained application association information. Here, the candidate tag generation unit 230 may generate one or more tags of the at least one resource as candidate tags in any manner described above.
The tag determining unit 240 is configured to set the candidate tag selected by the user as the tag of the at least one resource. Here, the tag determination unit 240 may provide the generated candidate tag to the user, receive a selection of the candidate tag by the user, and set the candidate tag selected by the user as the tag of the at least one resource.
With the device for setting a label for a resource in a terminal according to the exemplary embodiments of the present disclosure, labels can be set for resources more accurately and the user's manual input is reduced, thereby improving user experience.
Fig. 4 is a flowchart illustrating a method of setting a tag for a resource in a terminal according to an exemplary embodiment of the present disclosure. As an example, the method may be implemented by the apparatus as shown in fig. 1 to 3, or may be implemented in software entirely by a computer program.
As shown in fig. 4, in step S100, application association information related to at least one resource is acquired. Here, the application association information may indicate information about an application related to the resource, reflecting the association between the resource and the application that generates or processes it.
As an example, the application association information may include: context information of the at least one resource when present in the application, location information and/or time information of the at least one resource when transmitted in the application, sender information and/or recipient information of the at least one resource when transmitted in the application, name information of the application associated with the at least one resource, and/or title information of the hyperlink.
Here, regarding the context information of the at least one resource when it appears in an application, the appearance of a resource in an application means that the resource, or identification information of the resource (e.g., its name or a thumbnail), is presented on a user interface of the application so as to be seen by the user. The context information is not limited to what is presented on the user interface and may include, as an example, all of the context information loaded in the application. Accordingly, when a resource appears in an application, such context information may already be displayed on the current user interface, or, due to the limited screen size, it may require scrolling to be displayed. As an example, the context information may include: text information located within a preset range near the resource, a predetermined number of text messages (or voice messages convertible into text) sent or received before and after the resource, and messages that have been input in the application but not yet sent. That is, the context information may indicate text information in the application that is associated with the resource in time, space, or attributes. For example, in a chat application, the context information may be the text or voice messages sent or received around the point in time when the resource appears. For another example, in a review application (e.g., Douban Movie), the context information may be a synopsis, comments, or ratings located near the resource. As another example, in an application having a search bar, when the resource is a search result, the context information may be the search keyword entered in the search bar.
As for the location information and/or time information of the at least one resource when it is transmitted in an application, the location information may be obtained, for example, by GPS, AGPS, gpsOne, base-station positioning, wireless network positioning, or the like; alternatively, location information sent or shared in the same application within a period of time before and after the at least one resource is transmitted (for example, within 30 minutes before or after the point in time of transmission) may be used as the location information of the resource when transmitted in the application. Further, the time information of the at least one resource when transmitted in the application may indicate the point in time at which the at least one resource is transmitted, which may be obtained in any suitable manner.
Regarding sender information and/or receiver information when the at least one resource is transmitted in the application, the sender information and the receiver information may include, as an example: sender name and recipient name (e.g., username, nickname, remark name, etc.), sender contact and recipient contact (e.g., telephone number, email address, etc.). As an example, the sender information and the receiver information may be acquired by other applications (e.g., a directory application, a cloud backup application) than an application of the transmission resource.
Regarding the name information of an application associated with the at least one resource, such an application may include, by way of example: an application used to acquire the resource (e.g., the application being used when a screen image serving as the resource is captured) and an application used to process the resource (e.g., the application used when the resource is sent, received, saved, collected, or edited). The name of such an application may serve as application association information of the at least one resource. In addition, a specific hyperlink may be opened while an application is running in order to access the corresponding page and obtain the at least one resource from the accessed page; in this case, the title information of the hyperlink may be used as application association information of the at least one resource.
In step S200, a label is set for the at least one resource based on the obtained application association information.
As an example, in step S200, the basis for setting the tag of the at least one resource may be selected directly from the acquired application association information. Specifically, in a case where the application association information includes a plurality of pieces of text information, a tag may be set for the at least one resource based on at least one piece of text information that has high relevance to the at least one resource. According to an exemplary embodiment of the present disclosure, the text information may include at least one of: words, compound words, proper nouns, fixed collocations, idioms, and colloquialisms. As an example, the at least one piece of highly relevant text information may directly serve as a tag of the at least one resource. Here, the relevance between a piece of text information and the resource may be determined in various ways; for example, in step S200, a tag may be set for the at least one resource based on frequently used text information among the plurality of pieces of text information. As an example, in step S200, the application association information may be searched word by word to count how many times each piece of text information appears, the pieces of text information may be sorted from most to least frequent, and a preset number of the top-ranked pieces may be set as tags of the resource. For example, when the application association information includes the text information "movie", "meeting", and "nice", appearing 5 times, 1 time, and 4 times respectively, the order by frequency is "movie", "nice", "meeting"; when the preset number is 2, the text information "movie" and "nice" are set as tags of the resource, and when the preset number is 1, only the text information "movie" is set as the tag of the resource.
Furthermore, conventional text information (e.g., "we," "they," "your," "today," "yesterday," "tomorrow," and interjections such as "oh" and "ah") is generally not suitable to be set as a resource tag because it carries no specific meaning. For this reason, optionally, in step S200, such conventional text information may be filtered out of the plurality of pieces of text information, and a tag may be set for the at least one resource based on at least one piece of the remaining text information that has high relevance to the at least one resource.
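The word-frequency selection and the filtering of conventional text information described above might be sketched as follows; the stop list and the assumption that each occurrence appears as a separate list item are illustrative only.

```python
from collections import Counter
from typing import List

# Hypothetical stop list standing in for the "conventional text information"
# mentioned above (pronouns, relative dates, interjections, ...).
STOP_WORDS = {"we", "they", "your", "today", "yesterday", "tomorrow", "oh", "ah"}

def tags_by_frequency(text_items: List[str], top_n: int = 2) -> List[str]:
    """Count how often each piece of text information occurs, drop the
    conventional words, and keep the most frequent pieces as tags."""
    counts = Counter(t for t in text_items if t not in STOP_WORDS)
    return [term for term, _ in counts.most_common(top_n)]

# With the example above, ["movie"] * 5 + ["meeting"] + ["nice"] * 4 yields
# ["movie", "nice"] for top_n=2 and ["movie"] for top_n=1.
```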
As an example, in step S200, at least part or all of the obtained application association information may further be combined with other resource-related information to set a tag for the at least one resource. Hereinafter, a flowchart of a method of setting a tag for a resource in a terminal according to an exemplary embodiment of the present disclosure will be described with reference to fig. 5.
Specifically, the method is different from the method shown in fig. 4 in that, in step S100, resource content information of the at least one resource may be acquired in addition to the application association information, and in step S200, a tag may be set for the at least one resource based on the acquired application association information and resource content information (instead of only the application association information).
As an example, the resource content information may include: content data of the at least one resource, metadata of the at least one resource, attributes of the at least one resource, multimedia content data related to the at least one resource.
For example, the content data of the resource may include characters in a picture, screenshot, or video recognized using character recognition technology (e.g., optical character recognition (OCR)). Further, the multimedia content data related to the resource may include: text information converted from audio related to a picture or screen shot (e.g., a recording saved by a phonographic function), text information converted from a subtitle file of a video, the title of a subtitle file of a video, and the like. Accordingly, as an example, the resource content information may include a plurality of pieces of text information.
As an example, in step S200, a combination of the acquired application association information and resource content information may be set as a tag of the at least one resource. Here, in step S200, the application-related information and the resource content information that can effectively represent the resource may be acquired according to a predetermined rule, and a combination thereof may be set as a tag of the resource.
In a case where the application association information includes a plurality of pieces of text information, as an example, in step S200, a combination of at least one piece of text information in the application association information and the resource content information may be set as a tag of the at least one resource. For example, when the text information "movie" and "nice" are included in the application association information, the text information "movie" may be combined with the resource content information, and/or the text information "nice" may be combined with the resource content information.
Alternatively, step S200 may be executed immediately after step S100, or may be executed at an arbitrary interval after step S100. For example, step S100 may be performed when a resource is collected, and then step S200 may be performed when an instruction to add or edit a tag for the resource is received. For another example, step S100 may be performed when the screenshot action is detected, and then step S200 may be performed when an instruction to transmit the screenshot is received.
Further, step S100 and step S200 may also be performed in different terminals. For example, step S100 may be performed in the first terminal for one or more resources, and thereafter, when the first terminal transmits the one or more resources to the second terminal, the acquired application association information is transmitted to the second terminal as an attribute of the resource, metadata, or as data related to the resource, or the like, so that step S200 can be performed in the second terminal.
In order to enable steps S100 and S200 to be executed at any time interval or in different terminals, for example, in step S100, the obtained application association information may be stored in association with the at least one resource, or may be directly set as an attribute or metadata of the at least one resource.
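One possible way to keep the acquired application association information available for a later step S200 or for another terminal is sketched below; the sidecar JSON file is an assumption, and a real implementation might instead write the information into the resource's own attributes or metadata as described above.

```python
import json
import os
from typing import Dict

def save_association_info(resource_path: str, info: Dict) -> str:
    """Persist the acquired application association information next to the
    resource so that tag setting (step S200) can run later or on another
    terminal. A sidecar JSON file is an assumption; an implementation could
    instead write the data into the resource's own attributes or metadata."""
    sidecar = resource_path + ".assoc.json"
    with open(sidecar, "w", encoding="utf-8") as f:
        json.dump(info, f, ensure_ascii=False, indent=2)
    return sidecar

def load_association_info(resource_path: str) -> Dict:
    """Read the association information back when step S200 is executed."""
    sidecar = resource_path + ".assoc.json"
    if os.path.exists(sidecar):
        with open(sidecar, encoding="utf-8") as f:
            return json.load(f)
    return {}
```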
Further, in step S200, a filtering may be performed between the obtained application-related information and the resource content information, and a tag of the resource may be obtained according to a result of the filtering. Hereinafter, a flowchart of a method of setting a tag for a resource in a terminal according to an exemplary embodiment of the present disclosure will be described with reference to fig. 6.
As shown in fig. 6, in step S100, application association information related to at least one resource is acquired, and resource content information of the at least one resource is acquired. Step S100 shown in fig. 6 can be implemented with reference to the previous description of step S100 shown in fig. 5, and will not be described herein again.
In step S210, the obtained application related information is filtered based on the obtained resource content information, and/or the obtained resource content information is filtered based on the obtained application related information. As an example, in step S210, the obtained application related information and/or resource content information may be filtered according to the relevance to the at least one resource, so as to filter out the application related information and/or resource content information with higher relevance to the at least one resource. In addition, in step S210, the recognition degree of the screening result may be considered to obtain a label that can better represent the at least one resource.
As an example, in step S210, categories of the obtained application related information and/or resource content information may be determined, and filtering may be performed according to the determined categories.
Alternatively, in step S210, a category of the obtained application related information may be determined, and the resource content information may be filtered according to the category, so as to filter out the resource content information not belonging to the category, and/or the category of the obtained resource content information may be determined, and the application related information may be filtered according to the category, so as to filter out the application related information not belonging to the category.
As an example, when there are two or more pieces of application association information with different categories, in step S210 the resource content information may be filtered according to the category containing the largest number of pieces of application association information, or according to the category whose application association information corresponds to the largest number of pieces of resource content information, or a combination of both. For example, when the application association information includes "A", "B", and "C", where "A" and "B" belong to a first category and "C" belongs to a second category, the first category corresponds to 2 pieces of application association information ("A" and "B") and the second category to 1 piece ("C"); in this case, the resource content information is filtered according to the first category (i.e., the category of the application association information "A" and "B"). For another example, when the application association information includes "A" and "B", where "A" belongs to a first category and corresponds to the resource content information "C" and "D", and "B" belongs to a second category and corresponds to the resource content information "E", then "A" corresponds to 2 pieces of resource content information and "B" corresponds to 1 piece; in this case, the resource content information is filtered according to the first category (i.e., the category of the application association information "A"). The case where there are two or more pieces of resource content information with different categories can be handled in a similar manner.
Alternatively, in step S210, the categories of the obtained application association information and resource content information may be determined, and the application association information and resource content information may be filtered according to at least one category ranked highest by the number of corresponding pieces of application association information and resource content information (for example, the category with the largest number of corresponding pieces), so as to filter out the application association information and resource content information that do not belong to the at least one category. For example, when the application association information includes "A" and "B", the resource content information includes "C", and the application association information "A" and the resource content information "C" belong to a first category while the application association information "B" belongs to a second category, the first category corresponds to 2 pieces of information and the second category to 1 piece; in this case, the application association information and resource content information are filtered according to the first category, with the result that the application association information "B" is filtered out.
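A minimal sketch of the majority-category screening described above; how categories are assigned to each piece of information is outside this sketch and is simply assumed as an input mapping.

```python
from collections import Counter
from typing import Dict, List

def filter_by_majority_category(app_info: Dict[str, str],
                                content_info: Dict[str, str]) -> List[str]:
    """Keep only the items whose category matches the most frequent category
    across both information sources. Inputs map each piece of information to
    its (externally determined) category."""
    all_items = {**app_info, **content_info}
    if not all_items:
        return []
    top_category, _ = Counter(all_items.values()).most_common(1)[0]
    return [item for item, cat in all_items.items() if cat == top_category]

# Mirroring the example above: "A" and "C" share a category, "B" does not,
# so filter_by_majority_category({"A": "cat1", "B": "cat2"}, {"C": "cat1"})
# returns ["A", "C"] and "B" is filtered out.
```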
As an example, in the case that the application related information and/or the resource content information includes a plurality of text information, in step S210, the filtering may be performed based on the text information in the application related information and/or the resource content information, and/or the filtering may be performed on the text information in the application related information and/or the resource content information. For example, when the text information "conference open" and "breakfast" are included in the application related information, the resource content information "five cereals milling room" may be filtered based on the text information "conference open" and "breakfast" in the application related information. For another example, when the text information "meeting" and "breakfast" are included in the application related information, the text information "meeting" and "breakfast" may be filtered based on the resource content information "five cereals milling room".
In step S220, a label is set for the at least one resource based on the screening result. Here, in step S220, only the screened information may be used as the label of the at least one resource, or the screened information and other information used as the screening basis may be combined into the label of the at least one resource.
Specifically, in step S220, the screened application related information may be set as a tag of the at least one resource, or the screened resource content information may be set as a tag of the at least one resource, or a combination of the resource content information and the screened application related information may be set as a tag of the at least one resource, or a combination of the application related information and the screened resource content information may be set as a tag of the at least one resource, or a combination of the screened resource content information and the screened application related information may be set as a tag of the at least one resource.
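As a non-limiting illustration, the screening of step S210 and the tag setting of step S220 may be sketched in Kotlin as follows, assuming a simple keyword-overlap test as the screening criterion; the function name screenAndTag and its parameters are illustrative only and are not defined by the present disclosure.

// Step S210: screen the application related information against the resource
// content information (here, a piece of information passes the screening if it
// overlaps with any piece of resource content information).
// Step S220: combine the screened information into the final tag set.
fun screenAndTag(
    applicationInfo: List<String>,    // e.g. context messages, app name, hyperlink title
    resourceContentInfo: List<String> // e.g. recognized characters, metadata, attributes
): List<String> {
    val screened = applicationInfo.filter { info ->
        resourceContentInfo.any { content ->
            content.contains(info, ignoreCase = true) || info.contains(content, ignoreCase = true)
        }
    }
    return (screened + resourceContentInfo).distinct()
}

fun main() {
    // "breakfast" survives the screening because it overlaps with the content data.
    println(screenAndTag(listOf("meeting", "breakfast"), listOf("breakfast cereal")))
    // prints: [breakfast, breakfast cereal]
}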
In addition, in step S200, candidate tags for selection by the user may first be generated, and the tags of the resources may be determined from the candidate tags according to the user's instruction. Hereinafter, a flowchart of a method of setting a tag for a resource in a terminal according to an exemplary embodiment of the present disclosure will be described with reference to fig. 7.
As shown in fig. 7, in step S100, application association information related to at least one resource may be acquired. Step S100 shown in fig. 7 can be implemented by referring to the previous description of step S100 shown in fig. 4, and will not be described again.
Specifically, in step S230, a candidate tag for the at least one resource is generated based on the acquired application association information. Here, in step S230, one or more tags for the at least one resource may be generated as candidate tags in any of the manners described above.
In step S240, the candidate tag selected by the user is set as the tag of the at least one resource. Here, in step S240, the generated candidate tag may be provided to the user, and the selection of the candidate tag by the user is received, so that the candidate tag selected by the user is set as the tag of the at least one resource. As an example, a list of candidate tags may be provided to the user via a corresponding interface, and a tag of the at least one resource may be selected from the candidate tags according to an indication (e.g., a voice indication, a selection operation, etc.) of the user.
It should be noted that step S240 may be executed immediately after step S230, or may be executed after step S230 at an arbitrary interval. For example, step S230 may be performed when a resource is received, followed by step S240 when an instruction to favorite the resource is received. For another example, step S230 may be performed when the resource is saved, and then step S240 may be performed when an instruction to transmit the resource is received.
Further, step S230 and step S240 may be performed in different terminals. For example, step S230 may be performed in the first terminal for one or more resources, and thereafter, when the first terminal transmits the one or more resources to the second terminal, the candidate tag generated in the first terminal is transmitted to the second terminal as an attribute of the resource, metadata, or as data related to the resource, etc., so that step S240 can be performed in the second terminal.
In order to enable steps S230 and S240 to be performed at any interval or in different terminals, for example, the generated candidate tag may be stored in association with the at least one resource in step S230, or may be directly set as an attribute or metadata of the at least one resource, or the like.
It should be understood that the above description has been made based on the case of acquiring only the application related information, but it should be noted that the method of fig. 7 may also be applied to the case of acquiring both the application related information and the resource content information, in which case, in step S230, the candidate tag of the at least one resource is generated based on the acquired application related information and resource content information.
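For illustration only, the flow of steps S230 and S240 may be sketched in Kotlin as follows, assuming a simple in-memory resource model; the Resource class and its field names are illustrative and do not correspond to any particular platform API.

// The candidate tags are stored with the resource (step S230) so that the final
// selection (step S240) can happen later, or even on a different terminal.
data class Resource(
    val id: String,
    var candidateTags: List<String> = emptyList(),
    var tags: List<String> = emptyList()
)

// Step S230: generate candidate tags from the application association information
// (and, optionally, the resource content information) and attach them to the resource.
fun generateCandidateTags(
    resource: Resource,
    associationInfo: List<String>,
    contentInfo: List<String> = emptyList()
) {
    resource.candidateTags = (associationInfo + contentInfo).distinct()
}

// Step S240: set the candidate tags selected by the user as the tags of the resource.
fun applyUserSelection(resource: Resource, selected: List<String>) {
    resource.tags = selected.filter { it in resource.candidateTags }
}

In this sketch, generateCandidateTags may run when the resource is received or saved, and applyUserSelection may run later, when the user chooses to favorite or transmit the resource, possibly on another terminal.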
According to the above method for setting a tag for a resource in a terminal, tags can be set for resources more accurately and the user's manual input operations are reduced, thereby improving the user experience.
An example of setting a tag for a picture in a terminal based on context information when the picture appears in an application according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 8 to 9C. It should be noted that although the following detailed description takes an application with chat functionality and pictures appearing in the application as examples, the same applies to other applications and other resources with contextual information.
Fig. 8 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on context information when the picture appears in an application according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 8, a tag can be set for a resource (e.g., a picture) transmitted in an application when the resource is saved.
As shown in fig. 8, in step S810, an operation of saving a picture in a folder of a terminal is detected. For example, as shown in fig. 9A, a picture appearing in an application having a chat function (e.g., a picture of a movie poster) may be saved in a folder of the terminal. Here, the movie poster may be saved by an application other than the above-described application (for example, a picture processing application).
In step S820, it is determined whether the context messages transmitted or received before and after the picture include text information. Here, it is first determined whether the messages sent or received at the times closest to the picture each include text information. For example, as shown in fig. 9A, in step S820, it is first determined whether the messages "Which movie is good" and "This movie is good" both include text information.
When it is determined in step S820 that the messages transmitted or received before and after the picture include text information, the process proceeds to step S830. In step S830, the text information in the messages sent or received before and after the picture is acquired. For example, as shown in fig. 9A, the messages "Which movie is good" and "This movie is good" both include text information, and accordingly, in step S830, the text information in the two messages is acquired.
When it is determined in step S820 that a message transmitted or received before or after the picture does not include text information (for example, in the case where the message is an emoticon, a picture, a video, or the like), the process proceeds to step S840. In step S840, the process moves one message forward and/or backward. After that, step S820 is performed again. For example, as shown in fig. 9A, if, instead of the message "Which movie is good", only an emoticon had been transmitted, it would be determined in step S820 whether the message "There are a lot of movies recently" includes text information.
In step S850, it is determined whether the number of messages including text information reaches a predetermined value. The predetermined value can be preset in the terminal or in the application by the terminal manufacturer or the application provider, or can be preset by the user.
When it is determined in step S850 that the number of messages including text information does not reach the predetermined value, the process proceeds to step S840, in which the process moves one message forward and/or backward, and step S820 is performed again. For example, as shown in fig. 9A, suppose the predetermined value requires 2 messages including text information both before and after the picture. When the text information in the messages "Which movie is good" and "This movie is good" is acquired in step S830, it is determined in step S850 that the number of messages before and after the picture has not reached the predetermined value, so step S820 is executed again after moving one message further forward and one message further backward, and it is determined whether the messages "There are a lot of movies recently" and "What movie is this" both include text information. Here, when the number of messages does not reach the predetermined value and there are no other messages left in the application, the subsequent steps can be performed directly; alternatively, the restriction on when the messages occur may be relaxed. For example, if the messages including text information appearing after the picture cannot satisfy the predetermined value and there are no further subsequent messages, text information may be acquired from additional messages appearing before the picture so that the total number satisfies the predetermined value.
When it is determined at step S850 that the number of messages including text information has reached a predetermined value (for example, 2 related messages both before and after the picture appears), the process proceeds to step S860.
In step S860, the number of times each piece of acquired text information appears in the corresponding messages is counted. For example, when the predetermined value is 2 messages each before and after the picture, the text information in the messages "There are a lot of movies recently", "Which movie is good", "This movie is good", and "What movie is this" shown in fig. 9A is obtained in the process of repeatedly executing step S830. Accordingly, in step S860, the number of times each piece of text information appears in these messages is counted (for example, the text information "movie" appears 4 times, and the text information "good" and "this" each appear 2 times).
In step S870, regular text information is filtered out from the counted text information. For example, among the counted text information, "this" belongs to the regular text information and is therefore filtered out.
In step S880, the remaining text information is sorted according to the number of occurrences. For example, the filtered text information (with "this" removed as mentioned above) is sorted by the number of occurrences as follows: "movie" (4 times), "good" (2 times), and so on.
In step S890, a preset number of the top-ranked pieces of text information are set as tags of the picture. For example, the top-ranked text information "movie" is set as a tag of the picture. Here, the tag "movie" may, for example, be automatically added to the picture when the picture is saved in the picture processing application shown in fig. 9B. Thereafter, when the menu item for viewing the tags of the picture is selected, the tag "movie" is displayed on the screen, as shown in fig. 9C.
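Purely as an illustration, the counting, filtering, and ranking of steps S860 to S890 may be sketched in Kotlin as follows, assuming whitespace tokenization and a small illustrative stop-word list standing in for the regular text information; an implementation for Chinese text would use a word segmenter instead, and the helper name is not defined by the present disclosure.

fun tagsFromContext(messages: List<String>, topN: Int = 1): List<String> {
    val regularWords = setOf("a", "this", "the", "is", "are", "of", "to", "what", "which", "there")
    val counts = messages
        .flatMap { it.lowercase().split(Regex("\\W+")) }   // step S860: collect word occurrences
        .filter { it.isNotBlank() && it !in regularWords }  // step S870: drop regular text information
        .groupingBy { it }
        .eachCount()
    return counts.entries
        .sortedByDescending { it.value }                    // step S880: sort by frequency
        .take(topN)                                         // step S890: keep the top-ranked words
        .map { it.key }
}

fun main() {
    val context = listOf(
        "There are a lot of movies recently",
        "Which movie is good",
        "This movie is good",
        "What movie is this"
    )
    println(tagsFromContext(context)) // prints: [movie]
}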
An example of setting a tag for a picture in a terminal based on time information when the picture is transmitted in an application according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 10 to 11C. It should be noted that although the following detailed description takes an application with a chat function and pictures transmitted in the application as an example, the same is also applicable to other applications with a transmission function and other resources.
Fig. 10 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on time information when the picture is transmitted in an application according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 10, it is possible to set a tag for a resource (e.g., a picture) transmitted in an application when the resource is collected.
As shown in fig. 10, in step S1010, a user operation of collecting a picture in an application on the terminal is detected. For example, as shown in fig. 11A, a picture appearing in an application having a chat function (e.g., a picture of a movie poster) may be collected in an application on the terminal. Here, the movie poster may be collected by an application other than the above-described application (for example, a picture processing application).
In step S1020, time information of when the picture was transmitted in the application (e.g., at least one of the date, day of the week, and time of day at which the picture was transmitted/received by the application) is acquired. For example, as shown in fig. 11A, in step S1020, "2015-05-20 21:55" is acquired as the time information of when the picture was transmitted in the application.
In step S1030, candidate tags for the picture are generated based on the acquired time information. Here, the time information as a whole may be used as a candidate tag of the picture, or the date, day of the week, time, and the like may be extracted from the time information as candidate tags of the picture. For example, in step S1030, the candidate tag "2015-05-20 21:55" for the picture is generated.
In step S1040, a user operation of setting a tag for the picture is detected. For example, as shown in fig. 11B, in step S1040, when the user taps "Tap to add tag", a user operation of setting a tag for the picture may be detected.
In step S1050, the candidate tags of the picture are displayed on the screen of the terminal for selection by the user. For example, as shown in fig. 11C, in step S1050, the candidate tag "2015-05-20 21:55" of the picture is displayed on the screen of the terminal.
In step S1060, when a user operation of selecting a candidate tag is detected, the candidate tag selected by the user is set as a tag of the picture. For example, in step S1060, in the case where the user selects the candidate tag "2015-05-20 21:55", the candidate tag "2015-05-20 21:55" is set as the tag of the picture.
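As a non-limiting illustration, the generation of time-based candidate tags in steps S1020 and S1030 may be sketched in Kotlin as follows, assuming the transmission time is available as a java.time value; the formatting pattern and the helper name are illustrative only.

import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// Step S1030: derive candidate tags from the time information acquired in step S1020.
fun candidateTagsFromTime(sentAt: LocalDateTime): List<String> = listOf(
    sentAt.format(DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm")), // full timestamp, e.g. "2015-05-20 21:55"
    sentAt.toLocalDate().toString(),                                // date only, e.g. "2015-05-20"
    sentAt.dayOfWeek.toString()                                     // day of week, e.g. "WEDNESDAY"
)

fun main() {
    println(candidateTagsFromTime(LocalDateTime.of(2015, 5, 20, 21, 55)))
    // prints: [2015-05-20 21:55, 2015-05-20, WEDNESDAY]
}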
An example of setting a tag for a picture in a terminal based on sender information and receiver information when the picture is transmitted in an application according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 12 to 13C. It should be noted that although the following detailed description takes an application with a chat function and pictures transmitted in the application as an example, the same is also applicable to other applications with a transmission function and other resources.
Fig. 12 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on sender information and receiver information when the picture is transmitted in an application according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 12, it is possible to generate candidate tags for resources (e.g., pictures) transmitted in an application when the resources are transmitted, and then set final tags based on the candidate tags at a receiving side of the resources.
As shown in fig. 12, in step S1210, a user operation of sending a picture in an application on the terminal is detected. For example, as shown in fig. 13A, a picture (e.g., a picture of a movie poster) may be transmitted through an application having a chat function.
In step S1220, the sender information and receiver information of when the picture was transmitted in the application are acquired. For example, as shown in fig. 13A, in step S1220, the user nickname "Xiaoming" is acquired as the sender information of when the resource was transmitted in the application, and the user nickname "Xiaoliang" is acquired as the receiver information of when the resource was transmitted in the application.
In step S1230, candidate tags of the picture are generated based on the acquired sender information and receiver information. For example, in step S1230, the candidate tag "Xiaoming" of the picture is generated based on the sender information "Xiaoming", and the candidate tag "Xiaoliang" of the picture is generated based on the receiver information "Xiaoliang".
In step S1240, a user operation of setting a tag for the picture is detected. For example, as shown in fig. 13B, in step S1240, when the user taps "Tap to add tag", a user operation of setting a tag for the picture may be detected.
In step S1250, the candidate tags of the picture are displayed on the screen of the terminal for selection by the user. For example, as shown in fig. 13C, in step S1250, the candidate tags "Xiaoming" and "Xiaoliang" of the picture are displayed on the screen of the terminal.
In step S1260, when a user operation of selecting a candidate tag is detected, the candidate tag selected by the user is set as a tag of the picture. For example, in step S1260, in the case where the user has selected only the candidate tag "Xiaoliang", only the candidate tag "Xiaoliang" of the picture is set as the tag of the picture.
An example of setting a tag for a screen shot in a terminal based on name information of an application associated with the resource according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 14 to 15. It should be noted that, although the following detailed description takes a recommendation-class application and a screen shot obtained by a screen-capture operation while that application is displayed on the terminal as an example, the same applies to any other application and to a screen shot obtained by a screen-capture operation while the application is displayed on the terminal.
Fig. 14 is a flowchart illustrating a method of setting a tag for a screen shot in a terminal based on name information of an application associated with the resource according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 14, a tag can be set for a screen shot obtained by a screen-capture operation at the time the screen is captured.
As shown in fig. 14, in step S1410, a user operation of capturing the screen on the terminal is detected. For example, as shown in fig. 15, in a case where an interface of the application "Dianping" is displayed on the terminal, when a user operation of capturing the screen on the terminal is detected, the content displayed on the screen is acquired as a screen shot.
In step S1420, the name of the application displayed on the terminal at the time of the screen capture is acquired. For example, in step S1420, "Dianping" is acquired as the name information of the application displayed on the terminal when the screen was captured.
In step S1430, a tag is set for the screen shot based on the acquired name information of the application. For example, in step S1430, the application name information "Dianping" is set as a tag of the screen shot.
An example of setting a tag for a screen shot in a terminal based on title information of a hyperlink associated with the resource according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 16 to 17B. It should be noted that although the following detailed description takes an application with a chat function as an example, it is also applicable to other applications with a transmission function.
Fig. 16 is a flowchart illustrating a method of setting a tag for a screen shot in a terminal based on title information of a hyperlink associated with the resource according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 16, a tag can be set for a screen shot obtained by a screen-capture operation at the time the screen is captured.
As shown in fig. 16, in step S1610, a user operation of capturing the screen on the terminal is detected. For example, as shown in fig. 17A, in the case where a page displayed on the screen was obtained by accessing a hyperlink, when a user operation of capturing the screen on the terminal is detected, the content displayed on the screen is acquired as a screen shot.
In step S1620, the title information of the hyperlink accessed on the terminal when the screen was captured is acquired. For example, in step S1620, "Piano PK Tournament: Tracks Revealed!" is acquired as the title information of the hyperlink accessed on the terminal when the screen was captured.
In step S1630, a candidate tag for the screen shot is generated based on the title information of the hyperlink. For example, in step S1630, the candidate tag "Piano PK Tournament: Tracks Revealed!" of the screen shot is generated.
In step S1640, the candidate tag of the screen shot is set as metadata of the screen shot. For example, in step S1640, the candidate tag "Piano PK Tournament: Tracks Revealed!" is set as metadata of the screen shot.
In step S1650, the screen shot with the candidate tag as metadata is sent. For example, as shown in fig. 17B, in step S1650, the screen shot having the candidate tag "Piano PK Tournament: Tracks Revealed!" is sent.
In step S1660, at the recipient of the screen shot, a tag is set for the screen shot based on the candidate tag of the screen shot. For example, in step S1660, at the other terminal serving as the recipient, the candidate tag "Piano PK Tournament: Tracks Revealed!" is set as the tag of the screen shot.
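For illustration only, the way the candidate tag travels with the screen shot in steps S1630 to S1660 may be sketched in Kotlin as follows, assuming the metadata is modeled as a simple key-value map; the SharedResource class and the key names are illustrative and are not a platform API.

class SharedResource(
    val uri: String,
    val metadata: MutableMap<String, String> = mutableMapOf()
)

// Sender side (steps S1630/S1640): store the hyperlink title as a candidate tag
// in the metadata of the screen shot before it is sent (step S1650).
fun attachCandidateTag(screenShot: SharedResource, hyperlinkTitle: String) {
    screenShot.metadata["candidateTag"] = hyperlinkTitle
}

// Receiver side (step S1660): promote the received candidate tag to the final tag.
fun setTagAtReceiver(screenShot: SharedResource): String? =
    screenShot.metadata["candidateTag"]?.also { screenShot.metadata["tag"] = it }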
An example of setting a tag for a picture in a terminal based on context information of the picture when the picture appears in an application and content data of the picture according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 18 to 19D. It should be noted that although the following detailed description takes an application with chat functionality and pictures appearing in the application as examples, the same applies to other applications and other resources with contextual information.
Fig. 18 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on context information of the picture when the picture appears in an application and content data of the picture according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 18, a label can be set for a resource (e.g., a picture).
As shown in fig. 18, in step S1810, context information of when the picture appears in the application is acquired. For example, as shown in fig. 19A, "What movie did you watch recently", "I watched a movie", "The movie was not bad", and "Is this movie really good?" may be acquired as the context information of when the picture appears in the application.
In step S1820, the content data of the picture is acquired, where the characters in the picture can be recognized through character recognition. For example, as shown in fig. 19B, in step S1820, "Avengers 2" and "Age of Ultron" are acquired as the content data of the picture.
In step S1830, a combination of the acquired context information and content data is set as candidate tags of the picture. For example, in step S1830, the text information "movie", which has the highest frequency of appearance in the context information, is combined with the content data "Avengers 2" and "Age of Ultron" to generate the candidate tags "movie Avengers 2" and "movie Age of Ultron" of the picture.
In step S1840, a user operation of setting a tag for the picture is detected. For example, as shown in fig. 19C, in step S1840, when the user taps "Tap to add tag", a user operation of setting a tag for the picture may be detected.
In step S1850, the candidate tags of the picture are displayed on the screen of the terminal for selection by the user. For example, as shown in fig. 19D, in step S1850, the candidate tags "movie Avengers 2" and "movie Age of Ultron" of the picture are displayed on the screen of the terminal.
In step S1860, when a user operation of selecting a candidate tag is detected, the candidate tag selected by the user is set as a tag of the picture. For example, in step S1860, in the case where the user has selected only the candidate tag "movie Avengers 2", only the candidate tag "movie Avengers 2" of the picture is set as the tag of the picture.
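Purely as an illustration, the combination of step S1830 may be sketched in Kotlin as follows, assuming the most frequent context word and the recognized characters have already been obtained in steps S1810 and S1820; the helper name is illustrative only.

// Step S1830: prefix each piece of recognized content data with the most
// frequent word from the context information to form candidate tags.
fun combineContextAndContent(mostFrequentWord: String, recognizedTexts: List<String>): List<String> =
    recognizedTexts.map { "$mostFrequentWord $it" }

fun main() {
    println(combineContextAndContent("movie", listOf("Avengers 2", "Age of Ultron")))
    // prints: [movie Avengers 2, movie Age of Ultron]
}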
An example of setting a tag for a picture in a terminal based on context information of the picture when the picture appears in an application and content data of the picture according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 20 to 21D. It should be noted that although the following detailed description takes an application with chat functionality and pictures appearing in the application as examples, the same applies to other applications and other resources with contextual information.
Fig. 20 is a flowchart illustrating a method of setting a tag for a picture in a terminal based on context information of the picture when the picture appears in an application and content data of the picture according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 20, a label can be set for a resource (e.g., a picture).
As shown in fig. 20, in step S2010, context information of when the picture appears in the application is acquired. For example, as shown in fig. 21A, "There are so many people in the meeting", "This powder is tasty and healthy", "This powder can be made into breakfast. Why are there so many people in the meeting?", and "I eat this for breakfast every day" may be acquired as the context information of when the picture appears in the application.
In step S2020, the content data of the picture is acquired, where the characters in the picture can be recognized through character recognition. For example, as shown in fig. 21B, in step S2020, "Yiyuan Bazhen" and "Five Cereals Milling Room" are acquired as the content data of the picture.
In step S2030, the acquired context information is filtered based on the acquired content data. For example, in step S2030, the context information "There are so many people in the meeting", "This powder is tasty and healthy", "This powder can be made into breakfast. Why are there so many people in the meeting?", and "I eat this for breakfast every day" is filtered based on the content data "Yiyuan Bazhen" and "Five Cereals Milling Room". Specifically, in the context information, the text information "meeting", "powder", "breakfast", and "this" each appear 2 times, i.e., with the same frequency, and the text information "this" belongs to the regular text information and is therefore filtered out. In this case, the text information "meeting", "powder", and "breakfast" is filtered based on the content data "Yiyuan Bazhen" and "Five Cereals Milling Room"; since the category of the content data "Yiyuan Bazhen" and "Five Cereals Milling Room" is determined to be food, the non-food text information "meeting" is filtered out.
In step S2040, candidate tags for the picture are generated based on the screening result. For example, in step S2040, the content data "Yiyuan Bazhen" and "Five Cereals Milling Room" and the screened text information "powder" and "breakfast" are generated as candidate tags of the picture.
In step S2050, a user operation of setting a tag for the picture is detected. For example, as shown in fig. 21C, in step S2050, when the user taps "Tap to add tag", a user operation of setting a tag for the picture may be detected.
In step S2060, the candidate tags of the picture are displayed on the screen of the terminal for selection by the user. For example, as shown in fig. 21D, in step S2060, the candidate tags "Yiyuan Bazhen", "Five Cereals Milling Room", "powder", and "breakfast" of the picture are displayed on the screen of the terminal.
In step S2070, when a user operation of selecting a candidate tag is detected, the candidate tag selected by the user is set as a tag of the picture. For example, in step S2070, in the case where the user has selected only the candidate tags "Five Cereals Milling Room" and "breakfast", only the candidate tags "Five Cereals Milling Room" and "breakfast" of the picture are set as the tags of the picture.
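As a non-limiting illustration, the category-based screening of steps S2030 and S2040 may be sketched in Kotlin as follows, assuming a toy keyword-to-category table in place of a real classifier; the table contents and function names are illustrative only.

val categoryOf = mapOf(
    "Yiyuan Bazhen" to "food", "Five Cereals Milling Room" to "food",
    "powder" to "food", "breakfast" to "food",
    "meeting" to "work"
)

// Step S2030: keep only the context words whose category matches the dominant
// category of the content data. Step S2040: the content data plus the screened
// context words become the candidate tags.
fun candidateTagsByCategory(contextWords: List<String>, contentData: List<String>): List<String> {
    val dominantCategory = contentData.mapNotNull { categoryOf[it] }
        .groupingBy { it }.eachCount()
        .entries.maxByOrNull { it.value }?.key
        ?: return (contentData + contextWords).distinct() // no known category: skip the screening
    val screened = contextWords.filter { categoryOf[it] == dominantCategory }
    return (contentData + screened).distinct()
}

fun main() {
    println(candidateTagsByCategory(
        listOf("meeting", "powder", "breakfast"),
        listOf("Yiyuan Bazhen", "Five Cereals Milling Room")
    ))
    // prints: [Yiyuan Bazhen, Five Cereals Milling Room, powder, breakfast]
}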
An example of setting a tag for a sound recording in a terminal based on context information of when the sound recording appears in an application and content data of the sound recording according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 22 and 23. It should be noted that although the following detailed description takes an application with a chat function and a sound recording that appears in the application as an example, it is equally applicable to other applications with context information and other resources with voice information.
Fig. 22 is a flowchart illustrating a method of setting a tag for a sound recording in a terminal based on context information of the sound recording when it appears in an application and content data of the sound recording according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 22, a tag can be set for a resource (e.g., a sound recording).
As shown in fig. 22, in step S2210, context information of when the sound recording appears in the application is acquired. For example, as shown in fig. 23, "Are you there?", "Yes", "What movies have been good recently?", "There are a lot of movies recently", "Which movie is good", "This movie is good", and "What movie is this?" may be acquired as the context information of when the sound recording appears in the application.
In step S2220, the content data of the sound recording is acquired, where the content data of the sound recording can be extracted through voice recognition. For example, as shown in fig. 23, in step S2220, "Avengers 2" is acquired as the content data of the sound recording.
In step S2230, a combination of the acquired context information and the content data is set as a tag of the sound recording. For example, in step S2230, the text information "movie", which has the highest frequency of appearance in the context information, is combined with the content data "Avengers 2", and the combined "movie Avengers 2" is set as a tag of the sound recording.
An example of setting a tag for a video in a terminal based on context information of when the video appears in an application and subtitle file information of the video according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 24 and 25. It should be noted that although the following detailed description takes an application with a chat function and a video that appears in the application as an example, it is equally applicable to other applications with context information and other resources with subtitle files.
Fig. 24 is a flowchart illustrating a method of setting a tag for a video in a terminal based on context information of when the video appears in an application and subtitle file information of the video according to an exemplary embodiment of the present disclosure. According to the exemplary embodiment shown in fig. 24, a tag can be set for a resource (e.g., video).
As shown in fig. 24, in step S2410, context information of when the video appears in the application is acquired. For example, as shown in fig. 25, "Are you there?", "Yes", "What movies have been good recently?", "There are a lot of movies recently", "Which movie is good", "This movie is good", and "What movie is this?" may be acquired as the context information of when the video appears in the application.
In step S2420, the subtitle file information of the video is acquired, where subtitle file information built into the video file may be acquired, or the subtitle file information of the video may be searched for over a network. For example, in step S2420, the title "day officer" of the subtitle file of the video is acquired as the subtitle file information of the video.
In step S2430, the acquired context information, the subtitle file information, or a combination of the acquired context information and subtitle file information is set as a tag of the video. For example, in step S2430, the text information "movie", which has the highest frequency of appearance in the context information, is combined with the subtitle file information "day officer", and the combined "movie day officer" is set as a tag of the video.
An example of setting a tag for a phonographic picture in a terminal based on location information of when the phonographic picture is transmitted in an application and content data of the phonographic recording according to an exemplary embodiment of the present disclosure will be described below with reference to fig. 26 to 27B. It should be noted that although the following detailed description takes an application with a chat function and a phonographic picture appearing in the application as an example, the same applies to other applications and other resources with associated voice information.
Fig. 26 is a flowchart illustrating a method of setting a tag for a phonographic picture in a terminal based on location information of when the phonographic picture is transmitted in an application and content data of the phonographic recording according to an exemplary embodiment of the present disclosure. Here, the phonographic picture refers to a function developed by Samsung in which sound can be recorded at the same time as a photograph is taken, or within a predetermined period of time after the photograph is taken, so as to record the mood, environment, and the like of the user at the time of shooting; when the user opens the picture, the recorded sound is played. As shown in fig. 27A, the voice "Let's have lunch here at noon" is recorded as a sound recording file at the time of shooting, the recording file is stored in association with the picture, and the recording file is transmitted together with the picture when the picture is transmitted. According to the exemplary embodiment shown in fig. 26, a tag can be set for a resource (e.g., a phonographic picture).
As shown in fig. 26, in step S2610, the location information of when the phonographic picture is transmitted in the application is acquired. Here, location information that has been transmitted or shared through the same application within a period of time before or after the transmission of the phonographic picture may be used as the location information of when the phonographic picture is transmitted in the application. For example, as shown in fig. 27B, "Bobturam, Guangzhou City" may be acquired as the location information of when the phonographic picture is transmitted in the application.
In step S2620, the content data of the phonographic recording related to the phonographic picture is acquired, where the content data of the phonographic recording can be extracted through voice recognition. For example, in step S2620, "Let's have lunch here at noon" is acquired as the content data of the phonographic recording related to the phonographic picture.
In step S2630, a combination of the acquired location information and content data is set as a tag of the phonographic picture. For example, in step S2630, the location information "Bobturam, Guangzhou City" is combined with the content data "Let's have lunch here at noon", and the combination is set as a tag of the phonographic picture.
It should be noted that in the above-described method according to an exemplary embodiment, the individual steps of the method may be performed in a different order. For example, the steps mentioned in the successive blocks may be performed simultaneously, or the steps mentioned in the blocks may be performed in the reverse order.
While the present disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present disclosure as defined by the appended claims and their equivalents. The described embodiments are to be considered in all respects only as illustrative and not restrictive. Therefore, the scope of the present disclosure is defined not by the detailed description of the present disclosure but by the claims, and differences within the scope will be understood to be included in the present disclosure.

Claims (9)

1. A method of setting a label for a resource in a terminal, comprising:
(A) acquiring application association information related to at least one resource;
(B) setting a label for the at least one resource based on the acquired application association information,
wherein step (a) further comprises: resource content information of the at least one resource is obtained,
wherein, step (B) comprises: generating a candidate tag of the at least one resource based on the acquired application association information, or generating a candidate tag of the at least one resource based on the acquired application association information and resource content information, and setting a candidate tag selected by a user as a tag of the at least one resource,
wherein, in the step (B), the step of setting a label for the at least one resource based on the obtained application association information includes: setting a label for the at least one resource based on the acquired application association information and resource content information,
wherein the step of setting a label for the at least one resource based on the obtained application association information and resource content information comprises:
(b1) the obtained application related information is filtered based on the obtained resource content information, and/or the obtained resource content information is filtered based on the obtained application related information,
(b2) setting a label for the at least one resource based on the screening result,
wherein the application association information comprises at least one of: context information of the at least one resource as it appears in the application and name information of the application and/or title information of the hyperlink associated with the at least one resource,
wherein the resource content information comprises at least one of: content data of the at least one resource, metadata of the at least one resource, attributes of the at least one resource, multimedia content data related to the at least one resource.
2. The method of claim 1, wherein in step (B), a combination of the acquired application association information and resource content information is set as a tag of the at least one resource.
3. The method of claim 1, wherein in step (b1), the obtained application association information and/or resource content information is filtered according to relevance to the at least one resource.
4. The method of claim 1, wherein in step (b2), the screened application association information is set as a tag of the at least one resource; or, setting the screened resource content information as a label of the at least one resource; or, the combination of the resource content information and the screened application association information is set as a label of the at least one resource; or, the combination of the application association information and the screened resource content information is set as a label of the at least one resource; or, the combination of the screened resource content information and the screened application association information is set as the label of the at least one resource.
5. The method of claim 1, wherein the application association information includes a plurality of text messages, and in step (B), a label is set for the at least one resource based on at least one text message having a high correlation with the at least one resource among the plurality of text messages.
6. The method of claim 5, wherein in step (B), the at least one resource is tagged based on high frequency usage textual information among the plurality of textual information.
7. The method of claim 1, wherein the at least one resource is at least one of: a resource which is collected or stored in the terminal, and a resource which is collected or stored in the cloud through the terminal.
8. The method of claim 1, wherein the at least one resource comprises at least one of: pictures, screenshots, video, audio, documents, hyperlinks.
9. An apparatus for setting a label for a resource in a terminal, comprising:
an information acquisition unit configured to acquire application related information related to at least one resource;
a tag setting unit configured to set a tag for the at least one resource based on the acquired application association information,
wherein the information acquisition unit is further configured to: resource content information of the at least one resource is obtained,
wherein the tag setting unit comprises a candidate tag generation unit configured to generate a candidate tag of the at least one resource based on the acquired application association information or a candidate tag of the at least one resource based on the acquired application association information and the resource content information, and a tag determination unit configured to set a candidate tag selected by a user as the tag of the at least one resource,
wherein the tag setting unit is configured to: setting a label for the at least one resource based on the acquired application association information and resource content information,
wherein the tag setting unit further comprises an information filtering unit configured to filter the obtained application related information based on the obtained resource content information and/or filter the obtained resource content information based on the obtained application related information, and a resource tag setting unit configured to set a tag for the at least one resource based on a filtering result,
wherein the application association information comprises at least one of: context information of the at least one resource as it appears in the application and name information of the application and/or title information of the hyperlink associated with the at least one resource,
wherein the resource content information comprises at least one of: content data of the at least one resource, metadata of the at least one resource, attributes of the at least one resource, multimedia content data related to the at least one resource.
CN201510992122.8A 2015-12-23 2015-12-23 Method and equipment for setting label for resource in terminal Active CN105653154B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510992122.8A CN105653154B (en) 2015-12-23 2015-12-23 Method and equipment for setting label for resource in terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510992122.8A CN105653154B (en) 2015-12-23 2015-12-23 Method and equipment for setting label for resource in terminal

Publications (2)

Publication Number Publication Date
CN105653154A CN105653154A (en) 2016-06-08
CN105653154B true CN105653154B (en) 2020-02-28

Family

ID=56477612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510992122.8A Active CN105653154B (en) 2015-12-23 2015-12-23 Method and equipment for setting label for resource in terminal

Country Status (1)

Country Link
CN (1) CN105653154B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791442B (en) * 2017-01-20 2019-11-15 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
KR102436018B1 (en) 2018-01-23 2022-08-24 삼성전자주식회사 Electronic apparatus and control method thereof
CN110351183B (en) * 2019-06-03 2021-06-08 创新先进技术有限公司 Resource collection method and device in instant messaging

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359334A (en) * 2007-07-31 2009-02-04 Lg电子株式会社 Portable terminal and image information managing method therefor
CN102902711A (en) * 2012-08-09 2013-01-30 刘莎 Method and device for generating and applying pragmatic keyword conventional template
CN102929483A (en) * 2012-10-25 2013-02-13 东莞宇龙通信科技有限公司 Terminal and resource sharing method
CN103309925A (en) * 2012-03-13 2013-09-18 三星电子株式会社 Method and apparatus for tagging contents in a portable electronic device
CN104238951A (en) * 2013-06-21 2014-12-24 镇江新晔网络科技有限公司 Net label input prompting device
CN104965921A (en) * 2015-07-10 2015-10-07 陈包容 Information matching method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130046761A1 (en) * 2010-01-08 2013-02-21 Telefonaktiebolaget L M Ericsson (Publ) Method and Apparatus for Social Tagging of Media Files
CN102521214B (en) * 2011-11-19 2017-03-22 上海量明科技发展有限公司 Method and system for adding marker in transmitted document in instant messaging process
US20140160049A1 (en) * 2012-12-10 2014-06-12 Samsung Electronics Co., Ltd. Clipboard function control method and apparatus of electronic device
CN104035995B (en) * 2014-06-11 2018-04-06 小米科技有限责任公司 Group's label generating method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101359334A (en) * 2007-07-31 2009-02-04 Lg电子株式会社 Portable terminal and image information managing method therefor
CN103309925A (en) * 2012-03-13 2013-09-18 三星电子株式会社 Method and apparatus for tagging contents in a portable electronic device
CN102902711A (en) * 2012-08-09 2013-01-30 刘莎 Method and device for generating and applying pragmatic keyword conventional template
CN102929483A (en) * 2012-10-25 2013-02-13 东莞宇龙通信科技有限公司 Terminal and resource sharing method
CN104238951A (en) * 2013-06-21 2014-12-24 镇江新晔网络科技有限公司 Net label input prompting device
CN104965921A (en) * 2015-07-10 2015-10-07 陈包容 Information matching method

Also Published As

Publication number Publication date
CN105653154A (en) 2016-06-08

Similar Documents

Publication Publication Date Title
CN106331778B (en) Video recommendation method and device
JP6415554B2 (en) Nuisance telephone number determination method, apparatus and system
CN108370447B (en) Content processing device, content processing method thereof and server
KR101384931B1 (en) Method, apparatus or system for image processing
CN106101747B (en) A kind of barrage content processing method and application server, user terminal
US8687941B2 (en) Automatic static video summarization
WO2018214772A1 (en) Media data processing method and apparatus, and storage medium
US9805022B2 (en) Generation of topic-based language models for an app search engine
WO2016045465A1 (en) Information presentation method based on input and input method system
WO2016150083A1 (en) Information input method and apparatus
RU2740785C2 (en) Image processing method and equipment, electronic device and graphical user interface
CN108073606B (en) News recommendation method and device for news recommendation
CN102984050A (en) Method, client and system for searching voices in instant messaging
CN105653154B (en) Method and equipment for setting label for resource in terminal
CN106407358B (en) Image searching method and device and mobile terminal
KR20180068113A (en) Apparatus, method for recognizing voice and method of displaying user interface therefor
JP4894253B2 (en) Metadata generating apparatus and metadata generating method
CN104090878B (en) A kind of multimedia lookup method, terminal, server and system
WO2023116785A1 (en) Information display method and apparatus, computer device, and storage medium
WO2019128408A1 (en) Method and device for saving information
CN111767259A (en) Content sharing method and device, readable medium and electronic equipment
JPH11134365A (en) Device and method for information access
US20150302000A1 (en) A method and a technical equipment for analysing message content
CN110555202A (en) method and device for generating abstract broadcast
WO2013019777A1 (en) Contextual based communication method and user interface

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant