WO2023045702A1 - Information recommendation method and electronic device - Google Patents

Information recommendation method and electronic device

Info

Publication number
WO2023045702A1
Authority
WO
WIPO (PCT)
Prior art keywords: interface, application, displayed, response, display area
Application number: PCT/CN2022/115350
Other languages: English (en), French (fr)
Inventor: 毛璐
Original Assignee: 荣耀终端有限公司
Application filed by 荣耀终端有限公司
Publication of WO2023045702A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/953 Querying, e.g. by the use of web search engines
    • G06F 16/9535 Search customisation based on user profiles and personalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/0483 Interaction with page-structured environments, e.g. book metaphor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/451 Execution arrangements for user interfaces

Definitions

  • the present application relates to the field of terminal devices, and in particular to an information recommendation method and an electronic device.
  • in daily life, information of interest to users can be seen everywhere; such information may reside in electronic images, paper documents, and the like.
  • however, the user often cannot conveniently use this information of interest.
  • the present application provides an information recommendation method and an electronic device.
  • electronic devices can identify information of interest to users in any form, and recommend services that match the information of interest to users, so as to simplify user operations, enable users to use their information of interest conveniently, and improve user experience.
  • the embodiment of the present application provides an information recommendation method.
  • the method includes: the electronic device displays a first interface in response to a received first operation, where a target image and a first icon are displayed on the first interface, and the first icon is used to indicate that key information of a preset type is recognized in the target image; the first operation includes a screenshot operation or an operation of viewing an image in a gallery; the electronic device displays a second interface in response to a second operation on the first icon, where the target image and annotations of the key information are displayed in the second interface; the electronic device displays a third interface in response to a third operation on one of the annotations, where identifications of one or more application programs are displayed on the third interface, and the application programs are recommended based on the information type of the key information corresponding to the annotation; the electronic device displays a fourth interface in response to a fourth operation on an identification of one of the application programs, where the display interface of the application program is displayed in the fourth interface, and the content of the display interface is related to the key information corresponding to the annotation.
  • when the electronic device identifies information of interest to the user in the image, it annotates that information in the image and recommends an application program for the user based on the annotation selected by the user.
  • the electronic device then displays the interface of the application program, and the content of the interface is related to the information the user is interested in. In this way, service recommendation based on the user's information of interest in the image is realized, and the user experience is improved.
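  • purely as an illustration of the interface flow described above (not part of the patent text), the following Kotlin sketch models the four display states and the operations that move between them; all type and function names are assumptions made for the example.

```kotlin
// Illustrative model of the four-step flow: screenshot/gallery operation -> first interface
// (target image + icon) -> second interface (annotations) -> third interface (recommended
// applications) -> fourth interface (application display related to the chosen key information).
sealed class UiState {
    data class FirstInterface(val targetImage: String, val showsIcon: Boolean) : UiState()
    data class SecondInterface(val annotations: List<String>) : UiState()
    data class ThirdInterface(val recommendedApps: List<String>) : UiState()
    data class FourthInterface(val app: String, val keyInfo: String) : UiState()
}

fun onFirstOperation(image: String, keyInfoFound: Boolean): UiState =
    UiState.FirstInterface(image, showsIcon = keyInfoFound)

fun onIconTapped(annotations: List<String>): UiState = UiState.SecondInterface(annotations)

fun onAnnotationTapped(appsForInfoType: List<String>): UiState = UiState.ThirdInterface(appsForInfoType)

fun onAppTapped(app: String, keyInfo: String): UiState = UiState.FourthInterface(app, keyInfo)
```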
  • the second operation may be a click operation.
  • the third operation may be a click operation.
  • the first icon may be called an entity recognition result viewing icon.
  • in response to the second operation on the first icon, the electronic device displays the recognition result, that is, displays annotations of the key information of the preset type in the target image.
  • the key information of the preset type can be address information, code information, ID number information, a mobile phone number, English text, a courier tracking number, a website, an email address, a network disk download link, share-code information (such as a Taobao code or a Douyin code), and so on.
  • the address information can be a detailed street address, such as No. XX, XX Street, XX District, XX City, or the name of a scenic spot (such as the name of a famous cultural or natural scenic spot), or a famous building name, or any other information of interest that has a corresponding address on a map, and so on.
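  • purely as an illustration of how such preset key-information types might be detected in recognized text (the patent does not specify an implementation), here is a minimal Kotlin sketch; the type names and regular expressions are simplified assumptions.

```kotlin
// Hypothetical classifier mapping recognized text to a preset key-information type.
// The patterns are placeholders for illustration, not the patent's actual detection rules.
enum class KeyInfoType { PHONE_NUMBER, URL, EMAIL, TRACKING_NUMBER, UNKNOWN }

private val keyInfoPatterns = linkedMapOf(
    KeyInfoType.PHONE_NUMBER to Regex("""\b1[3-9]\d{9}\b"""),        // mainland mobile number format
    KeyInfoType.URL to Regex("""https?://\S+"""),
    KeyInfoType.EMAIL to Regex("""[\w.+-]+@[\w-]+\.[\w.-]+"""),
    KeyInfoType.TRACKING_NUMBER to Regex("""\b[A-Z]{2}\d{9,13}\b""")  // e.g. "SF1234567890"
)

fun classifyKeyInfo(text: String): KeyInfoType =
    keyInfoPatterns.entries.firstOrNull { (_, p) -> p.containsMatchIn(text) }?.key
        ?: KeyInfoType.UNKNOWN
```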
  • the electronic device displaying the first interface in response to the received first operation includes: the electronic device recognizes the target image corresponding to the first operation in response to the received first operation; when the recognition is completed and there is key information of the preset type in the target image, the electronic device displays the first interface.
  • when the first operation is a screenshot operation, the target image corresponding to the first operation is the screenshot image; when the first operation is an operation of viewing an image in the gallery, the target image corresponding to the first operation is the image viewed in the gallery.
  • the electronic device recognizing the target image corresponding to the first operation in response to the received first operation includes: in response to the received first operation, the electronic device recognizes the target image when the target image satisfies a real-time recognition condition. In this way, before recognizing the target image, the electronic device judges whether the real-time recognition condition is met and performs the recognition operation only when it is, thereby avoiding unnecessary image recognition operations and reducing the power consumption of the electronic device.
  • when the first operation is a screenshot operation, the target image satisfies the real-time recognition condition when the application program displayed on the interface at the time of the screenshot operation satisfies a first recognition condition and the user's screenshot sharing habit satisfies a second recognition condition; wherein the first recognition condition is used to indicate application programs that require recognition, and the second recognition condition is used to indicate user operating habits that warrant real-time recognition.
  • when judging whether the target image satisfies the real-time recognition condition, the electronic device considers two factors: the application program corresponding to the screenshot operation and the user's operating habits. Only when both aspects meet their recognition conditions is the target image determined to satisfy the real-time recognition condition, which ensures the accuracy of the judgment result.
  • the first recognition condition may be that the application program belongs to a preset application program set; that is, each application program in the preset application set is an application program that requires entity recognition.
  • the preset application set may include applications that usually involve entity recognition, such as WPS applications, PPT applications, etc., and the preset application set may not include music applications, taxi applications, and other applications that do not involve entity recognition.
  • the second recognition condition may be a set of preset user operation habits. That is, each user operation habit in the preset user operation habit set conforms to the user operation habit for real-time entity recognition.
  • the preset set of user operation habits may include the habit of opening a screenshot and then sharing it, and does not include sharing directly without opening the screenshot. If the user's screenshot sharing habit is direct sharing without opening the screenshot (such as directly sliding up to share after taking a screenshot), then the user's screenshot sharing habit does not meet the second recognition condition.
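  • a minimal sketch of this two-part check, assuming illustrative package names and a simplified representation of the user's screenshot-sharing habit (neither is specified by the patent):

```kotlin
// First recognition condition: the foreground application belongs to a preset set of
// applications that need entity recognition (package names are illustrative).
val appsNeedingRecognition = setOf("com.example.wps", "com.example.slides")

// Second recognition condition: the user's screenshot-sharing habit is to open the
// screenshot before sharing it (rather than sharing directly, e.g. by sliding up).
enum class ShareHabit { OPEN_THEN_SHARE, DIRECT_SHARE }

fun screenshotMeetsRealTimeRecognition(foregroundApp: String, habit: ShareHabit): Boolean =
    foregroundApp in appsNeedingRecognition && habit == ShareHabit.OPEN_THEN_SHARE
```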
  • when the first operation is an operation of viewing an image in the gallery, the target image satisfies the real-time recognition condition as follows: when the target image is an image captured by a camera, the target image satisfies the real-time recognition condition if the shooting attributes of the target image satisfy a third recognition condition; when the target image is a screenshot image, the target image satisfies the real-time recognition condition if the application program displayed on the interface at the time of the screenshot operation satisfies the first recognition condition; wherein the first recognition condition is used to indicate application programs that require recognition, and the third recognition condition is used to indicate camera shooting modes that require image recognition.
  • when judging whether the target image meets the real-time recognition condition, the electronic device considers the source of the image, that is, whether it comes from a screenshot or from shooting. Different recognition conditions are used for images from different sources, thereby ensuring the accuracy of the judgment result.
  • the first recognition condition may be that the application program belongs to a preset application program set; that is, each application program in the preset application set is an application program that requires entity recognition.
  • the preset application set may include applications that usually involve entity recognition, such as WPS applications, PPT applications, etc., and the preset application set may not include music applications, taxi applications, and other applications that do not involve entity recognition.
  • the third recognition condition may be that the camera shooting classification result belongs to a preset classification set (in other words, that the camera shooting mode belongs to a preset mode set).
  • each camera shooting classification in the preset classification set is a camera shooting classification that requires entity recognition.
  • the preset classification set may include classifications that generally involve entity recognition, such as documents. Classifications that do not involve entity recognition, such as landscapes and portraits, are not included in the preset classification set. Whether it is the camera shooting classification result or the camera shooting mode, it may include a classification label (or mode label) for further classification or identification.
  • the same camera shooting mode may include multiple mode tags; some mode tags indicate that there is an entity recognition requirement, while others indicate that there is not.
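  • the source-dependent check can be pictured as follows; the class names, the preset sets, and the reuse of appsNeedingRecognition from the earlier sketch are assumptions made for illustration:

```kotlin
// A gallery image is recognized in real time only if its source passes the matching
// condition: camera photos use the shooting classification (third recognition condition),
// screenshots fall back to the application check (first recognition condition).
sealed class ImageSource {
    data class CameraPhoto(val shootingClassification: String) : ImageSource()
    data class Screenshot(val sourceApp: String) : ImageSource()
}

val classificationsNeedingRecognition = setOf("document")   // e.g. documents, not landscapes or portraits

fun galleryImageMeetsRealTimeRecognition(source: ImageSource): Boolean = when (source) {
    is ImageSource.CameraPhoto -> source.shootingClassification in classificationsNeedingRecognition
    is ImageSource.Screenshot -> source.sourceApp in appsNeedingRecognition
}
```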
  • before the electronic device displays the first interface in response to the received first operation, the electronic device also displays a first camera shooting interface; in response to a received photographing operation, the electronic device stores the target image obtained by photographing in the gallery and recognizes the target image.
  • the electronic device displaying the first interface in response to the received first operation includes: the electronic device displays the first interface in response to an operation of viewing the target image in the gallery. In this way, in the photographing scenario, the electronic device also identifies information of interest to the user in the image obtained by photographing.
  • the electronic device can identify the information the user is interested in and recommend services that match it, which simplifies the user's operations, allows the user to conveniently use the information of interest, and improves the user experience.
  • the electronic device identifying the target image includes: the electronic device identifies the target image when the shooting attributes of the target image satisfy the third recognition condition; wherein the third recognition condition is used to indicate camera shooting modes that require image recognition. In this way, before recognizing the image obtained through the user's camera operation, the electronic device first judges whether the image meets the recognition condition and recognizes it only when it does, thereby avoiding unnecessary image recognition operations and reducing the power consumption of the electronic device.
  • the method further includes: when the recognition is completed and there is key information of the preset type in the target image, the electronic device displays a second camera shooting interface; wherein a second icon is also displayed in the second camera shooting interface, and the second icon is used to indicate that the target image has been recognized and key information of the preset type has been recognized in the target image.
  • when the image obtained by the user's previous photographing operation has been recognized and key information exists in it, the electronic device displays an icon in the camera shooting interface to remind the user that recognition of the captured image has been completed and that key information is present in the image and available to view. At this point, the user can view the recognition result of the image.
  • the display of the second icon prevents the user from prematurely trying to view the recognition result while recognition is still in progress and feeling lost, thereby improving the user experience.
  • the second icon may be called a picture entity recognition completed mark, which is used to remind the user that the image captured at the previous moment has been recognized and that a preset type of key information has been recognized in the image.
  • in response to a received charging operation, if there are unrecognized images in the gallery, the electronic device recognizes the unrecognized images in sequence; in response to a received charging-stop operation, if there are still unrecognized images in the gallery, the electronic device stops the operation of recognizing them.
  • while charging, the electronic device sequentially recognizes the unrecognized images in the gallery, so that the power consumption generated by image recognition does not affect the user's normal use of the electronic device.
  • for example, when the electronic device system is upgraded, there may be a large number of unrecognized images in the gallery. Compared with recognizing these images while the user is normally using the device, recognizing them while the device is charging avoids affecting the user's normal use of the electronic device.
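  • a simplified sketch of this charging-gated batch recognition, using a coroutine as one possible scheduling mechanism (the patent does not prescribe how the work is scheduled); the class and function names are illustrative:

```kotlin
import kotlinx.coroutines.*

class GalleryRecognizer(private val unrecognized: ArrayDeque<String>) {
    private var job: Job? = null

    // Charging started: work through the backlog of unrecognized gallery images in sequence.
    fun onChargingStarted(scope: CoroutineScope) {
        job = scope.launch {
            while (isActive && unrecognized.isNotEmpty()) {
                recognize(unrecognized.removeFirst())
            }
        }
    }

    // Charging stopped: cancel the job and leave the remaining images for the next charge.
    fun onChargingStopped() {
        job?.cancel()
    }

    private suspend fun recognize(imagePath: String) {
        delay(10)                        // stand-in for the actual entity-recognition work
        println("recognized $imagePath")
    }
}
```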
  • when the identification of a single application program is displayed on the third interface, the identification is displayed in the form of a floating ball; when the identifications of multiple application programs are displayed on the third interface, they are displayed in the form of a list, and the content of the key information corresponding to the annotation is also displayed in the list.
  • when the electronic device recommends application programs for the user, the identifications of the application programs are displayed in different forms depending on the number of application programs.
  • when the electronic device displays the application program identifications in the form of a list, it also displays the key information content, which helps the user confirm whether the information on which the recommendation is based is the information of interest, thereby ensuring the accuracy of the recommended service.
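  • the presentation rule can be sketched as below; the view types are stand-ins rather than a real UI framework API:

```kotlin
// One recommended application -> floating ball; several -> list that also carries the
// key-information content so the user can verify what the recommendation is based on.
sealed class RecommendationView {
    data class FloatingBall(val appId: String) : RecommendationView()
    data class AppList(val appIds: List<String>, val keyInfoContent: String) : RecommendationView()
}

fun buildRecommendationView(appIds: List<String>, keyInfoContent: String): RecommendationView =
    if (appIds.size == 1) RecommendationView.FloatingBall(appIds.single())
    else RecommendationView.AppList(appIds, keyInfoContent)
```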
  • the method may be applied to a folding screen mobile phone, where the folding screen is in an unfolded state and includes a first display area and a second display area.
  • the electronic device displaying the first interface in response to the received first operation includes: the electronic device displays the first interface in the first display area in response to the received first operation; the electronic device displaying the second interface in response to the second operation on the first icon includes: the electronic device displays the second interface in the first display area in response to the second operation on the first icon; the electronic device displaying the third interface in response to the third operation on one of the annotations includes: the electronic device displays the third interface in the first display area in response to the third operation on the annotation; the electronic device displaying the fourth interface in response to the fourth operation on the identification of one of the application programs includes: the electronic device displays the fourth interface in the second display area in response to the fourth operation on the identification of the application program; or, in response to the fourth operation on the identification of the application program, the electronic device displays the fourth interface in a half-screen card window on the first display area.
  • the electronic device displaying the first interface in response to the received first operation includes: the electronic device displays the first interface in a first floating window in response to the received first operation; the electronic device displaying the second interface in response to the second operation on the first icon includes: the electronic device displays the second interface in the first floating window in response to the second operation on the first icon; the electronic device displaying the third interface in response to the third operation on one of the annotations includes: the electronic device displays the third interface in the first floating window in response to the third operation on the annotation; the electronic device displaying the fourth interface in response to the fourth operation on the identification of one of the application programs includes: the electronic device displays the fourth interface in a second floating window in response to the fourth operation on the identification of the application program. In this way, the electronic device can display the display interface of the recommended service in different forms, thereby improving the user experience.
  • the second floating window may partially overlap with the first floating window, or may not overlap at all.
  • when the electronic device is a mobile phone with a folding screen, the folding screen is in an unfolded state and includes a first display area and a second display area.
  • when the first floating window is displayed on the first display area, the second floating window may be displayed on the first display area or on the second display area.
  • the method may be applied to a folding screen mobile phone, where the folding screen is in an unfolded state and includes a first display area and a second display area.
  • the display interface of the first application is displayed in the first display area
  • the display interface of the second application is displayed in the second display area.
  • the electronic device displaying the first interface in response to the received first operation includes: the electronic device displays the first interface in the first display area in response to the received first operation on the first application; the electronic device displaying the second interface in response to the second operation on the first icon includes: the electronic device displays the second interface in the first display area in response to the second operation on the first icon.
  • the method further includes: in response to a long-press operation and a drag operation on one of the annotations, the electronic device displays a third floating window on the first display area, and the third floating window moves toward the second display area; wherein the drag operation is directed from the first display area to the second display area, and the key information content corresponding to the annotation is displayed in the third floating window; in response to the long-press operation and the drag operation stopping, the key information content is displayed at the corresponding information editing place in the display interface of the second application.
  • when the electronic device is a folding screen mobile phone, the user can drag the information of interest identified by the electronic device in one display area to the application displayed in the other display area, which simplifies the user's operations and improves the user experience.
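  • the drag handoff between the two display areas of the unfolded screen might be modelled as follows; the event names and the editor callback are assumptions for illustration, not an Android drag-and-drop API:

```kotlin
// Long press + drag on an annotation shows a floating window carrying the key information;
// when the drag stops over the second display area, the content is handed to the editing
// place of the application displayed there.
data class DragSession(val keyInfoContent: String, var floatingWindowVisible: Boolean = false)

fun onLongPressAnnotation(keyInfoContent: String): DragSession =
    DragSession(keyInfoContent, floatingWindowVisible = true)   // third floating window appears

fun onDragStopped(session: DragSession, droppedOnSecondArea: Boolean, insertIntoEditor: (String) -> Unit) {
    session.floatingWindowVisible = false
    if (droppedOnSecondArea) insertIntoEditor(session.keyInfoContent)  // fill the second app's editing place
}
```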
  • the content of the display interface being related to the key information corresponding to one of the annotations includes: the link interface corresponding to the key information is displayed on the display interface; or, when the information type of the key information corresponding to the annotation is a character class, the content of the key information corresponding to the annotation is displayed in the information editing area of the display interface.
  • the service interface displayed by the electronic device after recommending a service adopts different display methods depending on the type of the information of interest: it can directly display and use the information of interest in the recommended service, or directly jump to the link interface matched with the information of interest. This realizes a closed loop from identification of the information of interest to service recommendation and improves the user experience.
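  • one way to picture the two display behaviours (reusing the KeyInfoType enum from the earlier sketch; the action types are illustrative assumptions):

```kotlin
// Link-like key information (e.g. a URL) opens the matching link interface directly;
// character-class key information is filled into the recommended application's
// information editing area.
sealed class DisplayAction {
    data class OpenLinkInterface(val url: String) : DisplayAction()
    data class FillEditingArea(val text: String) : DisplayAction()
}

fun displayActionFor(type: KeyInfoType, content: String): DisplayAction = when (type) {
    KeyInfoType.URL -> DisplayAction.OpenLinkInterface(content)
    else -> DisplayAction.FillEditingArea(content)   // treated as character-class information
}
```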
  • the application program is recommended according to the information type of the key information corresponding to the annotation and the default recommendation rule corresponding to the information type; or, the application program is recommended according to the information type of the key information corresponding to the annotation and the user's habits; or, the application program is recommended according to the information type of the key information corresponding to the annotation, user operations, and the user portrait.
  • when the electronic device recommends services for the user, it relies not only on the information type of the identified key information but also on default recommendation rules determined from big data, on user habits, or on perceived user operations and the user portrait, which improves the accuracy of the applications recommended by the electronic device.
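  • a minimal sketch of this layered recommendation choice, again reusing the KeyInfoType enum from the earlier sketch; the priority order and the example application names are assumptions:

```kotlin
// Prefer applications matching the user's habits, then user-portrait / recent-operation
// signals, then the default rule for the information type (e.g. derived from big data).
val defaultAppsByType = mapOf(
    KeyInfoType.TRACKING_NUMBER to listOf("courier.tracker"),
    KeyInfoType.PHONE_NUMBER to listOf("dialer", "contacts")
)

fun recommendApps(
    type: KeyInfoType,
    habitApps: List<String> = emptyList(),     // apps the user habitually opens for this type
    portraitApps: List<String> = emptyList()   // apps suggested by user operations / user portrait
): List<String> = when {
    habitApps.isNotEmpty() -> habitApps
    portraitApps.isNotEmpty() -> portraitApps
    else -> defaultAppsByType[type].orEmpty()
}
```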
  • the embodiment of the present application provides an information recommendation method.
  • the method is applied to a mobile phone with a folding screen, and the folding screen is in an unfolded state, including a first display area and a second display area.
  • the method includes: the folding screen mobile phone displays a first interface in the first display area, where the first interface includes copyable text; the folding screen mobile phone displays a second interface in the first display area in response to a copy operation received on the first interface, where the identifications of one or more application programs are displayed in the second interface, and the application programs are recommended according to the information type corresponding to the copied text; the folding screen mobile phone, in response to a click operation on the identification of one of the application programs, displays a third interface in a half-screen card window of the first display area, or displays the third interface in the second display area, where the display interface of the application program is displayed in the third interface, and the copied text is displayed at the corresponding information editing place in the display interface.
  • the folding screen mobile phone recommends applications for the user based on the text copied by the user.
  • the folding screen mobile phone displays the interface of the application program, and the copied text is displayed in the information editing place corresponding to the interface.
  • in this way, service recommendation based on the user's information of interest in the copied text is realized, and the user experience is improved.
  • the folding screen mobile phone can display the display interface of the recommendation service in different forms, thereby improving the user experience.
  • when a single application identification is displayed on the second interface, it is displayed in the form of a floating ball; when multiple application identifications are displayed on the second interface, they are displayed in the form of a list, and the content of the corresponding key information is also displayed in the list.
  • when the folding screen mobile phone recommends application programs for the user, the identifications of the application programs are displayed in different forms depending on the number of application programs.
  • when the folding screen mobile phone displays the application program identifications in the form of a list, it also displays the key information content, which helps the user confirm whether the information on which the recommendation is based is the information of interest, thereby ensuring the accuracy of the recommended service.
  • the folding screen mobile phone displaying the second interface in the first display area in response to the copy operation received on the first interface includes: the folding screen mobile phone recognizes the copied text in response to the copy operation received on the first interface; when the copied text belongs to key information of the preset type, the folding screen mobile phone recommends one or more pending applications according to the information type to which the copied text belongs; the folding screen mobile phone sends the copied text to the software development kit (SDK) of each pending application and receives confirmation information fed back by the SDK of each pending application, where the confirmation information is used to indicate whether the recommendation is correct; based on the confirmation information fed back by the SDK of each pending application, the folding screen mobile phone screens out the application programs to be displayed from the one or more pending applications, and displays the second interface in the first display area. In this way, the application programs recommended by the folding screen mobile phone are confirmed a second time by the SDKs of the corresponding applications, which ensures the accuracy of the recommendation.
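  • a sketch of this second-confirmation step, with a hypothetical SDK interface (the real SDK contract is not described in this text):

```kotlin
// Each candidate application's SDK is asked whether the recommendation is correct for the
// copied text; only confirmed candidates are kept for display on the second interface.
interface PendingAppSdk {
    fun confirmRecommendation(copiedText: String): Boolean   // true = the recommendation is correct
}

fun screenConfirmedApps(
    copiedText: String,
    pendingApps: Map<String, PendingAppSdk>   // application identification -> that application's SDK
): List<String> =
    pendingApps.filter { (_, sdk) -> sdk.confirmRecommendation(copiedText) }.keys.toList()
```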
  • the application program is recommended based on the information type corresponding to the copied text and the default recommendation rule corresponding to that information type; or, the application program is recommended based on the information type corresponding to the copied text and the user's habits; or, the application program is recommended based on the information type corresponding to the copied text, user operations, and the user portrait.
  • when the folding screen mobile phone recommends services for the user, it relies not only on the information type of the copied text but also on default recommendation rules determined from big data, on user habits, or on perceived user operations and the user portrait, which improves the accuracy of the recommended applications.
  • the embodiment of the present application provides an information recommendation method.
  • the method includes: the folding screen mobile phone displays a first interface in a first floating window, where the first interface includes copyable text; the folding screen mobile phone displays a second interface in the first floating window in response to a copy operation received on the first interface, where the identifications of one or more application programs are displayed on the second interface, and the application programs are recommended according to the information type corresponding to the copied text; the folding screen mobile phone, in response to a click operation on the identification of one of the application programs, displays a third interface in a second floating window, where the display interface of the application program is displayed in the third interface, and the copied text is displayed at the corresponding information editing place in the display interface.
  • the folding screen mobile phone recommends applications for the user based on the text copied by the user.
  • the folding screen mobile phone displays the interface of the application program, and the copied text is displayed in the information editing place corresponding to the interface.
  • in this way, service recommendation based on the user's information of interest in the copied text is realized, and the user experience is improved.
  • the folding screen mobile phone can display the display interface of the recommended service in the form of a floating window, thereby improving the user experience.
  • the second floating window may partially overlap with the first floating window, or may not overlap at all.
  • the method is applied to a mobile phone with a folding screen, and the folding screen is in an unfolded state, including a first display area and a second display area.
  • when the first floating window is displayed on the first display area, the second floating window may be displayed on the first display area or on the second display area.
  • when a single application identification is displayed on the second interface, it is displayed in the form of a floating ball; when multiple application identifications are displayed on the second interface, they are displayed in the form of a list, and the content of the corresponding key information is also displayed in the list.
  • when the folding screen mobile phone recommends application programs for the user, the identifications of the application programs are displayed in different forms depending on the number of application programs.
  • when the folding screen mobile phone displays the application program identifications in the form of a list, it also displays the key information content, which helps the user confirm whether the information on which the recommendation is based is the information of interest, thereby ensuring the accuracy of the recommended service.
  • the method is applied to a folding screen mobile phone, and the folding screen is in an unfolded state, including a first display area and a second display area.
  • the folding screen mobile phone displaying the second interface in the first floating window in response to the copy operation received on the first interface includes: the folding screen mobile phone recognizes the copied text in response to the copy operation received on the first interface; when the copied text belongs to key information of the preset type, the folding screen mobile phone recommends one or more pending applications according to the information type to which the copied text belongs; the folding screen mobile phone sends the copied text to the software development kit (SDK) of each pending application and receives confirmation information fed back by the SDK of each pending application, where the confirmation information is used to indicate whether the recommendation is correct; based on the confirmation information fed back by the SDK of each pending application, the folding screen mobile phone screens out the application programs to be displayed from the one or more pending applications, and displays the second interface in the first floating window.
  • the application program is recommended based on the information type corresponding to the copied text and the default recommendation rule corresponding to that information type; or, the application program is recommended based on the information type corresponding to the copied text and the user's habits; or, the application program is recommended based on the information type corresponding to the copied text, user operations, and the user portrait.
  • when the folding screen mobile phone recommends services for the user, it relies not only on the information type of the copied text but also on default recommendation rules determined from big data, on user habits, or on perceived user operations and the user portrait, which improves the accuracy of the recommended applications.
  • the embodiment of the present application provides an electronic device.
  • the electronic device includes: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, and when executed by the one or more processors, cause the electronic device to perform the following steps:
  • the electronic device displays a first interface in response to the received first operation; wherein, a target image and a first icon are displayed on the first interface, and the first icon is used to indicate that key information of a preset type is recognized in the target image;
  • the first operation includes a screenshot operation or an operation of viewing an image in a gallery; the electronic device displays a second interface in response to the second operation on the first icon; wherein the target image and annotations of the key information are displayed on the second interface;
  • the electronic device displays a third interface in response to a third operation on one of the annotations; wherein identifications of one or more application programs are displayed on the third interface, and the application programs are recommended based on the information type of the key information corresponding to the annotation;
  • the electronic device displays a fourth interface in response to a fourth operation on the identification of one of the application programs; wherein the display interface of the application program is displayed in the fourth interface, and the content of the display interface is related to the key information corresponding to the annotation.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: the electronic device identifies the target image corresponding to the first operation in response to the received first operation; when the recognition is completed and there is key information of the preset type in the target image, the electronic device displays the first interface.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: the electronic device recognizes the target image when the target image meets the real-time recognition condition.
  • when the application displayed on the interface at the time of the screenshot operation satisfies the first recognition condition and the user's screenshot sharing habit satisfies the second recognition condition, the target image satisfies the real-time recognition condition; wherein the first recognition condition is used to indicate application programs that require recognition, and the second recognition condition is used to indicate user operating habits that warrant real-time recognition.
  • when the first operation is an operation of viewing an image in the gallery and the target image is an image captured by a camera, the target image satisfies the real-time recognition condition if the shooting attributes of the target image satisfy the third recognition condition; when the first operation is an operation of viewing an image in the gallery and the target image is a screenshot image, the target image satisfies the real-time recognition condition if the application displayed on the interface at the time of the screenshot operation satisfies the first recognition condition; wherein the first recognition condition is used to indicate application programs that require recognition, and the third recognition condition is used to indicate camera shooting modes that require image recognition.
  • when the computer program is executed by the one or more processors, the electronic device further performs the following steps: the electronic device displays the first camera shooting interface; in response to a received photographing operation, the electronic device stores the target image obtained by photographing in the gallery and recognizes the target image; the electronic device displays the first interface in response to an operation of viewing the target image in the gallery.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: when the shooting attributes of the target image satisfy the third recognition condition, the electronic device recognizes the target image; wherein the third recognition condition is used to indicate camera shooting modes that require image recognition.
  • when the computer program is executed by the one or more processors, the electronic device further performs the following steps: when the recognition is completed and there is key information of the preset type in the target image, the electronic device displays the second camera shooting interface; wherein the second camera shooting interface also displays a second icon, and the second icon is used to indicate that the target image has been recognized and key information of the preset type has been recognized in the target image.
  • when the computer program is executed by the one or more processors, the electronic device further performs the following steps: in response to a received charging operation, if there are unrecognized images in the gallery, the electronic device recognizes the unrecognized images in sequence; in response to a received charging-stop operation, if there are still unrecognized images in the gallery, the electronic device stops the operation of recognizing them.
  • when the identification of a single application program is displayed on the third interface, the identification is displayed in the form of a floating ball; when the identifications of multiple application programs are displayed on the third interface, they are displayed in the form of a list, and the content of the key information corresponding to the annotation is also displayed in the list.
  • the electronic device is a folding screen mobile phone, and the folding screen is in an unfolded state and includes a first display area and a second display area; when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: the electronic device displays the first interface in the first display area in response to the received first operation; the electronic device displays the second interface in the first display area in response to the second operation on the first icon; the electronic device displays the third interface in the first display area in response to the third operation on one of the annotations; the electronic device displays the fourth interface in the second display area in response to the fourth operation on the identification of one of the application programs; or, in response to the fourth operation on the identification of the application program, the electronic device displays the fourth interface in a half-screen card window on the first display area.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: the electronic device displays the first interface in a first floating window in response to the received first operation; the electronic device displays the second interface in the first floating window in response to the second operation on the first icon; the electronic device displays the third interface in the first floating window in response to the third operation on one of the annotations; and the electronic device displays the fourth interface in a second floating window in response to the fourth operation on the identification of one of the application programs.
  • the electronic device is a folding screen mobile phone, and the folding screen is in an unfolded state and includes a first display area and a second display area; the display interface of the first application is displayed in the first display area, and the display interface of the second application is displayed in the second display area.
  • when the computer program is executed by the one or more processors, the electronic device is caused to perform the following steps: the electronic device displays the first interface in the first display area in response to the received first operation on the first application; in response to the second operation on the first icon, the electronic device displays the second interface in the first display area.
  • when the computer program is executed by the one or more processors, the electronic device further performs the following steps: the electronic device displays a third floating window on the first display area in response to a long-press operation and a drag operation on one of the annotations, and the third floating window moves toward the second display area; wherein the drag operation is directed from the first display area to the second display area, and the key information content corresponding to the annotation is displayed in the third floating window; in response to the long-press operation and the drag operation stopping, the electronic device displays the key information content at the corresponding information editing place in the display interface of the second application.
  • the link interface corresponding to the key information is displayed in the display interface; or, when the information type of the key information corresponding to the annotation is a character type, the content of the key information corresponding to the annotation is displayed at the corresponding information editing place in the display interface.
  • the application program is recommended according to the information type of the key information corresponding to the annotation and the default recommendation rule corresponding to the information type; or, the application program is recommended according to the information type of the key information corresponding to the annotation and the user's habits; or, the application program is recommended according to the information type of the key information corresponding to the annotation, user operations, and the user portrait.
  • the fourth aspect and any implementation manner of the fourth aspect correspond to the first aspect and any implementation manner of the first aspect respectively.
  • for the technical effects corresponding to the fourth aspect and any of its implementation manners, refer to the technical effects corresponding to the first aspect and its implementation manners; details are not repeated here.
  • the embodiment of the present application provides a folding screen mobile phone.
  • the folding screen of the folding screen mobile phone is in an unfolded state and includes a first display area and a second display area;
  • the folding screen mobile phone includes: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, and when the computer programs are executed by the one or more processors, the folding screen mobile phone performs the following steps: the folding screen mobile phone displays a first interface in the first display area, where the first interface includes copyable text; the folding screen mobile phone displays a second interface in the first display area in response to a copy operation received on the first interface, where the identifications of one or more application programs are displayed in the second interface, and the application programs are recommended according to the information type corresponding to the copied text; the folding screen mobile phone, in response to a click operation on the identification of one of the application programs, displays a third interface in a half-screen card window in the first display area, or displays the third interface in the second display area, where the display interface of the application program is displayed in the third interface, and the copied text is displayed at the corresponding information editing place in the display interface.
  • when a single application identification is displayed on the second interface, it is displayed in the form of a floating ball; when multiple application identifications are displayed on the second interface, they are displayed in the form of a list, and the content of the corresponding key information is also displayed in the list.
  • when the computer program is executed by the one or more processors, the folding screen mobile phone is caused to perform the following steps: the folding screen mobile phone recognizes the copied text in response to the received copy operation; when the copied text belongs to key information of the preset type, the folding screen mobile phone recommends one or more pending applications according to the information type to which the copied text belongs; the folding screen mobile phone sends the copied text to the software development kit (SDK) of each pending application and receives confirmation information fed back by the SDK of each pending application, where the confirmation information is used to indicate whether the recommendation is correct; based on the confirmation information fed back by the SDK of each pending application, the folding screen mobile phone screens out the application programs to be displayed from the one or more pending applications, and displays the second interface in the first display area.
  • the application program is recommended based on the information type corresponding to the copied text and the default recommendation rule corresponding to that information type; or, the application program is recommended based on the information type corresponding to the copied text and the user's habits; or, the application program is recommended based on the information type corresponding to the copied text, user operations, and the user portrait.
  • the fifth aspect and any implementation manner of the fifth aspect correspond to the second aspect and any implementation manner of the second aspect respectively.
  • for the technical effects corresponding to the fifth aspect and any of its implementation manners, refer to the technical effects corresponding to the second aspect and its implementation manners; details are not repeated here.
  • the embodiment of the present application provides a folding screen mobile phone.
  • the folding screen mobile phone includes: one or more processors; a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory, and when the computer programs are executed by the one or more processors, the folding screen mobile phone performs the following steps: the folding screen mobile phone displays a first interface in a first floating window, where the first interface includes copyable text; the folding screen mobile phone displays a second interface in the first floating window in response to a copy operation received on the first interface, where the identifications of one or more application programs are displayed on the second interface, and the application programs are recommended according to the information type corresponding to the copied text; the folding screen mobile phone, in response to a click operation on the identification of one of the application programs, displays a third interface in a second floating window, where the display interface of the application program is displayed in the third interface, and the copied text is displayed at the corresponding information editing place in the display interface.
  • when a single application identification is displayed on the second interface, it is displayed in the form of a floating ball; when multiple application identifications are displayed on the second interface, they are displayed in the form of a list, and the content of the corresponding key information is also displayed in the list.
  • the folding screen of the folding screen mobile phone is in an unfolded state, including a first display area and a second display area.
  • the folding screen mobile phone performs the following steps: the folding screen mobile phone recognizes the copied text in response to the copy operation received on the first interface; when the copied text belongs to key information of the preset type, the folding screen mobile phone recommends one or more pending applications according to the information type to which the copied text belongs; the folding screen mobile phone sends the copied text to the SDK of each pending application and receives confirmation information fed back by the SDK of each pending application, where the confirmation information is used to indicate whether the recommendation is correct; based on the confirmation information fed back by the SDK of each pending application, the folding screen mobile phone screens out the application programs to be displayed from the one or more pending applications, and displays the second interface in the first floating window.
  • the application program is recommended based on the information type corresponding to the copied text and the default recommendation rule corresponding to that information type; or, the application program is recommended based on the information type corresponding to the copied text and the user's habits; or, the application program is recommended based on the information type corresponding to the copied text, user operations, and the user portrait.
  • the sixth aspect and any implementation manner of the sixth aspect correspond to the third aspect and any implementation manner of the third aspect respectively.
  • for the technical effects corresponding to the sixth aspect and any of its implementation manners, refer to the technical effects corresponding to the third aspect and its implementation manners; details are not repeated here.
  • the embodiment of the present application provides a computer-readable storage medium.
  • the computer-readable storage medium includes a computer program; when the computer program is run on the electronic device, the electronic device is caused to execute the information recommendation method of the first aspect or any implementation manner of the first aspect.
  • the computer-readable storage medium includes a computer program; when the computer program is run on the folding screen mobile phone, the folding screen mobile phone is caused to execute the information recommendation method of the second aspect or any implementation manner of the second aspect, or to execute the information recommendation method of the third aspect or any implementation manner of the third aspect.
  • the seventh aspect and any implementation manner of the seventh aspect correspond respectively to the first aspect and any implementation manner of the first aspect, to the second aspect and any implementation manner of the second aspect, or to the third aspect and any implementation manner of the third aspect.
  • for the technical effects corresponding to the seventh aspect and any of its implementation manners, refer to the technical effects corresponding to the first aspect, the second aspect, or the third aspect and their respective implementation manners; details are not repeated here.
  • FIG. 1 is a schematic diagram of an exemplary application scenario;
  • FIG. 2 is a schematic structural diagram of an exemplary electronic device;
  • FIG. 3 is a schematic diagram of a software structure of an exemplary electronic device;
  • FIG. 4 is a schematic diagram of module interaction provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an exemplary application scenario;
  • FIG. 6 is a schematic diagram of an exemplary application scenario;
  • FIG. 7 is a schematic diagram of an exemplary application scenario;
  • FIG. 8 is a schematic diagram of module interaction provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of an exemplary application scenario;
  • FIG. 10 is a schematic diagram of module interaction provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of a judgment flow provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of module interaction provided by an embodiment of the present application;
  • FIG. 13 is a schematic diagram of an exemplary application scenario;
  • FIG. 14 is a schematic diagram of module interaction provided by an embodiment of the present application;
  • FIG. 15 is a schematic diagram of an exemplary application scenario;
  • FIG. 16 is a schematic diagram of an exemplary application scenario;
  • FIG. 17 is a schematic diagram of an exemplary window display;
  • FIG. 18 is a schematic diagram of an exemplary application scenario;
  • FIG. 19 is a schematic diagram of an exemplary application scenario;
  • FIG. 20 is a schematic diagram of an exemplary application scenario;
  • FIG. 21a to FIG. 21c are schematic diagrams of exemplary application scenarios.
  • the terms "first" and "second" in the description and claims of the embodiments of the present application are used to distinguish different objects, rather than to describe a specific order of the objects.
  • for example, "first target object" and "second target object" are used to distinguish different target objects, rather than to describe a specific order of the target objects.
  • words such as "exemplary" or "for example" are used to present examples or illustrations. Any embodiment or design scheme described as "exemplary" or "for example" in the embodiments of the present application shall not be interpreted as more preferred or more advantageous than other embodiments or design schemes. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • FIG. 1 is a schematic diagram of an exemplary application scenario.
  • when the user opens the mobile phone gallery to view pictures, referring to FIG. 1(1), there may be information of interest to the user in the picture 101, such as the text 1011 and the QR code 1012 in the picture 101.
  • when the user takes a screenshot of the display interface of the mobile phone, referring to FIG. 1(2), there may be information of interest to the user in the screenshot preview 102, such as the text 1021 in the screenshot preview 102.
  • the text 1021 that the user is interested in may be tracking number information, address information, and the like.
  • the entity 1031 captured in the camera shooting interface 103 of the mobile phone may contain information of interest to the user, which may be text information or code information. Taking the entity 1031 as an ID card as an example, the information that the user is interested in may be the ID number or the address information.
  • the text selected by the user in the browsing interface 104 may be information of interest to the user, and such text can usually be copied directly.
  • FIG. 1 uses various display interfaces of a bar-type mobile phone as examples to illustrate the application scenarios; these application scenarios are also applicable to folding screen mobile phones, tablets, and the like, which is not limited in this application.
  • FIG. 2 is a schematic structural diagram of the electronic device 100 .
  • the electronic device 100 may be a terminal, also referred to as a terminal device. The terminal may be a cellular phone with a camera (including a bar-type cellular phone and a folding screen cellular phone), a tablet computer (pad), or the like; the specific type of equipment is not limited in this application.
  • the structural diagram of the electronic device 100 may be applicable to the bar-type mobile phone shown in FIG. 1, and may also be applicable to folding screen mobile phones and tablets.
  • the electronic device 100 shown in FIG. 2 is only an example of an electronic device. The electronic device 100 may have more or fewer components than those shown in the figure, may combine two or more components, or may have a different component configuration.
  • the various components shown in Figure 2 may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the electronic device 100 may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like.
  • the sensor module 180 may include a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, and the like.
  • the processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100 , and can also be used to transmit data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through them. This interface can also be used to connect other electronic devices, such as AR devices.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the wireless communication module 160 can provide wireless communication solutions applied on the electronic device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 100 realizes the display function through the GPU, the display screen 194 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED) or the like.
  • the electronic device 100 may include 1 or N display screens 194 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • light is transmitted through the lens to the photosensitive element of the camera, where the optical signal is converted into an electrical signal; the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and the ISP converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 , so that the electronic device 100 implements the information recommendation method in this application.
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the electronic device 100 can implement audio functions through the audio module 170 , the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • the audio module 170 may also be used to encode and decode audio signals.
  • the audio module 170 may be set in the processor 110 , or some functional modules of the audio module 170 may be set in the processor 110 .
  • the pressure sensor is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • a pressure sensor may be located on the display screen 194 .
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor.
  • the electronic device 100 may also calculate the touched position according to the detection signal of the pressure sensor.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is less than the first pressure threshold acts on the short message application icon, an instruction to view short messages is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
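A minimal sketch of the threshold-based dispatch just described; the threshold value and instruction names are placeholders, not real system APIs.

```kotlin
// Assumed normalized pressure value; the real threshold is device-specific.
const val FIRST_PRESSURE_THRESHOLD = 0.5f

fun instructionForMessageIcon(pressure: Float): String =
    if (pressure < FIRST_PRESSURE_THRESHOLD) "view_short_messages" else "create_new_short_message"

fun main() {
    println(instructionForMessageIcon(0.2f))  // view_short_messages
    println(instructionForMessageIcon(0.8f))  // create_new_short_message
}
```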
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the motor 191 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the software system of the electronic device 100 may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present application takes the Android system with a layered architecture as an example to illustrate the software structure of the electronic device 100 .
  • FIG. 3 is a block diagram of the software structure of the electronic device 100 according to the embodiment of the present application.
  • the layered architecture of the electronic device 100 divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into three layers, which are application program layer, application program framework layer, and kernel layer from top to bottom.
  • the application layer can consist of a series of application packages.
  • the application package may include applications such as camera, gallery, map, browser, translation, shopping, short message, memo, entity recognition application, and calculation engine application.
  • the entity recognition application is used to recognize text information, code information, and other information, and to display the entity recognition results and the recommendation results of the associated applications corresponding to the entity recognition results;
  • the calculation engine application is used to recognize, based on the information recognition results, the entities that the user is interested in, and to recommend associated applications based on the entity recognition results.
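A sketch of this division of responsibilities expressed as two Kotlin interfaces; the interface and type names are illustrative assumptions, not APIs defined by the patent, and implementations are deliberately omitted.

```kotlin
data class Entity(val content: String, val type: String)

// The entity recognition application: recognizes text/codes in a picture and
// displays the recognition results plus the recommended associated applications.
interface EntityRecognitionApp {
    fun recognizePicture(pictureId: String): Pair<List<String>, List<String>>  // (text lines, code payloads)
    fun showRecommendations(entity: Entity, apps: List<String>)
}

// The calculation engine application: turns raw recognition output into
// entities of interest and recommends associated applications for an entity.
interface CalculationEngineApp {
    fun extractEntities(textLines: List<String>, codePayloads: List<String>): List<Entity>
    fun recommendApps(entity: Entity): List<String>
}
```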
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window manager, perception service, application operation management service, content provider, view system, phone manager, resource manager, notification manager and so on.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, etc.
  • the perception service is used to perceive the application life cycle and monitor user operations, such as copying text to the clipboard, viewing pictures, and taking screenshots.
  • the application operation management service is used to realize the operation management of each application in the application layer.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the notification manager enables an application to display notification information in the status bar, and can be used to convey notification-type messages. A notification may appear in the status bar at the top of the system in the form of a chart or scroll-bar text, or may appear on the screen in the form of a dialog window.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least display driver, camera driver, sensor driver, charging driver, etc.
  • the layers in the software structure shown in FIG. 3 and the components included in each layer do not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer layers than shown, and each layer may include more or fewer components, which is not limited in the present application.
  • the electronic device includes corresponding hardware and/or software modules for performing various functions.
  • the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software drives hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions in combination with the embodiments for each specific application, but such implementation should not be regarded as exceeding the scope of the present application.
  • An embodiment of the present application provides an information recommendation method.
  • the electronic device in the embodiment of the present application can perform entity recognition when the user triggers entity recognition, and recommend associated applications that match the entity selected by the user, so that the user can directly open the associated applications of interest.
  • entities refer to things that exist objectively and can be distinguished from each other.
  • an entity may be understood as key information, or information of interest to a user.
  • an address entity can be understood as information indicating an address.
  • the identity card number entity can be understood as information indicating the identity card number.
  • the following uses the term "entity" to explain the embodiment of the present application. It can be understood that replacing the term "entity" with "key information” or "interesting information” can also be used as an explanation of the embodiment of the present application.
  • the address involved in the address entity may be a detailed street address, such as No. XX, XX Street, XX District, XX City; a name of a scenic spot (such as the name of a famous cultural or natural scenic spot); the name of a famous building; an address with corresponding information of interest on the map; and so on.
  • the specific implementation manners of this application can also be applied to other entities that users are interested in, such as a courier tracking number entity, a website entity, an e-mail address entity, a network disk download link entity (where the download link can be used to obtain information of interest stored in the network disk), a password entity (such as a Taobao password or a Douyin password), and the like, which will not be repeated in this application.
  • one possible application scenario is that there is an entity of interest to the user in a picture on which the user performs a viewing operation or a screenshot operation; at this time, entity recognition needs to be performed on the picture. Another possible application scenario is that there is an entity of interest to the user in text on which the user performs a copy operation; at this time, entity recognition needs to be performed on the text.
  • FIG. 4 is a schematic diagram of the interaction process of each module.
  • the process of the information recommendation method provided by the embodiment of the present application specifically includes:
  • the perception service receives an entity recognition trigger operation.
  • the entity recognition trigger operation refers to an operation that can trigger the mobile phone to execute the information recommendation method. Specifically, it may be a user operation that triggers the perception service to send an entity recognition instruction, so that the entity recognition application and the computing engine application can jointly complete entity recognition and determine the associated applications to be recommended.
  • the entity recognition trigger operation may optionally be a user operation on a picture, for example, a picture viewing operation, a screenshot operation, and the like. The picture viewing operation may be an operation of viewing a picture in the gallery, or an operation of viewing a captured image in the camera shooting interface.
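A hedged sketch of how the perception service might map these trigger operations to an entity recognition instruction; the event and instruction names are assumptions chosen for readability.

```kotlin
sealed interface UserOperation {
    data class ViewPicture(val pictureId: String) : UserOperation
    data class Screenshot(val screenshotId: String) : UserOperation
    data class ViewCapturedImage(val imageId: String) : UserOperation
}

data class EntityRecognitionInstruction(val pictureToRecognize: String)

// Returns the instruction the perception service would send to the entity
// recognition application for the given trigger operation.
fun toRecognitionInstruction(op: UserOperation): EntityRecognitionInstruction = when (op) {
    is UserOperation.ViewPicture -> EntityRecognitionInstruction(op.pictureId)
    is UserOperation.Screenshot -> EntityRecognitionInstruction(op.screenshotId)
    is UserOperation.ViewCapturedImage -> EntityRecognitionInstruction(op.imageId)
}

fun main() {
    println(toRecognitionInstruction(UserOperation.Screenshot("shot_001")))
}
```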
  • the perception service sends an entity recognition instruction to the entity recognition application.
  • the entity recognition instruction can be used to instruct the entity recognition application to recognize pictures, specifically, to perform character recognition and code (such as two-dimensional code) recognition on pictures, so that the entity recognition application, in combination with the computing engine application, can jointly complete entity recognition and determine the associated applications to be recommended.
  • the picture indicated for recognition by the entity recognition instruction refers to the picture corresponding to the entity recognition trigger operation, which may be a picture opened by the user in the gallery, a captured image viewed on the camera shooting interface, a screenshot image obtained through the user's screenshot operation, and the like.
  • the entity recognition instruction includes, but is not limited to, picture information to be recognized.
  • the entity recognition application performs text recognition and code recognition on the picture, and sends the text recognition and code recognition results to the computing engine application.
  • the entity recognition application receives the entity recognition instruction, determines the picture to be recognized according to the entity recognition instruction, and performs text recognition and code recognition on the picture.
  • when the entity recognition application performs text recognition and code recognition on the picture, it may call an OCR (Optical Character Recognition) application to complete the recognition.
  • the OCR application called by the entity recognition application can be installed on the mobile phone. In this way, the entity recognition application does not need to call the OCR application from the cloud to complete the recognition, which improves data security and eliminates users' concerns about data security.
  • after the entity recognition application completes the text recognition and code recognition on the picture, it sends the text recognition result and the code recognition result to the computing engine application.
  • the text recognition result may include, but is not limited to, whether text exists and the recognized text information;
  • the code recognition result may include, but is not limited to, whether a two-dimensional code exists and the recognized code information.
  • the computing engine application performs entity recognition according to the text recognition result and the code recognition result, and sends the entity recognition result to the entity recognition application.
  • after receiving the text recognition result and the code recognition result sent by the entity recognition application, the calculation engine application performs entity recognition according to them and determines one or more entities that the user may be interested in.
  • specifically, the computing engine application acquires the preset entity types that may be of interest to the user, and identifies one or more entities included in the picture according to these entity types.
  • the preset entity types that users may be interested in include, but are not limited to: telephone entity, address entity, courier number entity, ID number entity, QR code entity, website entity, email entity, password entity, language text entity, and the like.
  • the address entity may include but not limited to country, province, city, district, street (road), number and other information.
  • a language text entity refers to a text corresponding to a language, such as an English text entity, a Korean text entity, a Chinese text entity, and the like.
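A simplified, regex-based sketch of recognizing a few of the entity types listed above; a real implementation would rely on an NLU model rather than fixed patterns, and the formats assumed here (e.g. 11-digit phone numbers) are illustrative only.

```kotlin
enum class EntityType { PHONE, TRACKING_NUMBER, EMAIL, URL }

// Toy patterns; not meant to cover real-world formats exhaustively.
val patterns = mapOf(
    EntityType.PHONE to Regex("""\b1\d{10}\b"""),
    EntityType.TRACKING_NUMBER to Regex("""\b[A-Z]{2}\d{10,13}\b"""),
    EntityType.EMAIL to Regex("""[\w.+-]+@[\w-]+\.[\w.]+"""),
    EntityType.URL to Regex("""https?://\S+""")
)

fun recognizeEntities(text: String): List<Pair<EntityType, String>> =
    patterns.flatMap { (type, regex) -> regex.findAll(text).map { type to it.value }.toList() }

fun main() {
    val text = "Courier SF1234567890 has arrived, contact 13812345678 or visit https://example.com"
    recognizeEntities(text).forEach { println(it) }
}
```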
  • the computing engine application may call an NLU (Natural Language Understanding) application to implement entity recognition.
  • the NLU application invoked by the computing engine application can be installed on the mobile phone. In this way, the computing engine application does not need to call the NLU application from the cloud to complete entity recognition, which improves data security and eliminates users' concerns about data security.
  • after the computing engine application completes entity recognition, it sends the entity recognition result to the entity recognition application.
  • the entity recognition result includes, but is not limited to: entity content, and an entity type corresponding to the entity content. It should be noted that the entity recognition result includes all entity contents recognized in the picture, and the entity type corresponding to each entity content.
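A sketch of one possible structure for the entity recognition result described above; the field names are assumptions chosen for readability, and the position field anticipates the labeling step performed later in the second layer mask.

```kotlin
data class BoundingBox(val left: Int, val top: Int, val right: Int, val bottom: Int)

data class RecognizedEntity(
    val content: String,        // the entity content, e.g. "No. XX, XX Street"
    val type: String,           // the entity type, e.g. "address"
    val position: BoundingBox?  // location in the picture/screenshot, used later for labeling
)

// The result carries all entity contents recognized in the picture.
data class EntityRecognitionResult(val entities: List<RecognizedEntity>)
```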
  • the entity recognition application adds an entity recognition result viewing icon to the first layer mask, and displays the first layer mask.
  • the entity recognition result viewing icon is used to indicate that a preset type of entity (or key information) is recognized in the image.
  • after the entity recognition application receives the entity recognition result sent by the computing engine application, and determines that entity recognition of the picture has been completed and that a preset type of entity has been recognized, it adds a first layer mask on the current display interface of the mobile phone, and then adds an entity recognition result viewing icon anywhere in the first layer mask. The entity recognition result viewing icon is available for the user to click, so that the user can view the entity recognition result of the picture. It should be noted that only the entity recognition result viewing icon is included in the first layer mask; adding the icon via a layer mask does not damage the image.
  • optionally, the entity recognition application adds the entity recognition result viewing icon at any corner of the first layer mask, such as the lower right corner or the lower left corner.
  • after the entity recognition application displays the first layer mask, the user can see the entity recognition result viewing icon on the current display interface of the mobile phone; for example, the icon can be seen on the currently viewed picture, or on the screenshot preview display interface.
  • the perception service receives an operation of clicking an entity recognition result viewing icon.
  • the user can trigger the display of the entity recognition result of the picture by clicking (such as single-clicking or double-clicking) the operation of viewing the entity recognition result icon.
  • the user may also trigger the display of the entity recognition result of the picture through other trigger operations of viewing the icon for the entity recognition result, such as a long press operation, which is not limited in this embodiment.
  • the perception service sends an entity labeling instruction to the entity recognition application.
  • the entity labeling instruction may be used to instruct the entity recognition application to perform entity labeling on the picture, and the entity labeling result may display the entity recognition situation of the picture to the user.
  • the entity recognition application marks the entity recognition result in the second layer mask, and displays the second layer mask.
  • after receiving the entity labeling instruction, the entity recognition application obtains the entity recognition result of the picture, adds a second layer mask on the current display interface of the mobile phone, and marks the entity recognition result of the picture in the second layer mask.
  • the label is available for the user to click, so that the user can select an entity that the user is actually interested in.
  • in addition to all the entity content identified in the picture and the entity type corresponding to each entity content, the entity recognition result obtained by the entity recognition application also includes the location information of each entity content in the picture or screenshot preview image, which may be coordinate information.
  • in the second layer mask, the entity recognition application marks each entity content according to its position information in the picture or screenshot preview image.
  • optionally, in the second layer mask, the entity recognition application marks each entity content according to its position information in the picture or screenshot preview image and a labeling method matching its entity type.
  • the labeling methods can be divided into two types, one is for text-type entities, such as phone number entities, address entities, etc., and the other is for code-type entities, such as two-dimensional code entities.
  • for text-type entities, the entity recognition application may use underlining, that is, draw a line under the content of the text-type entity; for code-type entities, it may add dots around the content. It should be noted that only the entity annotations of the image are included in the second layer mask; similarly, adding entity annotations through a layer mask will not cause damage to the image.
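A minimal sketch of choosing an annotation style by entity kind, as just described; the enum values and style names are illustrative only.

```kotlin
enum class EntityKind { TEXT, CODE }

enum class AnnotationStyle { UNDERLINE, DOT }

fun annotationStyleFor(kind: EntityKind): AnnotationStyle = when (kind) {
    EntityKind.TEXT -> AnnotationStyle.UNDERLINE  // e.g. phone number, address entities
    EntityKind.CODE -> AnnotationStyle.DOT        // e.g. two-dimensional code entities
}

fun main() {
    println(annotationStyleFor(EntityKind.TEXT))  // UNDERLINE
    println(annotationStyleFor(EntityKind.CODE))  // DOT
}
```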
  • after the entity recognition application finishes labeling the entity recognition results in the second layer mask, it displays the second layer mask and cancels the display of the first layer mask. At this point, the user can view one or more entity annotations of the picture on the current display interface.
  • the perception service receives an operation of clicking on the entity label.
  • the user may click (for example, click or double-click) any entity annotation to trigger the display of the recommendation service matching the entity annotation (or entity content corresponding to the entity annotation).
  • the user may also trigger the display of the recommended service matching the entity tag through other trigger operations, such as a long press operation, on the tag of the entity, which is not limited in this embodiment.
  • the perception service sends a recommended application display instruction to the computing engine application.
  • the recommended application display instruction may be used to instruct the computing engine application to recommend related applications, and to display related applications in conjunction with the entity recognition application. Wherein, the displayed related applications can be clicked by the user to realize the function opening of the related applications.
  • the recommended application display instruction includes, but is not limited to: the entity type and entity content corresponding to the entity annotation clicked by the user.
  • the calculation engine application recommends an associated application matching the clicked entity annotation, and sends the associated application information to the entity recognition application.
  • the computing engine application receives the recommended application display instruction, parses it, determines the entity type of the associated application to be recommended, and recommends the associated application (also called associated service) matching the entity type, that is, the associated application matching the clicked entity annotation.
  • taking the entity type as an address entity as an example, the associated services recommended by the computing engine application corresponding to the address entity include, but are not limited to: display on the map, get route, copy, add to address book, add to memo, share, and the like.
  • the computing engine application may set priorities for the various associated services respectively.
  • the computing engine application when it recommends associated services corresponding to entity types, it may recommend associated services corresponding to entity types according to default recommendation rules corresponding to entity types, and set priorities of various associated services.
  • the default recommendation rule may be determined in combination with big data analysis results.
  • the calculation engine application obtains the various user intentions that users have after obtaining information of a certain entity type in daily use, and the priority of each user intention; these user intentions can be determined by the cloud in combination with big data statistics.
  • taking the phone number entity type as an example, user intentions after obtaining a phone number include, but are not limited to: making a call, sending a text message, adding to the address book, copying, sharing, and the like, and the associated services corresponding to the phone number entity type can be recommended based on these user intentions.
  • assuming that among these user intentions the frequency of making a call is the highest, the priority of the "make a call" associated service can be set as the highest (see the sketch after this item).
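A hedged sketch of deriving priorities from intent frequencies, as in the phone-number example above; the frequency figures are invented for illustration.

```kotlin
// Sorts associated services so that the most frequent intent comes first.
fun prioritize(intentFrequency: Map<String, Int>): List<String> =
    intentFrequency.entries.sortedByDescending { it.value }.map { it.key }

fun main() {
    val phoneNumberIntents = mapOf(
        "make a call" to 120, "send a text message" to 45,
        "add to address book" to 30, "copy" to 20, "share" to 10
    )
    // "make a call" has the highest frequency, so it gets the highest priority.
    println(prioritize(phoneNumberIntents))
}
```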
  • when the calculation engine application recommends associated services corresponding to an entity type, if it determines that multiple applications corresponding to the same type of associated service are installed on the mobile phone, the multiple applications may be recommended at the same time.
  • for example, when the computing engine application recommends a map service, if it determines that two map applications, map A and map B, are installed on the mobile phone, the two map applications may be recommended at the same time.
  • when the computing engine application recommends associated services corresponding to an entity type, it may also recommend them in combination with the user's usage habits and set priorities for the various associated services. That is to say, the calculation engine application learns the user's habits of operating the mobile phone, and recommends the associated services corresponding to the entity type according to the learning result.
  • taking the address entity type as an example, assume that after obtaining an address, the user always queries the address or shares the address, and never adds the address to the contacts in the address book.
  • the calculation engine application learns the user's operating habits, can take querying the address and sharing the address as the recommended associated services corresponding to the address entity, and sets their priority to the highest (a sketch follows below).
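A minimal sketch of habit-based recommendation following the address example above: services the user never chooses are dropped, and the most used ones come first. The counts and service names are assumptions.

```kotlin
fun recommendByHabit(usageCount: Map<String, Int>): List<String> =
    usageCount.filterValues { it > 0 }     // services never chosen are not recommended
        .entries
        .sortedByDescending { it.value }   // most frequently chosen first
        .map { it.key }

fun main() {
    val addressServiceUsage = mapOf(
        "query address on map" to 34,
        "share address" to 18,
        "add to contacts" to 0   // never used, so it is dropped
    )
    println(recommendByHabit(addressServiceUsage))
}
```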
  • when the computing engine application recommends associated services corresponding to an entity type, it may also combine the user operations and the user portrait obtained by the perception service to recommend associated services corresponding to the entity type, and set the priorities of the various associated services.
  • the schedule information includes the meeting time and meeting details, but does not include the meeting address.
  • the perception service perceives that the meeting details screenshot saved in the schedule is checked before the meeting starts (for example, one hour ago), and the entity label corresponding to the meeting address is clicked in the entity recognition result of the meeting details screenshot.
  • the user portrait obtained by the perception service is: the user never drives, but travels by taxi or by public transportation.
  • the perception service sends the obtained user operations and user portraits to the computing engine application, and the computing engine application can combine user operations and user portraits to make service recommendations.
  • the computing engine application can determine with a high probability that the user wants to take a taxi to the meeting address, and then recommend the associated service corresponding to the meeting address as a taxi service, and recommend the meeting address as the destination address of the taxi service.
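A hedged sketch of combining a user operation (viewing the meeting address shortly before the meeting) with a user portrait (never drives, takes taxis or public transportation) to pick one associated service, following the example above. All names and the one-hour cutoff are hypothetical.

```kotlin
enum class TravelHabit { DRIVES, TAKES_TAXI_OR_TRANSIT }

data class ServiceRecommendation(val service: String, val prefilledDestination: String?)

fun recommendForMeetingAddress(
    address: String,
    minutesUntilMeeting: Int,
    habit: TravelHabit
): ServiceRecommendation =
    if (habit == TravelHabit.TAKES_TAXI_OR_TRANSIT && minutesUntilMeeting <= 60)
        ServiceRecommendation("taxi", prefilledDestination = address)  // likely about to travel there
    else
        ServiceRecommendation("show on map", prefilledDestination = address)

fun main() {
    println(recommendForMeetingAddress("XX Conference Center", 55, TravelHabit.TAKES_TAXI_OR_TRANSIT))
}
```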
  • after the calculation engine application completes the recommendation of the associated application matching the clicked entity annotation, it sends the corresponding associated application information to the entity recognition application.
  • the associated application information includes but not limited to: the name of the associated application, and the recommendation priority of the associated application.
  • the entity recognition application displays a recommended application list card or a recommended application icon.
  • the entity recognition application can display the associated application information after receiving the associated application information sent by the computing engine application.
  • the entity recognition application displays the identification of the associated application, such as an icon and a brief description of the function.
  • when there is a single recommended application, the entity recognition application can directly obtain the icon of the recommended application and display it for the user to click to start the application; when the number of recommended applications sent by the computing engine application is more than one, the entity recognition application can, after obtaining the icons of the multiple recommended applications, display the icons in the form of a list card, so that the user can select a recommendation and click to open it.
  • the entity recognition application when the entity recognition application displays the icon of the recommended application, it may also correspondingly display the name of the recommended application or a brief description of the application function (such as "open in map”, “add to memo”, etc.).
  • when the entity recognition application displays the icons of multiple recommended applications in the form of a list card, it may also display the entity content matching the entity annotation clicked by the user in one row (for example, the first row), so as to facilitate the user's confirmation of the accuracy of the entity recognition result.
  • the application operation management service sends an application opening instruction to the recommended application in response to the click operation on the recommended application.
  • when the icon of a recommended application is displayed, the user can click it to open the recommended application; when the recommended application list card displayed by the entity recognition application includes a recommended application that meets the user's intention, the user may click the icon of that recommended application to start it.
  • the perception service receives the user's click operation on the recommended application, and sends an application opening instruction for the recommended application to the application operation management service. After receiving the application opening instruction, the application operation management service sends an application opening instruction to the corresponding recommended application.
  • after receiving the application opening instruction, the recommended application executes the application opening operation, and after the launch is completed, sends an indication message that the application has been opened to the application operation management service.
  • the application operation management service sends indication information that the recommended application has been started to the entity recognition application.
  • the application operation management service receives the indication information that the recommended application has been opened, and sends the indication information to the entity recognition application.
  • the entity recognition application sends the entity content matching the clicked entity annotation to the recommendation application.
  • after the entity recognition application receives the indication information that the recommended application has been opened, it confirms that the recommended application is open, and can then send the entity content matching the clicked entity annotation, such as address information or phone number information, to the recommended application.
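A simplified sketch of the open-then-deliver handoff described in the preceding steps; the class names mirror the modules in the description, but the code itself is an illustrative assumption rather than the actual system implementation.

```kotlin
fun interface RecommendedApp {
    fun open(): Boolean  // returns true once the application has been opened
}

class AppOperationManagementService(private val apps: Map<String, RecommendedApp>) {
    fun openApp(name: String): Boolean = apps[name]?.open() ?: false
}

class EntityRecognitionAppClient(private val service: AppOperationManagementService) {
    // Only after the "opened" indication comes back is the entity content delivered.
    fun openAndDeliver(appName: String, entityContent: String, deliver: (String) -> Unit) {
        if (service.openApp(appName)) deliver(entityContent)
    }
}

fun main() {
    val service = AppOperationManagementService(mapOf("map" to RecommendedApp { true }))
    EntityRecognitionAppClient(service).openAndDeliver("map", "No. XX, XX Street") {
        println("destination filled in: $it")
    }
}
```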
  • the recommended application implements corresponding application functions according to the entity content.
  • the recommended application receives the entity content that matches the clicked entity label, and adds the entity content to the matching information editing place, and then can realize the corresponding function of the recommended application based on the entity content.
  • the recommended application realizes the corresponding function of the recommended application based on the entity content, which may be to directly realize the corresponding function, or may realize the corresponding function under the relevant operation of the user.
  • the phone receives the phone number and adds it to the dial number editor, and then directly realizes the call function based on the phone number.
  • the taxi-hailing application receives the address information and adds it to the destination editing place. At this time, the departure location of the taxi-hailing application can default to the current location; after the user clicks to confirm, the taxi-hailing function of the taxi-hailing application can be realized.
  • in this way, the mobile phone, triggered by the user's action, performs entity recognition on the picture corresponding to the action, and displays the entity recognition result viewing icon when the entity recognition is completed, so that the user can click it to view the entity recognition result of the picture. If the user clicks the annotation information of a certain entity in the picture, the mobile phone can recommend associated services that meet the user's intention for the user to choose from, thereby improving the user experience.
  • the information recommendation method provided by the embodiment of the present application will be explained below in conjunction with the application scenario shown in FIG. 5 , that is, the application scenario where the user takes a screenshot on the mobile phone.
  • the user double-clicks the mobile phone screen to take a screenshot of the waybill details interface.
  • the perception service of the mobile phone receives the user's screenshot operation, and in response to the user's operation behavior, the perception service of the mobile phone sends an entity recognition instruction to the entity recognition application, so that the entity recognition application combines with the computing engine application to complete the entity recognition operation on the screenshot image.
  • the screenshot preview interface can refer to Figure 5(2); after the entity recognition is completed, the entity recognition application displays the entity recognition result viewing icon, and the preview interface at this time can refer to Figure 5(3).
  • an entity recognition result viewing icon 501 is exemplarily displayed in the lower right corner of the display interface of the mobile phone.
  • the mobile phone's perception service receives the user's click operation, and in response to the user's operation behavior, the mobile phone's perception service sends an entity labeling instruction to the entity recognition application, so that the entity recognition application completes the entity labeling operation.
  • the entity recognition application performs the entity labeling operation based on the entity recognition result. Since the entities involved in the entity recognition result are text-type entities, they can be labeled with annotations (underlines) 502.
  • the marked entities include the courier number entity, the telephone number entity and the address entity. Assume that the entity that the user is interested in is the address entity marked with the markup 5021 , and the user can click on the markup, that is, click on the markup 5021 .
  • the perception service of the mobile phone receives the user's click operation, and in response to the user's operation behavior, sends a recommended application display instruction to the computing engine application, so that the computing engine application can combine with the entity recognition application to complete the operation of recommending related services.
  • referring to FIG. 5(5), when there are multiple recommended applications, the entity recognition application may display a recommended application list card 503 for the user.
  • in the recommended application list card 503, each row displays the icon of a recommended application and a brief description of its function, and these recommended applications can be arranged in descending order of priority.
  • optionally, the icon of the recommended application is displayed on the left, and the brief description of the application function is displayed on the right.
  • optionally, the first row 5031 may also display the entity content matching the entity annotation clicked by the user, such as the identified address information, for the user to verify whether it is the information of interest.
  • optionally, an icon is displayed on the left side of the entity content to indicate that this row shows the identified entity content.
  • the map application has been opened, and the address information filled in its destination address editing place 504 is the address in the first line 5031 of the recommended application list card 503, that is, the entity content matching the label 5021 clicked by the user .
  • the route can refer to the route in Figure 5(6).
  • the information recommendation method provided by the embodiment of the present application will be explained below in conjunction with the application scenario shown in FIG. 6 , that is, the application scenario in which the user views pictures in a gallery.
  • the user clicks on a picture 601 in the mobile phone gallery to view the picture.
  • the mobile phone's perception service receives the user's picture viewing operation, and in response to the user's operation behavior, the mobile phone's perception service sends an entity recognition command to the entity recognition application, so that the entity recognition application combines the computing engine application to complete the entity recognition operation for the user to view the picture .
  • the entity recognition application displays the entity recognition result viewing icon, and at this time, the picture display interface can refer to FIG. 6(2).
  • the entity recognition result viewing icon 602 is exemplarily displayed in the lower right corner of the display interface of the mobile phone. As shown in FIG. 6( 2 ), the user clicks the entity recognition result viewing icon 602 .
  • the mobile phone's perception service receives the user's click operation, and in response to the user's operation behavior, the mobile phone's perception service sends an entity labeling instruction to the entity recognition application, so that the entity recognition application completes the entity labeling operation.
  • the entity recognition application performs the entity labeling operation according to the entity recognition result. Since the entity involved in the entity recognition result is a code-type entity, annotations (such as dots) 603 can be used to label the entity involved in the entity recognition result (that is, the QR code).
  • the entity recognition application may display a recommended application list card 604 for the user.
  • each row displays an icon of a recommended application and a brief description of application functions, and these recommended applications can be arranged in descending order of priority.
  • the entity recognition result of the two-dimensional code can be displayed in the first line 6041 of the recommended application list card 604, such as link information.
  • in the first row of the recommended application list card, an icon is displayed on the left side of the entity recognition result to indicate that this row shows the entity recognition result.
  • the recommended application list card 604 includes a recommended application that meets the user's intention, such as "open in APP"
  • the user can click the APP icon 6042 in the corresponding row of the recommended application list card 604 to open the recognized two-dimensional code in the APP; the recognition result of the two-dimensional code can refer to FIG. 6(5), for example.
  • the information recommendation method provided by the embodiment of the present application will be explained below in conjunction with the application scenario shown in FIG. 7 , that is, the application scenario in which the user takes pictures on the mobile phone.
  • the user clicks the camera icon 701 on the camera interface of the mobile phone to complete the camera operation.
  • when the user wants to check the pictures taken, the user can click the picture viewing icon 702 in the picture-taking interface, as shown in FIG. 7(2).
  • the mobile phone's perception service receives the user's operation of viewing the pictures taken.
  • the mobile phone's perception service sends an entity recognition command to the entity recognition application, so that the entity recognition application combines the calculation engine application to complete the entity recognition of the captured pictures.
  • the entity recognition application displays the entity recognition result viewing icon.
  • the viewing interface of the captured pictures can refer to Figure 7 (3).
  • the entity recognition result viewing icon 703 is exemplarily displayed in the lower right corner of the display interface of the mobile phone.
  • the mobile phone's perception service receives the user's click operation, and in response to the user's operation behavior, the mobile phone's perception service sends an entity labeling instruction to the entity recognition application, so that the entity recognition application completes the entity labeling operation.
  • the entity recognition application performs entity labeling operations based on the entity recognition results.
  • The entities involved in the entity recognition result can be labeled with labels (such as underlines) 704.
  • the marked entities include the identity card number entity and the address entity.
  • If the entity that the user is interested in is the identity card number entity marked with the mark 7041, the user can click on the mark, that is, click on the mark 7041.
  • The perception service of the mobile phone receives the user's click operation, and in response to the user's operation behavior, sends a recommended application display instruction to the computing engine application, so that the computing engine application, together with the entity recognition application, completes the operation of recommending related services.
  • the entity recognition application may display a recommended application list card 705 for the user.
  • In the recommended application list card 705, each row displays an icon of a recommended application and a brief description of the application's functions, and these recommended applications can be arranged in descending order of priority.
  • The first line 7051 can also display the entity content that matches the entity label clicked by the user, such as the identified ID card number, so that the user can verify whether it is the information of interest.
  • An icon is displayed on the left side of the entity recognition result to indicate that this row is the entity recognition result.
  • The user can click the icon 7052 of the recommended application to add the ID card number in the first line 7051 of the recommended application list card 705 to the memo; refer to FIG. 7(6).
  • The memo has been opened, and the ID card number filled in the memo editing interface is the ID card number displayed in the first line 7051 of the recommended application list card 705, that is, the entity content matching the mark 7041 clicked by the user.
  • If the user clicks the brief description of application functions corresponding to the icon of a recommended application in the recommended application list card, the same effect as clicking the icon of the recommended application can be achieved.
  • the information recommendation method provided by the embodiment of the present application will be explained in conjunction with the application scenario shown in FIG. 7 , that is, the application scenario where the user takes pictures on the mobile phone.
  • the perception service of the mobile phone receives the user's camera operation, and in response to the user's operation behavior, the perception service of the mobile phone sends an entity recognition command to the entity recognition application, so that the entity recognition application combines with the computing engine application to complete the entity recognition operation of the captured pictures.
  • If the user wants to check the picture taken, he can click the picture viewing icon 702 in the photographing interface as shown in FIG. 7(2).
  • the entity recognition application will display the entity recognition result viewing icon.
  • the viewing interface of the captured pictures can refer to Figure 7 (3).
  • the entity recognition result viewing icon 703 is exemplarily displayed in the lower right corner of the display interface of the mobile phone.
  • the mobile phone's perception service receives the user's click operation, and in response to the user's operation behavior, the mobile phone's perception service sends an entity labeling instruction to the entity recognition application, so that the entity recognition application completes the entity labeling operation.
  • For FIG. 7(4) to FIG. 7(6), reference may be made to the foregoing, and details will not be repeated here.
  • By means of the picture entity recognition completion mark, the embodiment of the present application can remind the user that the entity recognition of the picture just taken has been completed. This embodiment can avoid the situation in which the user clicks to view the picture too early and cannot yet view the entity recognition result of the picture, thereby improving the user experience.
  • the perception service receives a camera operation.
  • the perception service sends an entity recognition instruction to the entity recognition application.
  • When the perception service determines that the picture satisfies the entity recognition condition, it sends the entity recognition instruction to the entity recognition application.
  • the entity identification condition may optionally be the third identification condition described below, which will not be repeated here.
  • the entity recognition application performs text recognition and code recognition on the picture, and sends the text recognition and code recognition results to the computing engine application.
  • the computing engine application performs entity recognition according to the text recognition result and the code recognition result, and sends the entity recognition result to the entity recognition application.
  • the entity recognition application adds an entity recognition result viewing icon to the first layer mask, and displays the first layer mask.
  • the entity recognition application adds a picture entity recognition completion mark in the photographing interface.
  • the picture entity recognition completion mark is used to indicate that the image taken by the user at the previous moment has been recognized, and a preset type of entity (or key information) has been recognized in the image.
  • the entity recognition application can add a picture entity recognition completion mark in the photographing interface (it can be at any position), so as to prompt the user to complete the entity recognition of the picture that has just been taken.
  • the entity recognition application may add a picture entity recognition completion mark 707 at the taken picture viewing icon 702 in the photographing interface.
  • For the entity recognition result viewing icon 703 in FIG. 7(3), please refer to the above-mentioned relevant explanations, and details will not be repeated here.
  • the process part after S806 can refer to S406-S417 shown in FIG. 4 , and the relevant explanation of this embodiment can be referred to the foregoing embodiment, which will not be repeated here.
  • FIG. 10 is a schematic diagram of the interaction process of each module. Referring to FIG. 10, the flow of the information recommendation method provided by the embodiment of the present application specifically includes:
  • the perception service receives an entity recognition trigger operation.
  • the perception service sends an entity recognition instruction to the entity recognition application when determining that the picture satisfies the entity recognition condition.
  • When the perception service receives the entity recognition trigger operation, it no longer directly sends the entity recognition instruction to the entity recognition application; instead, it judges whether the picture corresponding to the entity recognition trigger operation satisfies the entity recognition condition. If the condition is satisfied, the entity recognition instruction is sent to the entity recognition application; otherwise, the entity recognition instruction is not sent to the entity recognition application.
  • the types of user entity recognition trigger operations are different, and the corresponding entity recognition conditions may be different.
  • the entity recognition trigger operation can be divided into two types, one is a screenshot operation, and the other is an image viewing operation.
  • The entity recognition conditions related to the screenshot operation may include but are not limited to: a first recognition condition related to the application information corresponding to the screenshot operation, and a second recognition condition related to the user's habit of sharing screenshots.
  • The entity recognition conditions related to the image viewing operation may include but are not limited to: a third recognition condition related to the camera shooting classification result (or camera shooting mode), and the first recognition condition related to the application information corresponding to the screenshot operation.
  • the first identification condition may be that the application program belongs to a preset application program set.
  • the first identification condition is used to indicate an application program that requires entity identification. That is, each application program in the preset application set is an application program that requires entity identification.
  • The preset application set may include applications that usually involve entity recognition, such as WPS applications and PPT applications, and may not include music applications, taxi-hailing applications, and other applications that do not involve entity recognition.
  • the second recognition condition may be a set of preset user operation habits.
  • the second recognition condition is used to indicate compliance with the user's operating habits for real-time entity recognition. That is to say, each user's operating habit in the preset user's operating habit set is in line with the user's operating habit for real-time entity recognition.
  • the preset set of user operation habits may include the user's screenshot sharing habit of opening a screenshot and then sharing it. The preset set of user operating habits does not include direct sharing without opening screenshots. If the user's screenshot sharing habit is direct sharing without opening the screenshot (such as directly sliding up to share after taking a screenshot), then the user's screenshot sharing habit does not meet the second identification condition.
  • the third identification condition may be that the camera shooting classification result belongs to a preset classification set (or it is called that the camera shooting mode belongs to a preset mode set).
  • the third recognition condition is used to indicate a camera shooting category or a camera shooting mode that requires entity recognition.
  • each camera shooting classification in the preset classification set is a camera shooting classification that requires entity recognition.
  • the preset classification set may include classifications that generally involve entity recognition, such as documents. Classifications that do not involve entity recognition, such as landscapes and portraits, are not included in the preset classification set. Whether it is the camera shooting classification result or the camera shooting mode, it may include a classification label (or mode label) for further classification or identification.
  • the same camera shooting mode may include multiple mode tags, some mode tags indicate that there is an entity recognition requirement, and some mode tags indicate that there is no entity recognition requirement.
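  • As an illustration only, the first, second, and third recognition conditions described above can be thought of as simple predicates over the screenshot application, the user's sharing habit, and the camera shooting attributes. The following Kotlin sketch is a hypothetical model; the type names, the preset sets, and the habit and category values are assumptions and are not taken from the embodiment.

```kotlin
// Hypothetical sketch of the first, second, and third recognition conditions.
// All type names and preset contents are assumptions for illustration only.

enum class ShareHabit { OPEN_THEN_SHARE, SHARE_WITHOUT_OPENING }

data class ShotAttributes(val category: String, val labels: Set<String>)

object RecognitionConditions {
    // First condition: the application shown on screen when the screenshot was taken
    // must belong to a preset set of applications that usually involve entity recognition.
    private val appsNeedingRecognition = setOf("com.example.wps", "com.example.ppt")

    // Second condition: the user's screenshot-sharing habit must be one for which
    // real-time recognition makes sense (opening the screenshot before sharing).
    private val habitsNeedingRecognition = setOf(ShareHabit.OPEN_THEN_SHARE)

    // Third condition: the camera shooting classification (or one of its mode labels)
    // must belong to a preset set of categories that usually contain entities.
    private val categoriesNeedingRecognition = setOf("document")

    fun firstCondition(appPackage: String): Boolean =
        appPackage in appsNeedingRecognition

    fun secondCondition(habit: ShareHabit): Boolean =
        habit in habitsNeedingRecognition

    fun thirdCondition(attrs: ShotAttributes): Boolean =
        attrs.category in categoriesNeedingRecognition ||
            attrs.labels.any { it in categoriesNeedingRecognition }
}
```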
  • FIG. 11 is a schematic diagram of the judgment process of the perception service.
  • the judgment process of the perception service for whether the picture meets the entity recognition conditions specifically includes:
  • The perception service receives the entity recognition trigger operation and determines the operation type. If the operation type is a screenshot operation, the perception service executes S1102; if the operation type is a picture viewing operation, the perception service executes S1105.
  • the perception service acquires application information corresponding to the screenshot operation.
  • the application displayed on the display interface of the mobile phone is the application corresponding to the screenshot operation.
  • the application information includes but is not limited to an application name.
  • the perception service judges whether the application information satisfies the entity identification condition, if so, execute S1104, otherwise, execute S1108.
  • the entity identification condition may refer to the above-mentioned first identification condition.
  • the perception service judges whether the application information satisfies the above-mentioned first recognition condition, and if so, judges other entity recognition conditions; otherwise, determines that the picture does not meet the entity recognition condition.
  • the perception service judges whether the user's screenshot sharing habit satisfies the trigger recognition condition of the screenshot operation, if so, executes S1109, otherwise executes S1101.
  • the screenshot operation trigger recognition condition may refer to the above-mentioned second recognition condition.
  • the perception service acquires the user's screenshot sharing habit, and judges whether the user's screenshot sharing habit satisfies the above-mentioned second identification condition. If yes, the perception service determines that the picture satisfies the entity recognition condition, and then can send the entity recognition instruction to the entity recognition application. If not, the perception service determines that the picture does not meet the entity recognition condition. At this time, the perception service needs to sense whether the user's next operation is to view the screenshot, and re-execute the process of judging whether the picture meets the entity recognition condition.
  • The perception service determines the source of the picture. If the picture was taken by a camera, the perception service performs S1106; if the picture is a screenshot of an application, the perception service performs S1102.
  • When a user performs an image viewing operation, the perception service needs to determine the source of the image.
  • the perception service may determine the image source in the attribute information of the image.
  • the perception service obtains the camera shooting classification result.
  • When the perception service determines that the source of the picture is a camera shot, it needs to further obtain the camera shooting classification result of the picture.
  • the perception service can determine the camera classification result of the picture in the attribute information of the picture.
  • the perception service judges whether the classification result of the camera shooting satisfies the entity recognition condition, if so, execute S1109, otherwise, execute S1108.
  • the entity identification condition may refer to the above-mentioned third identification condition.
  • the perception service obtains the classification result of the camera shooting of the picture, and judges whether the classification result of the camera shooting satisfies the above-mentioned third identification condition. If yes, the perception service determines that the picture satisfies the entity recognition condition, and then can send the entity recognition instruction to the entity recognition application. If not, the perception service determines that the picture does not meet the entity recognition condition.
  • The perception service obtains the camera shooting mode of the picture, and judges whether the picture meets the entity recognition condition by judging whether the camera shooting mode meets the entity recognition condition.
  • the entity identification condition may refer to the above-mentioned third identification condition.
  • the perception service acquires the shooting mode of the picture, and judges whether the shooting mode satisfies the corresponding third identification condition. If yes, the perception service determines that the picture satisfies the entity recognition condition, and then can send the entity recognition instruction to the entity recognition application. If not, the perception service determines that the picture does not meet the entity recognition condition.
  • The attribute information of the pictures captured by the camera and stored in the gallery may include various identifiers, such as the camera shooting classification result (or camera shooting mode) and one or more classification labels (or mode labels), etc. Furthermore, when the perception service judges whether a picture satisfies the entity recognition condition, it can do so by judging whether the various identifiers included in the picture's attribute information satisfy the entity recognition condition.
  • the perception service determines that the picture does not meet the entity recognition condition.
  • the perception service determines that the picture satisfies the entity recognition condition.
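  • For illustration, the judgment flow of FIG. 11 (S1101 to S1109) can be summarized as a single decision function. The sketch below reuses the hypothetical RecognitionConditions, ShareHabit, and ShotAttributes types from the previous sketch; the PictureInfo fields are likewise illustrative assumptions rather than the actual data model.

```kotlin
// Hypothetical sketch of the judgment flow in FIG. 11 (S1101 to S1109).
// PictureInfo and its fields are illustrative assumptions.

sealed class TriggerOperation {
    data class Screenshot(val foregroundApp: String) : TriggerOperation()
    data class ViewPicture(val picture: PictureInfo) : TriggerOperation()
}

data class PictureInfo(
    val source: Source,                        // camera shot or application screenshot
    val sourceApp: String? = null,             // app on screen when the screenshot was taken
    val shotAttributes: ShotAttributes? = null // camera shooting classification / mode labels
) {
    enum class Source { CAMERA, SCREENSHOT }
}

fun meetsEntityRecognitionCondition(op: TriggerOperation, habit: ShareHabit): Boolean =
    when (op) {
        // Screenshot operation: check the first condition (S1102, S1103)
        // and then the second condition (S1104).
        is TriggerOperation.Screenshot ->
            RecognitionConditions.firstCondition(op.foregroundApp) &&
                RecognitionConditions.secondCondition(habit)

        // Picture viewing operation: first determine the picture source (S1105).
        is TriggerOperation.ViewPicture -> when (op.picture.source) {
            // Camera shot: check the third condition (S1106, S1107).
            PictureInfo.Source.CAMERA ->
                op.picture.shotAttributes?.let(RecognitionConditions::thirdCondition) ?: false
            // Application screenshot: go back through the screenshot branch (S1102 to S1104).
            PictureInfo.Source.SCREENSHOT ->
                (op.picture.sourceApp?.let(RecognitionConditions::firstCondition) ?: false) &&
                    RecognitionConditions.secondCondition(habit)
        }
    }
```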
  • the entity recognition application performs text recognition and code recognition on the picture, and sends the text recognition and code recognition results to the computing engine application.
  • the computing engine application performs entity recognition according to the text recognition result and the code recognition result, and sends the entity recognition result to the entity recognition application.
  • the entity recognition application adds an entity recognition result viewing icon to the first layer mask, and displays the first layer mask.
  • the flow after S1005 can refer to S406-S417 shown in FIG. 4 , and the related explanation of this embodiment can be referred to the foregoing embodiment, which will not be repeated here.
  • FIG. 12 is a schematic diagram of the interaction process of each module.
  • the process of image entity recognition provided by the embodiment of the present application specifically includes:
  • the perception service receives a charging operation.
  • the charging operation may be the operation of connecting the mobile phone to the mains for charging through a charger, or the operation of connecting the mobile phone to an electronic device (such as a power bank or other terminals) through a data cable for charging.
  • the perception service sends an entity recognition instruction to the entity recognition application.
  • the perception service sends an entity recognition instruction to the entity recognition application in response to the user's charging operation.
  • the entity recognition instruction may be used to instruct the entity recognition application to perform batch image entity recognition.
  • the entity recognition application sequentially performs image text recognition and code recognition on the unrecognized pictures in the gallery, and sends the text recognition and code recognition results to the computing engine application.
  • the computing engine application performs entity recognition according to the text recognition result and the code recognition result, and sends the entity recognition result to the entity recognition application.
  • the entity recognition application adds an entity recognition result viewing icon to the first layer mask, and displays the first layer mask.
  • the perception service receives the stop charging operation.
  • the perception service sends an entity recognition stop instruction to the entity recognition application.
  • the perception service sends an entity recognition stop instruction to the entity recognition application in response to the user's stop charging operation.
  • the entity recognition stop instruction may be used to instruct the entity recognition application to stop performing picture entity recognition.
  • the entity-unrecognized pictures refer to pictures that have not been subjected to entity recognition, excluding pictures that cannot be subject to entity recognition or whose entity recognition results are empty.
  • The perception service can determine the pictures that cannot be subjected to entity recognition based on the first recognition condition related to the application information corresponding to the screenshot operation and the third recognition condition related to the camera shooting classification result, and label these pictures.
  • The entity recognition application may also identify pictures whose entity recognition results are empty, according to whether the entity recognition result sent by the computing engine application is empty.
  • Based on the identifiers of the pictures in the gallery, the entity recognition application sequentially obtains an entity-unrecognized picture and, together with the computing engine application, completes the entity recognition of that picture, until the entity recognition of all the entity-unrecognized pictures in the gallery is completed, or until the mobile phone stops charging.
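  • For illustration, the charging-triggered batch recognition described above can be sketched as a cancellable loop that processes one entity-unrecognized picture at a time and stops when charging stops. The Gallery and Picture types and the recognition callback below are hypothetical placeholders for the gallery identifiers and the entity recognition application / computing engine application interaction.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.Job
import kotlinx.coroutines.isActive
import kotlinx.coroutines.launch

// Hypothetical sketch of charging-triggered batch entity recognition (FIG. 12).
// Gallery, Picture, and the recognition callback are illustrative placeholders.

interface Gallery { fun nextUnrecognizedPicture(): Picture? }
data class Picture(val id: Long)

class BatchEntityRecognizer(
    private val gallery: Gallery,
    // Stands in for text/code recognition plus entity recognition of one picture.
    private val recognizeOne: suspend (Picture) -> Unit
) {
    private var job: Job? = null

    // Called when the perception service reports a charging operation.
    fun onChargingStarted(scope: CoroutineScope) {
        if (job?.isActive == true) return
        job = scope.launch {
            // Process entity-unrecognized pictures one by one until none remain
            // or the job is cancelled because charging stopped.
            while (isActive) {
                val picture = gallery.nextUnrecognizedPicture() ?: break
                recognizeOne(picture)
            }
        }
    }

    // Called when the perception service reports that charging has stopped.
    fun onChargingStopped() {
        job?.cancel()
        job = null
    }
}
```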
  • FIG. 13(1) exemplarily shows an entity-unrecognized picture in the mobile phone gallery (there is an identifiable entity in the picture, but no entity recognition result viewing icon is displayed on the display interface). After the charging operation shown in FIG. 13(2), the change of the display interface of the entity-unrecognized picture can refer to FIG. 13(3).
  • an entity recognition result viewing icon 1301 is displayed on the picture display interface, which means that the entity recognition of the picture has been completed and the user can view the entity recognition result.
  • the flow of the information recommendation method at this time can continue to refer to the previous embodiment, and will not be repeated here.
  • The process of entity recognition can be parallelized. In this way, this embodiment combines the real-time mode and the non-real-time mode of image entity recognition to effectively control the power consumption of the mobile phone.
  • FIG. 14 is a schematic diagram of the interaction process of each module.
  • the process of the information recommendation method provided by the embodiment of the present application specifically includes:
  • the perception service receives an operation of copying text to a clipboard.
  • the perception service sends an entity recognition instruction to the entity recognition application.
  • the entity recognition instruction may be used to instruct the entity recognition application to perform entity recognition on the text copied by the user.
  • the text copied by the user may include one or more characters.
  • the entity recognition application sends the copied text to the calculation engine application.
  • the entity recognition application does not need to repeatedly perform text recognition on the text copied by the user, but can directly obtain the text copied to the clipboard by the user, and send it to the calculation engine application for entity recognition.
  • the computing engine application performs entity recognition according to the received text, and determines a recommended application corresponding to the entity recognition result.
  • The calculation engine application performs entity recognition based on the received text and judges whether the received text contains only one kind of entity, such as only an address entity or only a phone number entity. If so, the associated application is recommended according to the entity type, and the entity recognition result and the associated application matching the entity recognition result are sent to the entity recognition application. If the calculation engine application recognizes that the received text contains multiple kinds of entities, such as address entities and phone number entities, the calculation engine application may not perform the recommendation of an associated application, and may also send an entity recognition result indicating "unable to recognize a single entity" to the entity recognition application.
  • the computing engine application may recognize the text as an entity to be translated, and then recommend a translation application.
  • the calculation engine application initially recommends an associated application corresponding to the entity recognition result.
  • the computing engine application sends instruction information to the recommended application SDK.
  • the indication information is used to instruct the recommended application SDK to determine whether the copied text corresponds to the application.
  • The indication information includes but is not limited to the copied text.
  • the computing engine application sends instruction information to the SDK of each recommended application, so that each recommended application SDK separately confirms the recommended application.
  • the recommended application SDK performs semantic analysis on the received copied text to determine whether it is the text corresponding to the application. If yes, the recommended application SDK confirms that this application is a recommended application.
  • the recommended application SDK confirms the associated application initially recommended by the calculation engine application, which further ensures the accuracy of the recommended application.
  • the recommended application SDK sends instruction information to the computing engine application.
  • the indication information is used to indicate to the computing engine application whether the application is confirmed as a recommended application.
  • the indication information includes but is not limited to a confirmation identifier and a negative identifier.
  • Each recommended application SDK that receives the indication information sent by the calculation engine application will feed back indication information to the calculation engine application, to indicate to the calculation engine application whether this application is a recommended application.
  • the calculation engine application sends the entity recognition result and the recommended application information to the entity recognition application according to the instruction information sent by the recommended application SDK.
  • If the indication information sent by the recommended application SDK indicates that the application is a recommended application, the calculation engine application will finally confirm the application as a recommended application; if the indication information sent by the recommended application SDK indicates that it is not a recommended application, the calculation engine application will no longer recommend the application.
  • the calculation engine application generates the final recommended application information according to the instruction information sent by each recommended application SDK, and sends it to the entity recognition application.
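  • For illustration, the double confirmation described above (preliminary recommendation by entity type, followed by confirmation through each recommended application's SDK) can be sketched as follows. The entity types, the SDK interface, and the candidate registry are assumptions for the sketch only.

```kotlin
// Hypothetical sketch of the clipboard recommendation flow with SDK confirmation
// (FIG. 14). Entity types, the SDK interface, and the candidate registry are assumptions.

enum class EntityType { ADDRESS, PHONE_NUMBER, TO_BE_TRANSLATED, MULTIPLE, UNKNOWN }

// Stand-in for the per-application SDK that double-checks a preliminary recommendation.
fun interface RecommendedAppSdk {
    fun confirms(copiedText: String): Boolean // semantic check on the copied text
}

data class CandidateApp(val name: String, val sdk: RecommendedAppSdk)

class ComputingEngine(
    private val classify: (String) -> EntityType,                 // entity recognition on the text
    private val candidatesFor: (EntityType) -> List<CandidateApp> // preliminary recommendation
) {
    // Returns the finally confirmed recommended applications, or an empty list when
    // no single entity can be recognized in the copied text.
    fun recommendFor(copiedText: String): List<CandidateApp> {
        val type = classify(copiedText)
        if (type == EntityType.MULTIPLE || type == EntityType.UNKNOWN) return emptyList()

        // Preliminary recommendation by entity type, then confirmation by each SDK.
        return candidatesFor(type).filter { it.sdk.confirms(copiedText) }
    }
}

fun main() {
    val engine = ComputingEngine(
        classify = { text -> if (text.all { it.isDigit() }) EntityType.PHONE_NUMBER else EntityType.UNKNOWN },
        candidatesFor = { listOf(CandidateApp("Messages", RecommendedAppSdk { text -> text.length == 11 })) }
    )
    // Prints the confirmed candidate list, here containing the "Messages" entry.
    println(engine.recommendFor("13800138000"))
}
```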
  • the entity recognition application displays a recommended application list card or a recommended application icon.
  • When the number of recommended applications sent by the computing engine application is one, the entity recognition application can directly obtain the icon of the recommended application and display the icon for the user to click to start the application; when the number of recommended applications sent by the computing engine application is more than one, the entity recognition application can display the icons in the form of a list card after obtaining the icons of the multiple recommended applications, so that the user can select a recommendation and click to open it.
  • Since entity recognition is triggered by the user's operation of copying text to the clipboard, the calculation engine application does not need to repeatedly recommend the copy service.
  • Accordingly, the copy service will not be included in the recommended application list card displayed by the entity recognition application, thereby avoiding the problem of repeatedly recommended services.
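  • For illustration, the display-form choice (a floating ball for a single recommended application, a list card for several) together with the copy-service filtering in the clipboard scenario could be sketched as follows; the names are hypothetical.

```kotlin
// Hypothetical sketch of how the entity recognition application could choose the
// display form for recommended applications and drop the copy service when the
// recommendation was triggered from the clipboard. Names are illustrative only.

sealed class RecommendationUi {
    data class FloatingBall(val app: String) : RecommendationUi()
    data class ListCard(val entityContent: String, val apps: List<String>) : RecommendationUi()
    object None : RecommendationUi()
}

fun buildRecommendationUi(
    entityContent: String,
    recommendedApps: List<String>,
    triggeredFromClipboard: Boolean
): RecommendationUi {
    // The copy service would duplicate the operation that triggered recognition,
    // so it is filtered out in the clipboard scenario.
    val apps = if (triggeredFromClipboard)
        recommendedApps.filterNot { it.equals("copy", ignoreCase = true) }
    else recommendedApps

    return when {
        apps.isEmpty() -> RecommendationUi.None
        apps.size == 1 -> RecommendationUi.FloatingBall(apps.single())  // e.g. icon 1502
        else -> RecommendationUi.ListCard(entityContent, apps)          // e.g. card 1602
    }
}
```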
  • the application operation management service sends an application opening instruction to the recommended application in response to the click operation on the recommended application.
  • the application operation management service sends indication information that the recommended application has been started to the entity recognition application.
  • The entity recognition application sends the identified entity content to the recommended application.
  • the recommended application implements corresponding application functions according to the entity content.
  • an icon 1501 including a "copy” option and a “search” option appears in the browsing interface of the mobile phone.
  • the perception service of the mobile phone receives the user's operation of copying text to the clipboard.
  • The perception service of the mobile phone sends an entity recognition instruction to the entity recognition application, so that the entity recognition application, together with the calculation engine application, completes the entity recognition operation on the text copied by the user and the operation of recommending related services.
  • the computing engine application recognizes that the text copied by the user belongs to the entity to be translated, so a translation application is recommended.
  • The entity recognition application displays a translation application icon 1502 for the user to click to start the translation application.
  • the translation application icon 1502 is displayed in the form of a floating ball. If the translation application meets the user's intention, the user can click on the translation application icon 1502 to translate the text copied by the user.
  • The translation application is opened in the window 1503, the text copied by the user is displayed in the original-text editing area, and the translation from the original text to the translated text is realized.
  • an icon 1601 including a "copy” option and a “search” option appears in the browsing interface of the mobile phone.
  • the perception service of the mobile phone receives the user's operation of copying text to the clipboard.
  • The perception service of the mobile phone sends an entity recognition instruction to the entity recognition application, so that the entity recognition application, together with the calculation engine application, completes the entity recognition operation on the text copied by the user and the operation of recommending related services.
  • the computing engine application recognizes that the text copied by the user belongs to the phone number entity, so it recommends multiple recommended applications that match the phone number entity.
  • the entity recognition application may display a recommended application list card 1602 for the user.
  • In the recommended application list card 1602, each row displays an icon of a recommended application and a brief description of the application's functions, and these recommended applications can be arranged in descending order of priority.
  • The first line 16021 can also display the entity content that matches the text copied by the user, such as the identified phone number, so that the user can verify whether it is the information of interest.
  • An icon is displayed on the left side of the entity recognition result to indicate that this row is the entity recognition result.
  • Although the copy service is also a recommended service that matches the phone number entity, the copy service does not need to be repeatedly recommended in this application scenario, so the recommended application list card 1602 does not include the copy service.
  • If the recommended application list card 1602 includes a recommended application that meets the user's intention, such as "send a message", the user can click the icon 16022 of the recommended application to send a message to the copied phone number; see FIG. 16(3).
  • The information application has been opened, and the telephone number filled in the addressee's telephone number editing section 16031 in the information application display interface 1603 is the telephone number displayed in the first row 16021 of the recommended application list card 1602, that is, the text copied by the user.
  • Triggered by the user's action of copying text to the clipboard, the mobile phone performs entity recognition on the copied text and recommends, based on the entity recognition result, related services that meet the user's intention for the user to choose from, thereby improving the user experience.
  • a foldable screen mobile phone may also be used to implement the flow of the information recommendation method, which will not be repeated here.
  • display interfaces of different recommendation services may be displayed in different display areas.
  • FIG. 17 is a schematic diagram of a recommended application display area.
  • When the foldable mobile phone is in the folded state (or bar state), the recommended application can be opened directly in the display area 1701; reference may be made to the examples in the foregoing embodiments, and details will not be repeated here.
  • When the folding screen mobile phone is in the unfolded state, the recommended application can be opened directly in the display area 1702, which is similar to the display in FIG. 17(1) except for the size of the display area, and will not be repeated here.
  • When the folding screen mobile phone is in the unfolded state, different applications can be displayed in the left and right display areas of the folding screen mobile phone, and the display interface of the recommended application can be displayed in the left display area or the right display area.
  • When the original application (the application for which the user performs the entity recognition trigger operation) is displayed in one display area, the recommended application can be opened in the display area 1704, so that the user can view the two applications at the same time.
  • Referring to the application scenario shown in FIG. 18, when the folding screen mobile phone is in the unfolded state, different applications are displayed in the left and right display areas of the folding screen: the browsing application interface is displayed in the display area 1801, and the chat application interface is displayed in the display area 1802.
  • the user takes a screenshot in the browsing application, triggers entity recognition on the screenshot, and recommends services based on the entity selected by the user.
  • the user selects the map service "Open in Map" to open the map application.
  • the screenshot interface of the browsing application continues to be displayed in the display area 1801
  • the map application can be displayed in the display area 1802 as a recommended application.
  • the left and right display areas of the folding screen mobile phone can display different applications, and the recommended application can also be opened in the half-screen card window.
  • When the recommended application is not an independent APP, it may be opened in a half-screen card window. Referring to FIG. 17(4), for example, the interface of the original application 1 (the application for which the user performs the entity recognition trigger operation) is displayed in the display area 1705, and the interface of the recommended application can be displayed in the window (half-screen card window) 1706 on the display area 1705.
  • In other cases, the recommended application can also be opened in the window (half-screen card window) 1706, which is not limited here.
  • Referring to FIG. 19, when the folding screen mobile phone is in the unfolded state, different applications are displayed in the left and right display areas of the folding screen.
  • The interface of the browsing application is displayed in the display area 1901, and the interface of the memo application is displayed in the display area 1902.
  • the user copies the text to the clipboard in the browsing application, triggers the entity recognition of the copied text, and recommends services.
  • the user selects the recommended translation application and starts the translation application.
  • As shown in FIG. 19(2), the browsing application interface continues to be displayed in the display area 1901, the memo application interface continues to be displayed in the display area 1902, and the translation application, as a recommended application, can be displayed in the window 1903.
  • When the folding screen mobile phone is in the unfolded state and different applications are displayed in the left and right display areas of the folding screen, recommended applications can also be opened in a floating window.
  • When the interface of the original application 3 (the application for which the user performs the entity recognition trigger operation) is displayed in the floating window 1707, the recommended application can be opened in the newly created floating window 1708.
  • the floating window 1707 and the floating window 1708 can be suspended in the left and right display areas of the folding screen mobile phone respectively, as shown in FIG. 17(5).
  • the floating window 1708 and the floating window 1707 can also be suspended in the left display area (or right display area) of the folding screen mobile phone at the same time, and the floating window 1708 partially covers the floating window 1707 .
  • Referring to the application scenario shown in FIG. 20, when the folding screen mobile phone is in the unfolded state, different applications are displayed in the left and right display areas and a floating window of the folding screen.
  • the browsing application interface is displayed in the display area 2001
  • the memo application interface is displayed in the area 2002
  • the gallery application interface is displayed in the floating window 2003 .
  • the floating window 2003 is floating on the display area 2001 .
  • The user views a picture in the gallery, which triggers entity recognition of the picture and service recommendation, and the user chooses the recommended "open in APP".
  • the browsing application interface continues to be displayed in the display area 2001
  • the memo application interface continues to be displayed in the display area 2002
  • the gallery application interface continues to be displayed in the floating window 2003
  • A certain APP, as a recommended application, is displayed in the newly created floating window 2004.
  • The floating window 2004 floats on the display area 2002.
  • the floating window 2004 can also be suspended on the display area 2001 together with the floating window 2003 .
  • the floating window 2004 can also be suspended on the floating window 2003 to cover part of the floating window 2003 .
  • the size of the floating window can be adjusted, and the embodiment of the present application does not limit the size of the floating window.
  • The user initiates an entity recognition trigger operation in an application displayed in one of the display areas, triggering the entity recognition application and the computing engine application to complete image entity recognition.
  • the user can long press and drag the entity label, and the entity content corresponding to the entity label will move along with it in the interface.
  • The user can directly drag the entity content to the application displayed in the other display area.
  • FIG. 21a to FIG. 21c are schematic diagrams of an application scenario, and describe in detail the process in which the user moves the entity content by long-pressing and dragging the entity label.
  • the left and right display areas of the folding screen respectively display a gallery interface and a memo interface.
  • the user views the picture in the gallery, triggers the entity recognition of the picture, and displays the entity labeling result as shown in Figure 21.
  • As shown in FIG. 21a and FIG. 21b, when the user long-presses and drags the entity label 2101, the entity content 2102 corresponding to the entity label 2101 moves along with the movement of the user's finger. If the user long-presses the entity label 2101 and drags it to an editable position in the memo interface, the entity content 2102 corresponding to the entity label 2101 will be directly displayed at the editable position.
  • In one of the display areas, the user long-presses and drags the entity label, and the entity content corresponding to the entity label moves along with it in the interface, so the user can directly drag the entity content to the application displayed in the other display area.
  • the user operation is simple and convenient, which improves the user experience.
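  • For illustration, such entity-content dragging could be wired up with the standard Android drag-and-drop APIs, assuming the entity label is an ordinary TextView and the editable position is an EditText; the function name and the surrounding layout are hypothetical, and the embodiment does not prescribe this particular implementation.

```kotlin
import android.content.ClipData
import android.view.DragEvent
import android.view.View
import android.widget.EditText
import android.widget.TextView

// Hypothetical sketch: dragging the entity content from an entity label in one
// display area into an editable field of the app shown in the other display area.
// Assumes API 24+ for View.startDragAndDrop.
fun wireEntityDrag(entityLabel: TextView, entityContent: String, memoField: EditText) {
    // Long-pressing the label starts a drag whose payload is the entity content.
    entityLabel.setOnLongClickListener { view ->
        val data = ClipData.newPlainText("entity", entityContent)
        view.startDragAndDrop(data, View.DragShadowBuilder(view), null, 0)
        true
    }

    // Dropping on the editable position inserts the entity content directly.
    memoField.setOnDragListener { _, event ->
        when (event.action) {
            DragEvent.ACTION_DROP -> {
                memoField.append(event.clipData.getItemAt(0).text)
                true
            }
            else -> true
        }
    }
}
```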
  • This embodiment also provides a computer storage medium, in which computer instructions are stored, and when the computer instructions are run on the electronic device, the electronic device is made to execute the above related method steps to implement the information recommendation method in the above embodiment.
  • This embodiment also provides a computer program product, which, when running on a computer, causes the computer to execute the above-mentioned related steps, so as to realize the information recommendation method in the above-mentioned embodiment.
  • An embodiment of the present application also provides a device, which may specifically be a chip, a component, or a module; the device may include a processor and a memory that are connected, where the memory is used to store computer-executable instructions, and when the device is running, the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the information recommendation method in the above method embodiments.
  • The electronic device (such as a folding screen mobile phone), computer storage medium, computer program product, or chip provided in this embodiment is used to execute the corresponding method provided above. Therefore, for the beneficial effects that can be achieved, reference may be made to the beneficial effects of the corresponding method provided above, and details will not be repeated here.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • Multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.

Abstract

This application provides an information recommendation method and an electronic device. The method includes: when the electronic device recognizes a preset type of key information in an image, it displays a first icon to prompt the user; when the user clicks the first icon, the electronic device displays labels of the key information in the image and recommends applications to the user based on the label selected by the user; when the user selects one of the applications, the electronic device displays the interface of that application, and the content of the interface is related to the key information that the user is interested in. In this way, the method realizes service recommendation based on the key information in the image and improves the user experience.

Description

Information recommendation method and electronic device
This application claims priority to Chinese Patent Application No. 202111123937.4, filed with the China National Intellectual Property Administration on September 24, 2021 and entitled "Information Recommendation Method and Electronic Device", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of terminal devices, and in particular to an information recommendation method and an electronic device.
Background
In daily life, information of interest to users can be found everywhere. For example, such information of interest may exist in electronic images, paper documents, and the like. However, users cannot conveniently make use of this information of interest.
Summary
In order to solve the above technical problem, this application provides an information recommendation method and an electronic device. In the method, the electronic device can recognize the user's information of interest existing in any form and recommend to the user services matching the information of interest, thereby simplifying the user's operations, enabling the user to conveniently use the information of interest, and improving the user experience.
第一方面,本申请实施例提供一种信息推荐方法。该方法包括:电子设备响应于接收到的第一操作,显示第一界面;其中,在第一界面中显示目标图像以及第一图标,第一图标用于指示在目标图像中识别到预设类型的关键信息;第一操作包括:截图操作,在图库中查看图像的操作;电子设备响应于对第一图标的第二操作,显示第二界面;其中,在第二界面中显示目标图像,以及对关键信息的标注;电子设备响应于对其中一个标注的第三操作,显示第三界面;其中,在第三界面中显示一个或多个应用程序的标识,应用程序是根据与其中一个标注对应的关键信息的信息类型推荐的;电子设备响应于对其中一个应用程序的标识的第四操作,显示第四界面;其中,在第四界面中显示其中一个应用程序的显示界面,显示界面的内容与其中一个标注对应的关键信息相关。这样,电子设备在对图像中存在的用户感兴趣信息进行识别后,将图像中存在的用户感兴趣信息进行标注,并基于用户选择的标注为用户推荐应用程序。当用户选择某个应用程序时,电子设备显示该应用程序的界面,且该界面的内容与用户感兴趣信息是相关的。由此实现了基于图像中用户感兴趣信息的服务推荐,提升了用户的使用体验。
示例性的,第二操作可以是点击操作。
示例性的,第三操作可以是点击操作。
示例性的,第一图标可以称之为实体识别结果查看图标。当用户点击第一图标时,电子设备显示图标的识别结果,也即显示图标中预设类型的关键信息的标注。
示例性的,预设类型的关键信息可以是地址信息,码信息,身份证号信息,手机号 码,英文文本,快递单号,网址,电子邮箱地址,网盘下载链接,口令(如淘口令,抖音口令)信息等。例如,地址信息可以是详细的街道地址,如XX市XX区XX街道XX号,也可以是景点名称(如著名的人文景点或自然景点的名称),也可以是著名的建筑名称,还可以是地图上有对应的感兴趣信息的地址,等等。
根据第一方面,电子设备响应于接收到的第一操作,显示第一界面,包括:电子设备响应于接收到的第一操作,对与第一操作对应的目标图像进行识别;在识别完成且目标图像中存在预设类型的关键信息时,显示第一界面。
示例性的,当第一操作为截图操作时,与第一操作对应的目标图像即为截图图像;当第一操作为在图库中查看图像的操作时,与第一操作对应的目标图像即为在图库中查看的图像。
根据第一方面,或者以上第一方面的任意一种实现方式,电子设备响应于接收到的第一操作,对与第一操作对应的目标图像进行识别,包括:电子设备响应于接收到的第一操作,在目标图像满足实时识别条件时,对目标图像进行识别。这样,电子设备在对目标图像进行识别前,进行了是否满足实时识别条件的判断,只有在目标图像满足实时识别条件时,才执行对目标图像的识别操作,以此避免了非必要的图像识别操作,降低了电子设备的功耗。
根据第一方面,或者以上第一方面的任意一种实现方式,第一操作为截图操作时,目标图像满足实时识别条件,包括:执行截图操作时界面显示的应用程序满足第一识别条件,且用户截图分享习惯满足第二识别条件时,目标图像满足实时识别条件;其中,第一识别条件用于指示存在识别需求的应用程序,第二识别条件用于指示符合实时进行识别的用户操作习惯。在截图的应用场景下,电子设备判断目标图像是否满足实时识别条件,考虑了两方面的因素,一个是截图操作所对应的应用程序,另一个是用户操作习惯。只有这两方面均满足识别条件时,才确定目标图像满足实时识别条件,以此保证了电子设备判断目标图像是否满足实时识别条件的结果的准确性。
示例性的,第一识别条件可以为应用程序属于预设的应用程序集合内。也即,预设的应用集合内的各个应用程序为存在实体识别需求的应用程序。例如,预设应用集合中可以包括通常涉及实体识别的应用程序,如WPS应用程序、PPT应用程序等,预设应用集合中可以不包括音乐应用程序,打车应用程序等不涉及实体识别的应用程序。
示例性的,第二识别条件可以为预设的用户操作习惯集合。也即,预设的用户操作习惯集合内的各个用户操作习惯为符合实时进行实体识别的用户操作习惯。示例性的,预设的用户操作习惯集合可以包括用户截图分享习惯为打开截图后再分享。预设的用户操作习惯集合不包括未打开截图直接分享。若用户截图分享习惯为未打开截图直接分享(如截图后直接上滑分享),则该用户截图分享习惯不满足第二识别条件。
根据第一方面,或者以上第一方面的任意一种实现方式,第一操作为在图库中查看 图像的操作时,目标图像满足实时识别条件,包括:在目标图像为相机拍摄图像的情况下,目标图像的拍摄属性满足第三识别条件时,目标图像满足实时识别条件;在目标图像为截图图像的情况下,执行截图操作时界面显示的应用程序满足第一识别条件时,目标图像满足实时识别条件;其中,第一识别条件用于指示存在识别需求的应用程序,第三识别条件用于指示存在图像识别需求的相机拍摄模式。在图库中查看图像的应用场景下,电子设备判断目标图像是否满足实时识别条件,考虑了图像来源,是来自截图还是来自拍摄。针对来源不同的图像,分别采用不同的识别条件来判断目标图像满足实时识别条件,以此保证了电子设备判断目标图像是否满足实时识别条件的结果的准确性。
示例性的,第一识别条件可以为应用程序属于预设的应用程序集合内。也即,预设的应用集合内的各个应用程序为存在实体识别需求的应用程序。例如,预设应用集合中可以包括通常涉及实体识别的应用程序,如WPS应用程序、PPT应用程序等,预设应用集合中可以不包括音乐应用程序,打车应用程序等不涉及实体识别的应用程序。
示例性的,第三识别条件可以为相机拍摄分类结果属于预设的分类集合内(或称之为相机拍摄模式属于预设的模式集合内)。其中,预设的分类集合内的各个相机拍摄分类为存在实体识别需求的相机拍摄分类。例如,预设分类集合中可以包括通常涉及实体识别的分类,如文档等。预设分类集合中不包括风景、人像等不涉及实体识别的分类。无论是相机拍摄分类结果,还是相机拍摄模式,都可以包括用于进一步分类或标识的分类标签(或称模式标签)。同一种相机拍摄模式,可以包括多种模式标签,有的模式标签指示存在实体识别需求,有的模式标签指示不存在实体识别需求。
根据第一方面,或者以上第一方面的任意一种实现方式,电子设备在响应于接收到的第一操作,显示第一界面之前,还显示第一相机拍摄界面;响应于接收到的拍照操作,将拍照获取到的目标图像存储于图库中,并对目标图像进行识别。电子设备在识别完成且目标图像中存在预设类型的关键信息时,响应于接收到的第一操作,显示第一界面,包括:电子设备响应于在图库中查看目标图像的操作,显示第一界面。这样,在拍摄的应用场景下,电子设备也会对拍照获取的图像进行用户感兴趣信息的识别。由此,用户对其感兴趣信息拍照后,电子设备即可对其感兴趣信息进行识别,并为用户推荐与其感兴趣信息匹配的服务,以此简化用户操作,使用户可以便捷地使用其感兴趣信息,提升了用户体验。
根据第一方面,或者以上第一方面的任意一种实现方式,电子设备对目标图像进行识别,包括:电子设备在目标图像的拍摄属性满足第三识别条件时,对目标图像进行识别;其中,第三识别条件用于指示存在图像识别需求的相机拍摄模式。这样,电子设备对通过用户拍照操作获取的图像进行识别前,首先对图像是否满足识别条件进行判断,只有在图像满足相应的识别条件时,电子设备才会对该图像进行识别,以此避免了非必要的图像识别操作,降低了电子设备的功耗。
根据第一方面,或者以上第一方面的任意一种实现方式,电子设备在对目标图像进 行识别之后,还包括:在识别完成且目标图像中存在预设类型的关键信息时,电子设备显示第二相机拍摄界面;其中,第二相机拍摄界面中还显示第二图标,第二图标用于指示对目标图像已识别完成,且在目标图像中识别到预设类型的关键信息。这样,在拍摄的应用场景下,针对用户前一刻拍照操作获取到的图像完成识别,且图像中存在关键信息时,电子设备会在相机拍摄界面中显示图标,以提示用户其拍摄的图像已完成识别且图像中存在关键信息,可供用户查看。此时,用户即可查看图像的识别结果。第二图标的显示,能够避免在图像未完成识别时,用户过早查看图像识别结果的失落感,从而提升了用户体验。
示例性的,第二图标可以称之为图片实体识别完成标识,用于提示用户对前一刻拍摄到的图像已识别完成,且在图像中识别到预设类型的关键信息。
根据第一方面,或者以上第一方面的任意一种实现方式,电子设备响应于接收到的充电操作,若图库中存在未进行识别的图像,则依次对未进行识别的图像进行识别;电子设备响应于接收到的充电停止操作,若图库中存在未进行识别的图像,则停止对未进行识别的图像进行识别的操作。这样,在电子设备充电的应用场景下,电子设备依次对图库的中未进行识别的图像进行识别,避免由于识别图像产生的功耗给用户正常使用电子设备产生影响。尤其是,在电子设备系统升级的场景下,图库中会存在大量的未进行识别的图像,相比用户正常使用电子设备时对这些图像进行识别,在电子设备充电时对这些图像进行识别,能够避免对用户正常使用电子设备产生影响。
根据第一方面,或者以上第一方面的任意一种实现方式,在第三界面中显示一个应用程序的标识时,应用程序的标识以悬浮球的形式显示;在第三界面中显示多个应用程序的标识时,多个应用程序的标识以列表的形式显示;列表中还显示与其中一个标注对应的关键信息的内容。这样,电子设备为用户推荐应用程序时,视应用程序数量不同而采用不同的形式显示应用程序的标识。同时,电子设备在以列表形式显示应用程序标识时,还一并显示关键信息内容,可以便于用户确认推荐应用所基于的信息是否为用户的感兴趣信息,进而能够保证推荐服务的准确性。
根据第一方面,或者以上第一方面的任意一种实现方式,该方法可以应用于折叠屏手机中,折叠屏呈展开态,包括第一显示区域和第二显示区域。电子设备响应于接收到的第一操作,显示第一界面,包括:电子设备响应于接收到的第一操作,在第一显示区域中显示第一界面;电子设备响应于对第一图标的第二操作,显示第二界面,包括:电子设备响应于对第一图标的第二操作,在第一显示区域中显示第二界面;电子设备响应于对其中一个标注的第三操作,显示第三界面,包括:电子设备响应于对其中一个标注的第三操作,在第一显示区域中显示第三界面;电子设备响应于对其中一个应用程序的标识的第四操作,显示第四界面,包括:电子设备响应于对其中一个应用程序的标识的第四操作,在第二显示区域中显示第四界面;或者,电子设备响应于对其中一个应用程序的标识的第四操作,在第一显示区域上的半屏卡片窗口中显示第四界面。这样,在电 子设备为折叠屏手机的场景下,电子设备可以以不同的形式显示推荐服务的显示界面,从而提升了用户的使用体验。
根据第一方面,或者以上第一方面的任意一种实现方式,电子设备响应于接收到的第一操作,显示第一界面,包括:电子设备响应于接收到的第一操作,在第一悬浮窗口中显示第一界面;电子省响应于对第一图标的第二操作,显示第二界面,包括:电子设备响应于对第一图标的第二操作,在第一悬浮窗口中显示第二界面;电子设备响应于对其中一个标注的第三操作,显示第三界面,包括:电子设备响应于对其中一个标注的第三操作,在第一悬浮窗口中显示第三界面;电子设备响应于对其中一个应用程序的标识的第四操作,显示第四界面,包括:电子设备响应于对其中一个应用程序的标识的第四操作,在第二悬浮窗口中显示第四界面。这样,电子设备可以以不同的形式显示推荐服务的显示界面,从而提升了用户的使用体验。
示例性的,第二悬浮窗口可以与第一悬浮窗口部分重叠,也可以完全不重叠。
示例性的,当电子设备为折叠屏手机时,折叠屏呈展开态,包括第一显示区域和第二显示区域。其中,当第一悬浮窗口显示在第一显示区域上时,第二悬浮窗口可以显示与第一显示区域上,也可以显示于第二显示区域上。
根据第一方面,或者以上第一方面的任意一种实现方式,该方法可以应用于折叠屏手机中,折叠屏呈展开态,包括第一显示区域和第二显示区域。其中,第一显示区域中显示第一应用的显示界面,第二显示区域中显示第二应用的显示界面。电子设备响应于接收到的第一操作,显示第一界面,包括:电子设备响应于接收到的对第一应用的第一操作,在第一显示区域中显示第一界面;电子设备响应于对第一图标的第二操作,显示第二界面,包括:电子设备响应于对第一图标的第二操作,在第一显示区域中显示第二界面。该方法还包括:电子设备响应于对其中一个标注的长按操作及拖动操作,在第一显示区域上显示第三悬浮窗口,第三悬浮窗口移动至第二显示区域上;其中,拖动操作由第一显示区域指向第二显示区域,第三悬浮窗口中显示与其中一个标注对应的关键信息内容;响应于长按操作及拖动操作停止,在第二应用的显示界面中对应的信息编辑处显示关键信息内容。这样,在电子设备为折叠屏手机的应用场景下,用户可以将电子设备在其中一个显示区域中识别到的感兴趣信息拖动到另一个显示区域中显示的应用中使用,简化了用户操作,提升了用户体验。
根据第一方面,或者以上第一方面的任意一种实现方式,当与其中一个标注对应的关键信息的信息类型为码类时,显示界面的内容与其中一个标注对应的关键信息相关,包括:显示界面中显示与关键信息对应的链接界面;当与其中一个标注对应的关键信息的信息类型为字符类时,显示界面的内容与其中一个标注对应的关键信息相关,包括:在显示界面中对应的信息编辑处,显示与其中一个标注对应的关键信息的内容。这样,电子设备为用户推荐服务后显示的服务界面,视感兴趣信息的信息类型不同而采用不同的显示方式,可以是直接在推荐服务中显示并使用感兴趣信息,也可以是直接跳转至与 感兴趣信息匹配的链接界面,实现了从感兴趣信息的识别到服务推荐的闭环,提升了用户的使用体验。
根据第一方面,或者以上第一方面的任意一种实现方式,应用程序是根据与其中一个标注对应的关键信息的信息类型,以及与信息类型对应的默认推荐规则推荐的;或者,应用程序是根据与其中一个标注对应的关键信息的信息类型,以及用户习惯推荐的;或者,应用程序是根据与其中一个标注对应的关键信息的信息类型,用户操作以及用户画像推荐的。这样,电子设备为用户推荐服务时,不仅依据识别到的关键信息的信息类型,还依据由大数据确定的默认推荐规则,或者,还依据用户习惯,或者还依据感知到用户操作以及用户画像,提升了电子设备推荐的应用程序的精准性。
第二方面,本申请实施例提供一种信息推荐方法。该方法应用于折叠屏手机,折叠屏呈展开态,包括第一显示区域和第二显示区域。该方法包括:折叠屏手机在第一显示区域中显示第一界面;其中,第一界面中包括可复制的文本;折叠屏手机响应于在第一界面上接收到的复制操作,在第一显示区域中显示第二界面;其中,在第二界面中显示一个或多个应用程序的标识,应用程序是根据复制的文本对应的信息类型推荐的;折叠屏手机响应于对其中一个应用程序的标识的点击操作,在第一显示区域的半屏卡片窗口中显示第三界面,或者在第二显示区域中显示第三界面;其中,在第三界面中显示其中一个应用程序的显示界面,在显示界面中对应的信息编辑处显示复制的文本。这样,折叠屏手机基于用户复制的文本为用户推荐应用程序。当用户选择某个应用程序时,折叠屏手机显示该应用程序的界面,且该界面对应的信息编辑处会显示复制的文本。由此实现了基于复制文本中用户感兴趣信息的服务推荐,提升了用户的使用体验。而且,折叠屏手机可以以不同的形式显示推荐服务的显示界面,从而提升了用户的使用体验。
根据第二方面,在第二界面中显示一个应用程序的标识时,应用程序的标识以悬浮球的形式显示;在第二界面中显示多个应用程序的标识时,多个应用程序的标识以列表的形式显示;列表中还显示与其中一个标注对应的关键信息的内容。这样,折叠屏手机为用户推荐应用程序时,视应用程序数量不同而采用不同的形式显示应用程序的标识。同时,折叠屏手机在以列表形式显示应用程序标识时,还一并显示关键信息内容,可以便于用户确认推荐应用所基于的信息是否为用户的感兴趣信息,进而能够保证推荐服务的准确性。
根据第二方面,或者以上第二方面的任意一种实现方式,折叠屏手机响应于在第一界面上接收到的复制操作,在第一显示区域中显示第二界面,包括:折叠屏手机响应于在第一界面上接收到的复制操作,对复制的文本进行识别;折叠屏手机在复制的文本属于预设类型的关键信息时,根据复制的文本所属的信息类型推荐一个或多个待定应用;折叠屏手机将复制的文本分别发送至各个待定应用程序的SDK(Software Development Kit,软件工具开发包),并接收待定应用的SDK反馈的确认信息;其中,确认信息用于 指示推荐是否正确;折叠屏手机根据各个待定应用的SDK反馈的确认信息,在一个或多个待定应用中筛选出待显示的应用程序,在第一显示区域中显示第二界面。这样,折叠屏手机推荐的应用程序,是通过相应的应用程序的SDK的二次确认的,进一步提升了推荐的应用程序的准确性。
根据第二方面,或者以上第二方面的任意一种实现方式,应用程序是根据复制的文本对应的信息类型,以及与信息类型对应的默认推荐规则推荐的;或者,应用程序是根据复制的文本对应的信息类型,以及用户习惯推荐的;或者,应用程序是根据复制的文本对应的信息类型,用户操作以及用户画像推荐的。这样,折叠屏手机为用户推荐服务时,不仅依据复制文本的信息类型,还依据由大数据确定的默认推荐规则,或者,还依据用户习惯,或者还依据感知到用户操作以及用户画像,提升了电子设备推荐的应用程序的精准性。
第三方面,本申请实施例提供一种信息推荐方法。该方法包括:折叠屏手机在第一悬浮窗口中显示第一界面;其中,第一界面中包括可复制的文本;折叠屏手机响应于在第一界面上接收到的复制操作,在第一悬浮窗口中显示第二界面;其中,在第二界面中显示一个或多个应用程序的标识,应用程序是根据复制的文本对应的信息类型推荐的;折叠屏手机响应于对其中一个应用程序的标识的点击操作,在第二悬浮窗口中显示第三界面;其中,在第三界面中显示其中一个应用程序的显示界面,在显示界面中对应的信息编辑处显示复制的文本。这样,折叠屏手机基于用户复制的文本为用户推荐应用程序。当用户选择某个应用程序时,折叠屏手机显示该应用程序的界面,且该界面对应的信息编辑处会显示复制的文本。由此实现了基于复制文本中用户感兴趣信息的服务推荐,提升了用户的使用体验。而且,折叠屏手机可以以悬浮窗的形式显示推荐服务的显示界面,从而提升了用户的使用体验。
示例性的,第二悬浮窗口可以与第一悬浮窗口部分重叠,也可以完全不重叠。
示例性的,该方法应用于折叠屏手机,折叠屏呈展开态,包括第一显示区域和第二显示区域。其中,当第一悬浮窗口显示在第一显示区域上时,第二悬浮窗口可以显示与第一显示区域上,也可以显示于第二显示区域上。
根据第三方面,在第二界面中显示一个应用程序的标识时,应用程序的标识以悬浮球的形式显示;在第二界面中显示多个应用程序的标识时,多个应用程序的标识以列表的形式显示;列表中还显示与其中一个标注对应的关键信息的内容。这样,折叠屏手机为用户推荐应用程序时,视应用程序数量不同而采用不同的形式显示应用程序的标识。同时,折叠屏手机在以列表形式显示应用程序标识时,还一并显示关键信息内容,可以便于用户确认推荐应用所基于的信息是否为用户的感兴趣信息,进而能够保证推荐服务的准确性。
根据第三方面,或者以上第三方面的任意一种实现方式,该方法应用于折叠屏手机, 折叠屏呈展开态,包括第一显示区域和第二显示区域。折叠屏手机响应于在第一界面上接收到的复制操作,在第一悬浮窗口中显示第二界面,包括:折叠屏手机响应于在第一界面上接收到的复制操作,对复制的文本进行识别;折叠屏手机在复制的文本属于预设类型的关键信息时,根据复制的文本所属的信息类型推荐一个或多个待定应用;折叠屏手机将复制的文本分别发送至各个待定应用程序的SDK(Software Development Kit,软件工具开发包),并接收待定应用的SDK反馈的确认信息;其中,确认信息用于指示推荐是否正确;折叠屏手机根据各个待定应用的SDK反馈的确认信息,在一个或多个待定应用中筛选出待显示的应用程序,在第一悬浮窗口中显示第二界面。这样,折叠屏手机推荐的应用程序,是通过相应的应用程序的SDK的二次确认的,进一步提升了推荐的应用程序的准确性。
根据第三方面,或者以上第三方面的任意一种实现方式,应用程序是根据复制的文本对应的信息类型,以及与信息类型对应的默认推荐规则推荐的;或者,应用程序是根据复制的文本对应的信息类型,以及用户习惯推荐的;或者,应用程序是根据复制的文本对应的信息类型,用户操作以及用户画像推荐的。这样,折叠屏手机为用户推荐服务时,不仅依据复制文本的信息类型,还依据由大数据确定的默认推荐规则,或者,还依据用户习惯,或者还依据感知到用户操作以及用户画像,提升了电子设备推荐的应用程序的精准性。
第四方面,本申请实施例提供了一种电子设备。该电子设备包括:一个或多个处理器;存储器;以及一个或多个计算机程序,其中一个或多个计算机程序存储在存储器上,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:
电子设备响应于接收到的第一操作,显示第一界面;其中,在第一界面中显示目标图像以及第一图标,第一图标用于指示在目标图像中识别到预设类型的关键信息;第一操作包括:截图操作,在图库中查看图像的操作;电子设备响应于对第一图标的第二操作,显示第二界面;其中,在第二界面中显示目标图像,以及对关键信息的标注;电子设备响应于对其中一个标注的第三操作,显示第三界面;其中,在第三界面中显示一个或多个应用程序的标识,应用程序是根据与其中一个标注对应的关键信息的信息类型推荐的;电子设备响应于对其中一个应用程序的标识的第四操作,显示第四界面;其中,在第四界面中显示其中一个应用程序的显示界面,显示界面的内容与其中一个标注对应的关键信息相关。
根据第四方面,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:电子设备响应于接收到的第一操作,对与第一操作对应的目标图像进行识别;电子设备在识别完成且目标图像中存在预设类型的关键信息时,显示第一界面。
根据第四方面,或者以上第四方面的任意一种实现方式,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:电子设备响应于接收到的第一操作,在 目标图像满足实时识别条件时,对目标图像进行识别。
根据第四方面,或者以上第四方面的任意一种实现方式,第一操作为截图操作时,执行截图操作时界面显示的应用程序满足第一识别条件,且用户截图分享习惯满足第二识别条件时,目标图像满足实时识别条件;其中,第一识别条件用于指示存在识别需求的应用程序,第二识别条件用于指示符合实时进行识别的用户操作习惯。
根据第四方面,或者以上第四方面的任意一种实现方式,第一操作为在图库中查看图像的操作时,在目标图像为相机拍摄图像的情况下,目标图像的拍摄属性满足第三识别条件时,目标图像满足实时识别条件;第一操作为在图库中查看图像的操作时,在目标图像为截图图像的情况下,执行截图操作时界面显示的应用程序满足第一识别条件时,目标图像满足实时识别条件;其中,第一识别条件用于指示存在识别需求的应用程序,第三识别条件用于指示存在图像识别需求的相机拍摄模式。
根据第四方面,或者以上第四方面的任意一种实现方式,当计算机程序被一个或多个处理器执行时,使得电子设备还执行以下步骤:电子设备显示第一相机拍摄界面;电子设备响应于接收到的拍照操作,将拍照获取到的目标图像存储于图库中,并对目标图像进行识别;电子设备响应于在图库中查看目标图像的操作,显示第一界面。
根据第四方面,或者以上第四方面的任意一种实现方式,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:电子设备在目标图像的拍摄属性满足第三识别条件时,对目标图像进行识别;其中,第三识别条件用于指示存在图像识别需求的相机拍摄模式。
根据第四方面,或者以上第四方面的任意一种实现方式,当计算机程序被一个或多个处理器执行时,使得电子设备还执行以下步骤:电子设备在识别完成且目标图像中存在预设类型的关键信息时,显示第二相机拍摄界面;其中,第二相机拍摄界面中还显示第二图标,第二图标用于指示对目标图像已识别完成,且在目标图像中识别到预设类型的关键信息。
根据第四方面,或者以上第四方面的任意一种实现方式,当计算机程序被一个或多个处理器执行时,使得电子设备还执行以下步骤:电子设备响应于接收到的充电操作,若图库中存在未进行识别的图像,则依次对未进行识别的图像进行识别;电子设备响应于接收到的充电停止操作,若图库中存在未进行识别的图像,则停止对未进行识别的图像进行识别的操作。
根据第四方面,或者以上第四方面的任意一种实现方式,在第三界面中显示一个应用程序的标识时,应用程序的标识以悬浮球的形式显示;在第三界面中显示多个应用程 序的标识时,多个应用程序的标识以列表的形式显示;列表中还显示与其中一个标注对应的关键信息的内容。
根据第四方面,或者以上第四方面的任意一种实现方式,电子设备为折叠屏手机,折叠屏呈展开态,包括第一显示区域和第二显示区域;当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:电子设备响应于接收到的第一操作,在第一显示区域中显示第一界面;电子设备响应于对第一图标的第二操作,在第一显示区域中显示第二界面;电子设备响应于对其中一个标注的第三操作,在第一显示区域中显示第三界面;电子设备响应于对其中一个应用程序的标识的第四操作,在第二显示区域中显示第四界面;或者,电子设备响应于对其中一个应用程序的标识的第四操作,在第一显示区域上的半屏卡片窗口中显示第四界面。
根据第四方面,或者以上第四方面的任意一种实现方式,当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:电子设备响应于接收到的第一操作,在第一悬浮窗口中显示第一界面;电子设备响应于对第一图标的第二操作,在第一悬浮窗口中显示第二界面;电子设备响应于对其中一个标注的第三操作,在第一悬浮窗口中显示第三界面;电子设备响应于对其中一个应用程序的标识的第四操作,在第二悬浮窗口中显示第四界面。
根据第四方面,或者以上第四方面的任意一种实现方式,电子设备为折叠屏手机,折叠屏呈展开态,包括第一显示区域和第二显示区域;第一显示区域中显示第一应用的显示界面,第二显示区域中显示第二应用的显示界面。当计算机程序被一个或多个处理器执行时,使得电子设备执行以下步骤:电子设备响应于接收到的对第一应用的第一操作,在第一显示区域中显示第一界面;电子设备响应于对第一图标的第二操作,在第一显示区域中显示第二界面。当计算机程序被一个或多个处理器执行时,使得电子设备还执行以下步骤:电子设备响应于对其中一个标注的长按操作及拖动操作,在第一显示区域上显示第三悬浮窗口,第三悬浮窗口移动至第二显示区域上;其中,拖动操作由第一显示区域指向第二显示区域,第三悬浮窗口中显示与其中一个标注对应的关键信息内容;电子设备响应于长按操作及拖动操作停止,在第二应用的显示界面中对应的信息编辑处显示关键信息内容。
根据第四方面,或者以上第四方面的任意一种实现方式,当与其中一个标注对应的关键信息的信息类型为码类时,显示界面中显示与关键信息对应的链接界面;当与其中一个标注对应的关键信息的信息类型为字符类时,在显示界面中对应的信息编辑处,显示与其中一个标注对应的关键信息的内容。
根据第四方面,或者以上第四方面的任意一种实现方式,应用程序是根据与其中一个标注对应的关键信息的信息类型,以及与信息类型对应的默认推荐规则推荐的;或者, 应用程序是根据与其中一个标注对应的关键信息的信息类型,以及用户习惯推荐的;或者,应用程序是根据与其中一个标注对应的关键信息的信息类型,用户操作以及用户画像推荐的。
第四方面以及第四方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应。第四方面以及第四方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第五方面,本申请实施例提供了一种折叠屏手机。该折叠手机的折叠屏呈展开态,包括第一显示区域和第二显示区域;该折叠屏手机包括:一个或多个处理器;存储器;以及一个或多个计算机程序,其中一个或多个计算机程序存储在存储器上,当计算机程序被一个或多个处理器执行时,使得折叠屏手机执行以下步骤:折叠屏手机在第一显示区域中显示第一界面;第一界面中包括可复制的文本;折叠屏手机响应于在第一界面上接收到的复制操作,在第一显示区域中显示第二界面;其中,在第二界面中显示一个或多个应用程序的标识,应用程序是根据复制的文本对应的信息类型推荐的;折叠屏手机响应于对其中一个应用程序的标识的点击操作,在第一显示区域的半屏卡片窗口中显示第三界面,或者在第二显示区域中显示第三界面;其中,在第三界面中显示其中一个应用程序的显示界面,在显示界面中对应的信息编辑处显示复制的文本。
根据第五方面,在第二界面中显示一个应用程序的标识时,应用程序的标识以悬浮球的形式显示;在第二界面中显示多个应用程序的标识时,多个应用程序的标识以列表的形式显示;列表中还显示与其中一个标注对应的关键信息的内容。
根据第五方面,或者以上第五方面的任意一种实现方式,当计算机程序被一个或多个处理器执行时,使得折叠屏手机执行以下步骤:折叠屏手机响应于在第一界面上接收到的复制操作,对复制的文本进行识别;折叠屏手机在复制的文本属于预设类型的关键信息时,根据复制的文本所属的信息类型推荐一个或多个待定应用;折叠屏手机将复制的文本分别发送至各个待定应用程序的软件工具开发包SDK,并接收待定应用的SDK反馈的确认信息;其中,确认信息用于指示推荐是否正确;折叠屏手机根据各个待定应用的SDK反馈的确认信息,在一个或多个待定应用中筛选出待显示的应用程序,在第一显示区域中显示第二界面。
根据第五方面,或者以上第五方面的任意一种实现方式,应用程序是根据复制的文本对应的信息类型,以及与信息类型对应的默认推荐规则推荐的;或者,应用程序是根据复制的文本对应的信息类型,以及用户习惯推荐的;或者,应用程序是根据复制的文本对应的信息类型,用户操作以及用户画像推荐的。
第五方面以及第五方面的任意一种实现方式分别与第二方面以及第二方面的任意一种实现方式相对应。第五方面以及第五方面的任意一种实现方式所对应的技术效果可参见上述第二方面以及第二方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第六方面,本申请实施例提供了一种折叠屏手机。该折叠屏手机包括:一个或多个处理器;存储器;以及一个或多个计算机程序,其中一个或多个计算机程序存储在存储器上,当计算机程序被一个或多个处理器执行时,使得折叠屏手机执行以下步骤:折叠屏手机在第一悬浮窗口中显示第一界面;其中,第一界面中包括可复制的文本;折叠屏手机响应于在第一界面上接收到的复制操作,在第一悬浮窗口中显示第二界面;其中,在第二界面中显示一个或多个应用程序的标识,应用程序是根据复制的文本对应的信息类型推荐的;折叠屏手机响应于对其中一个应用程序的标识的点击操作,在第二悬浮窗口中显示第三界面;其中,在第三界面中显示其中一个应用程序的显示界面,在显示界面中对应的信息编辑处显示复制的文本。
根据第六方面,在第二界面中显示一个应用程序的标识时,应用程序的标识以悬浮球的形式显示;在第二界面中显示多个应用程序的标识时,多个应用程序的标识以列表的形式显示;列表中还显示与其中一个标注对应的关键信息的内容。
根据第六方面,或者以上第六方面的任意一种实现方式,该折叠屏手机的折叠屏呈展开态,包括第一显示区域和第二显示区域。当计算机程序被一个或多个处理器执行时,使得折叠屏手机执行以下步骤:折叠屏手机响应于在第一界面上接收到的复制操作,对复制的文本进行识别;折叠屏手机在复制的文本属于预设类型的关键信息时,根据复制的文本所属的信息类型推荐一个或多个待定应用;折叠屏手机将复制的文本分别发送至各个待定应用程序的SDK,并接收待定应用的SDK反馈的确认信息;其中,确认信息用于指示推荐是否正确;折叠屏手机根据各个待定应用的SDK反馈的确认信息,在一个或多个待定应用中筛选出待显示的应用程序,在第一悬浮窗口中显示第二界面。
根据第六方面,或者以上第六方面的任意一种实现方式,应用程序是根据复制的文本对应的信息类型,以及与信息类型对应的默认推荐规则推荐的;或者,应用程序是根据复制的文本对应的信息类型,以及用户习惯推荐的;或者,应用程序是根据复制的文本对应的信息类型,用户操作以及用户画像推荐的。
第六方面以及第六方面的任意一种实现方式分别与第三方面以及第三方面的任意一种实现方式相对应。第六方面以及第六方面的任意一种实现方式所对应的技术效果可参见上述第三方面以及第三方面的任意一种实现方式所对应的技术效果,此处不再赘述。
第七方面,本申请实施例提供一种计算机可读存储介质。该计算机可读存储介质包括计算机程序,当计算机程序在电子设备上运行时,使得电子设备执行第一方面以及第一方面中任意一项的信息推荐方法。或者,该计算机可读存储介质包括计算机程序,当计算机程序在折叠屏手机上运行时,使得折叠屏手机执行第二方面以及第二方面中任意一项的信息推荐方法,或者使得折叠屏手机执行第三方面以及第三方面中任意一项的信息推荐方法。
第七方面以及第七方面的任意一种实现方式分别与第一方面以及第一方面的任意一种实现方式相对应,或者分别与第二方面以及第二方面的任意一种实现方式相对应,或者分别与第三方面以及第三方面的任意一种实现方式相对应。相应的,第七方面以及第七方面的任意一种实现方式所对应的技术效果可参见上述第一方面以及第一方面的任意一种实现方式所对应的技术效果,或者可参见上述第二方面以及第二方面的任意一种实现方式所对应的技术效果,或者可参见上述第三方面以及第三方面的任意一种实现方式所对应的技术效果,此处不再赘述。
附图说明
图1为示例性示出的应用场景示意图;
图2为示例性示出的电子设备的结构示意图;
图3为示例性示出的电子设备的软件结构示意图;
图4为本申请实施例提供的模块交互示意图;
图5为示例性示出的应用场景示意图之一;
图6为示例性示出的应用场景示意图之一;
图7为示例性示出的应用场景示意图之一;
图8为本申请实施例提供的模块交互示意图;
图9为示例性示出的应用场景示意图之一;
图10为本申请实施例提供的模块交互示意图;
图11为本申请实施例提供的判断流程示意图;
图12为本申请实施例提供的模块交互示意图;
图13为示例性示出的应用场景示意图之一;
图14为本申请实施例提供的模块交互示意图;
图15为示例性示出的应用场景示意图之一;
图16为示例性示出的应用场景示意图之一;
图17为示例性示出的窗口显示示意图;
图18为示例性示出的应用场景示意图之一;
图19为示例性示出的应用场景示意图之一;
图20为示例性示出的应用场景示意图之一;
图21a~图21c为示例性示出的应用场景示意图之一。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本文中术语“和/或”,仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况。
本申请实施例的说明书和权利要求书中的术语“第一”和“第二”等是用于区别不同的对象,而不是用于描述对象的特定顺序。例如,第一目标对象和第二目标对象等是用于区别不同的目标对象,而不是用于描述目标对象的特定顺序。
在本申请实施例中,“示例性的”或者“例如”等词用于表示作例子、例证或说明。本申请实施例中被描述为“示例性的”或者“例如”的任何实施例或设计方案不应被解释为比其它实施例或设计方案更优选或更具优势。确切而言,使用“示例性的”或者“例如”等词旨在以具体方式呈现相关概念。
在本申请实施例的描述中,除非另有说明,“多个”的含义是指两个或两个以上。例如,多个处理单元是指两个或两个以上的处理单元;多个系统是指两个或两个以上的系统。
如图1所示为示例性示出的一种应用场景示意图。示例性的,在用户打开手机图库查看图片的场景下,参照图1(1),图片101中可能存在用户的感兴趣信息,如图片101中的文字1011和二维码1012等。示例性的,在用户截图手机显示界面的场景下,参照图1(2),截图预览图102中可能存在用户的感兴趣信息,如截图预览图102中的文字1021。以截图预览图102为运单详情截图为例,用户感兴趣的文字1021可以是单号信息、地址信息等。示例性的,在用户拍照场景下,参照图1(3),手机拍摄采集界面103中采集到的实体1031中可能存在用户的感兴趣信息,可以是文字信息,也可以是码信息。以实体1031为身份证为例,用户感兴趣信息可能是身份证号码,也可能是住址信息。示例性的,在用户从浏览界面中选取可复制文字的场景下,参照图1(4),用户在浏览界面104中选取的文字可能为用户感兴趣信息,这些文字通常是可以被直接复制的。需要指出的是,在图1中以直板手机的各种显示界面为例示出应用场景示意图,这些应用场景示意图同样适用于折叠屏手机,平板等,本申请对此不作限定。
如图2所示为电子设备100的结构示意图。可选地,电子设备100可以为终端,也可以称为终端设备,终端可以为蜂窝电话(cellular phone)(包括平板式蜂窝电话和折叠屏式蜂窝电话)或平板电脑(pad)等具有摄像头的设备,本申请不做限定。需要说明的是,电子设备100的结构示意图可以适用于图1中的直板手机,也可以适用于折叠屏手机和平板。应该理解的是,图2所示电子设备100仅是电子设备的一个范例,并且电子设备100可以具有比图中所示的更多的或者更少的部件,可以组合两个或多个的部件,或者可以具有不同的部件配置。图2中所示出的各种部件可以在包括一个或多个信号处理和/或专用集成电路在内的硬件、软件、或硬件和软件的组合中实现。
电子设备100可以包括:处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。其中传感器模块180可以包括压力传感器,陀螺仪传感器, 气压传感器,磁传感器,加速度传感器,距离传感器,接近光传感器,指纹传感器,温度传感器,触摸传感器,环境光传感器,骨传导传感器等。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
其中,控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。
USB接口130是符合USB标准规范的接口,具体可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口130可以用于连接充电器为电子设备100充电,也可以用于电子设备100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块140可以通过USB接口130接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块140可以通过电子设备100的无线充电线圈接收无线充电输入。充电管理模块140为电池142充电的同时,还可以通过电源管理模块141为电子设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。电子设备100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将天线1复用为无线局域网的分集天线。在另外一些实施例中,天线可以和调谐开关结合使用。
移动通信模块150可以提供应用在电子设备100上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块150可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。
无线通信模块160可以提供应用在电子设备100上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。
在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏194包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED)等。在一些实施例中,电子设备100可以包括1个或N个显示屏194,N为大于1的正整数。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。
ISP用于处理摄像头193反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备100可以包括1个或N个摄像头193,N为大于1的正整数。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。电子设备100可以支持一种或多种视频编解码器。这样,电子设备100可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行电子设备100的各种功能应用以及数据处理,使得电子设备100实现本申请中的信息推荐方法。内部存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以 包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
压力传感器用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器可以设置于显示屏194。压力传感器的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。当有触摸操作作用于显示屏194,电子设备100根据压力传感器检测所述触摸操作强度。电子设备100也可以根据压力传感器的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
电子设备100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本申请实施例以分层架构的Android系统为例,示例性说明电子设备100的软件结构。
图3是本申请实施例的电子设备100的软件结构框图。
电子设备100的分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为三层,从上至下分别为应用程序层,应用程序框架层,以及内核层。
应用程序层可以包括一系列应用程序包。
如图3所示,应用程序包可以包括相机,图库,地图,浏览器,翻译,购物,短消息,备忘录,实体识别应用,计算引擎应用等应用程序。
其中,实体识别应用,用于实现文字信息、码信息等信息的识别,以及展示实体识别结果以及与实体识别结果对应的关联应用的推荐结果;计算引擎应用,用于根据信息识别结果进行用户感兴趣的实体识别,并根据实体识别结果进行关联应用的推荐。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图3所示,应用程序框架层可以包括窗口管理器,感知服务,应用运行管理服务,还可以包括内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕等。
感知服务用于感知应用程序生命周期以及监听用户操作,如复制文本到剪切板的操作,查看图片的操作,截图操作等。
应用运行管理服务用于实现应用程序层各应用程序的运行管理。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,还可以是以对话窗口形式出现在屏幕上的通知。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,传感器驱动,充电驱动等。
可以理解的是,图3示出的软件结构中的层以及各层中包含的部件,并不构成对电子设备100的具体限定。在本申请另一些实施例中,电子设备100可以包括比图示更多或更少的层,以及每个层中可以包括更多或更少的部件,本申请不做限定。
可以理解的是,电子设备为了实现本申请中的信息推荐方法,其包含了执行各个功能相应的硬件和/或软件模块。结合本文中所公开的实施例描述的各示例的算法步骤,本申请能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。本领域技术人员可以结合实施例对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
本申请实施例提供一种信息推荐方法。具体的,本申请实施例中的电子设备可以在用户触发实体识别时进行实体识别,并为用户推荐与其选择的实体相匹配的关联应用,以使用户可以直接打开其感兴趣的关联应用。其中,实体指的是客观存在并可相互区别的事物。在本申请实施例中,实体可以理解为关键信息,或用户感兴趣信息。例如,地址实体可以理解为指示地址的信息。再例如,身份证号实体可以理解为指示身份证号的信息。下述以“实体”这种说法,对本申请实施例进行解释说明。可以理解的是,将“实体”这一说法替换为“关键信息”或“感兴趣信息”,同样可以作为对本申请实施例的解释。
需要说明的是,本申请实施例仅以部分用户感兴趣的实体为例进行说明,例如地址实体,码实体,身份证号实体,手机号码实体,英文文本实体等。示例性的,地址实体 涉及的地址可以是详细的街道地址,如XX市XX区XX街道XX号,也可以是景点名称(如著名的人文景点或自然景点的名称),也可以是著名的建筑名称,还可以是地图上有对应的感兴趣信息的地址,等等。在其他实施例中,本申请的具体实施方式同样可以适用于其他用户感兴趣的实体,例如快递单号实体,网址实体,电子邮箱地址实体,网盘下载链接实体(其中,用户通过网盘下载链接可以获取到存储于网盘中的感兴趣信息),口令实体(如淘口令,抖音口令等)等,本申请不再重复说明。另外,需要指出的是,在本申请实施例中,“图像”和“图片”可以理解为是相同的概念,“应用”和“应用程序”可以理解为是相同的概念。
一个可能的应用场景为:在用户进行查看操作或截图操作的图片中,存在用户感兴趣的实体,此时需要对图片进行实体识别;另一个可能的应用场景为:在用户进行复制操作的文本中,存在用户感兴趣的实体,此时需要对文本进行实体识别。
场景一
在本场景中,以直板手机的图片中可能存在用户感兴趣实体为例,对本申请实施例具体实现方式进行详细说明。
如图4所示为各模块的交互流程示意图,参照图4,本申请实施例提供的信息推荐方法的流程,具体包括:
S401,感知服务接收到实体识别触发操作。
实体识别触发操作,指的是可以触发手机执行信息推荐方法的操作,具体可以是触发感知服务发送实体识别指令的用户操作,以使实体识别应用和计算引擎应用共同完成实体识别并确定待推荐的关联应用。在本实施例中,实体识别触发操作可选地为针对图片的用户操作,例如可以是图片查看操作、截图操作等。其中,图片查看操作可以是在图库中查看图片的操作,也可以是在相机拍摄界面查看已拍图像的操作。
S402,感知服务向实体识别应用发送实体识别指令。
在本实施例中,实体识别指令可以用于指示实体识别应用针对图片进行识别,具体可以是针对图片进行文字识别和码(如二维码)识别,以使实体识别应用结合计算引擎应用,共同完成实体识别及确定待推荐的关联应用。其中,实体识别指令指示进行识别的图片,指的是与实体识别触发操作对应的图片,可以是用户在图库中打开的图片,可以在相机拍摄界面查看的已拍图像,还可以是用户截图操作得到的截图图像等。可选地,实体识别指令中包括但不限于待识别的图片信息。
S403,实体识别应用对图片进行文本识别及码识别,并向计算引擎应用发送文本识别及码识别结果。
实体识别应用接收到实体识别指令,根据实体识别指令确定待识别的图片,对图片进行文本识别及码识别。
可选的,实体识别应用在对图片进行文本识别以及码识别时,可以调用OCR(Optical Character Recognition,光学字符识别)应用完成识别。可选的,实体识别应用调用的OCR应用可以安装于手机端。这样,实体识别应用无需从云端调用OCR应用完成识别,提高了数据安全性,能够消除用户对数据安全的顾虑。
实体识别应用在对图片完成文本识别及码识别之后,将文本识别结果及码识别结果发送至计算引擎应用。其中,文本识别结果可以包括但不限于是否存在文本,以及识别到的文本信息;码识别结果可以包括但不限于是否存在二维码,以及识别到的码信息。
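需要说明的是,本申请实施例并未限定文本识别与码识别的具体实现方式。下面给出一段仅作示意的Kotlin草图,用于展示实体识别应用对图片先做文本识别、再做码识别并汇总结果的大致过程;其中OcrEngine、CodeScanner等接口及各数据类均为本段示例假设的名称,并非某个真实OCR SDK的API:

```kotlin
// 仅为示意性草图:OcrEngine、CodeScanner 均为假设的设备端识别接口,并非某个真实 SDK 的 API
data class TextResult(val hasText: Boolean, val text: String)
data class CodeResult(val hasCode: Boolean, val codeContent: String)
data class RecognitionResult(val textResult: TextResult, val codeResult: CodeResult)

interface OcrEngine { fun recognizeText(imagePath: String): String }     // 设备端 OCR,返回识别出的全部文字
interface CodeScanner { fun decodeQrCode(imagePath: String): String? }   // 设备端二维码解码,无码时返回 null

class EntityRecognitionApp(
    private val ocr: OcrEngine,
    private val scanner: CodeScanner
) {
    // 对应 S403:对图片分别做文本识别与码识别,并汇总结果后发送给计算引擎应用
    fun recognizeImage(imagePath: String): RecognitionResult {
        val text = ocr.recognizeText(imagePath)
        val code = scanner.decodeQrCode(imagePath)
        return RecognitionResult(
            textResult = TextResult(hasText = text.isNotBlank(), text = text),
            codeResult = CodeResult(hasCode = code != null, codeContent = code ?: "")
        )
    }
}
```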
S404,计算引擎应用根据文本识别结果及码识别结果进行实体识别,并向实体识别应用发送实体识别结果。
计算引擎应用接收到实体识别应用发送的文本识别结果及码识别结果之后,根据文本识别结果及码识别结果进行实体识别,确定用户可能感兴趣的一种或多种实体。
可选的,计算引擎应用获取预设的用户可能感兴趣的多种实体类型,并根据这多种实体类型去识别图片中包括的一种或多种实体。其中,预设的用户可能感兴趣的实体类型包括但不限于:电话实体,地址实体,快递单号实体,身份证号实体,二维码实体,网址实体,电子邮箱实体,口令实体,语种文字实体等。示例性的,地址实体中可以包括但不限于国家、省、市、区、街道(路)、号等信息。示例性的,语种文字实体,指的是与一个语种对应的文字,如英文文字实体、韩文文字实体、中文文字实体等。
示例性的,计算引擎应用可以调用NLU(Natural Language Understanding,自然语言理解)应用实现实体识别。可选的,计算引擎应用调用的NLU应用可以安装于手机端。这样,计算引擎应用无需从云端调用NLU应用完成实体识别,提高了数据安全性,能够消除用户对数据安全的顾虑。
计算引擎应用在完成实体识别之后,将实体识别结果发送至实体识别应用。其中,实体识别结果中包括但不限于:实体内容,以及与实体内容对应的实体类型。需要指出的是,实体识别结果中包括在图片中识别到的所有实体内容,以及与每个实体内容对应的实体类型。
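可以理解的是,基于NLU的实体识别细节不在本文展开。下面用一段仅作示意的Kotlin草图,以简单正则表达式近似“在文本识别结果中按预设实体类型抽取实体内容、类型及位置”的过程;其中的正则表达式与实体类型枚举均为示例性假设,实际识别由NLU模型完成:

```kotlin
// 仅为示意:真实方案由 NLU 模型完成,此处用简单正则近似,正则式与实体类型均为示例性假设
enum class EntityType { PHONE, ID_CARD, EMAIL, URL, EXPRESS_NO }

data class Entity(val content: String, val type: EntityType, val range: IntRange)

private val patterns = mapOf(
    EntityType.PHONE to Regex("""1[3-9]\d{9}"""),                  // 手机号
    EntityType.ID_CARD to Regex("""\d{17}[\dXx]"""),               // 身份证号
    EntityType.EMAIL to Regex("""[\w.+-]+@[\w-]+(\.[\w-]+)+"""),   // 电子邮箱
    EntityType.URL to Regex("""https?://\S+"""),                   // 网址
    EntityType.EXPRESS_NO to Regex("""\b[A-Z]{2}\d{10,13}\b""")    // 快递单号(格式为假设)
)

// 对应 S404:在文本识别结果中抽取预设类型的实体,返回实体内容、类型及位置
fun extractEntities(text: String): List<Entity> =
    patterns.entries.flatMap { (type, regex) ->
        regex.findAll(text).map { m -> Entity(m.value, type, m.range) }.toList()
    }
```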
S405,实体识别应用在第一图层蒙版中添加实体识别结果查看图标,并展示第一图层蒙版。
实体识别结果查看图标,用于指示图像中识别到预设类型的实体(或称关键信息)。
实体识别应用接收到计算引擎应用发送的实体识别结果后,确定图片已完成实体识别,且识别到预设类型的实体,则在手机当前显示界面上添加第一图层蒙版,并在第一图层蒙版中(可以是任意位置处)添加实体识别结果查看图标。其中,实体识别结果查看图标是可供用户进行点击的,以使用户可以查看到图片的实体识别结果。需要注意的是,第一图层蒙版中仅包括实体识别结果查看图标。通过图层蒙版添加实体识别结果查看图标,不会对图片造成损坏。
可选的,实体识别应用在第一图层蒙版的任意一个角处,如右下角处或左下角处等,添加实体识别结果查看图标。在实体识别应用展示第一图层蒙版时,用户即可在手机当前显示界面上查看到实体识别结果查看图标,例如可以是在当前查看图片上查看到实体识别结果查看图标,或者可以是在截图预览显示界面上查看到实体识别结果查看图标。
S406,感知服务接收到点击实体识别结果查看图标的操作。
用户可以通过点击(如单击或双击)实体识别结果查看图标的操作,触发图片的实体识别结果的显示。用户也可以通过针对实体识别结果查看图标的其它触发操作,如长按操作,触发图片的实体识别结果的显示,本实施例不做限定。
S407,感知服务向实体识别应用发送实体标注指令。
在本实施例中,实体标注指令可以用于指示实体识别应用针对图片进行实体标注,实体标注结果可以向用户展示图片的实体识别情况。
S408,实体识别应用在第二图层蒙版中标注实体识别结果,并展示第二图层蒙版。
实体识别应用接收到实体标注指令后,获取图片的实体识别结果,在手机当前显示界面上添加第二图层蒙版,并在第二图层蒙版中对图片的实体识别结果进行标注。其中,标注是可供用户进行点击的,以使用户可以选择其实际感兴趣的一个实体。可选的,实体识别应用获取的实体识别结果中,除了包括在图片中识别到的所有实体内容,以及与每个实体内容对应的实体类型,还包括每个实体内容在图片或截图预览图像中的位置信息,例如可以是坐标信息。
实体识别应用在第二图层蒙版中,根据实体内容在图片或截图预览图像中的位置信息,在相应的实体内容处进行标注。可选的,实体识别应用在第二图层蒙版中,根据实体内容在图片或截图预览图像中的位置信息,以及与实体类型匹配的标注方式,在相应的实体内容处进行标注。示例性的,标注方式可以分为两种,一种针对文本类实体,如电话号码实体,地址实体等,一种针对码类实体,如二维码实体。例如,针对文本类实体,实体识别应用可以采用下划线标注的方式,也即在文本类实体内容下方进行划线;针对码类实体,实体识别应用可以采用点标注的方式,也即在码类实体内容上添加点标识。需要注意的是,第二图层蒙版中仅包括图片的实体标注。同样的,通过图层蒙版添加实体标注,不会对图片造成损坏。
实体识别应用在第二图层蒙版中完成实体识别结果标注后,展示第二图层蒙版,同时取消第一图层蒙版的展示。此时,用户在当前显示界面上查看到图片的一个或多个实体标注。
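作为理解上述标注方式的一个参考,下面给出一段仅作示意的Kotlin草图:按实体类型选择下划线或点标注,并根据位置信息生成第二图层蒙版中的标注;其中的数据结构与坐标字段均为示例性假设,并非对实际实现的限定:

```kotlin
// 仅为示意:按实体类型选择标注方式(文本类下划线、码类加点),坐标字段为假设的像素位置
data class RecognizedEntity(val content: String, val isCode: Boolean, val x: Int, val y: Int, val width: Int, val height: Int)

enum class AnnotationStyle { UNDERLINE, DOT }

data class Annotation(val entity: RecognizedEntity, val style: AnnotationStyle, val x: Int, val y: Int)

// 对应 S408:根据实体类型与位置信息,在第二图层蒙版中生成标注
fun buildAnnotations(entities: List<RecognizedEntity>): List<Annotation> =
    entities.map { e ->
        if (e.isCode) {
            Annotation(e, AnnotationStyle.DOT, e.x + e.width / 2, e.y + e.height / 2)   // 码类:在码中心附近加点标识
        } else {
            Annotation(e, AnnotationStyle.UNDERLINE, e.x, e.y + e.height)               // 文本类:在文字下方划线
        }
    }
```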
S409,感知服务接收到点击实体标注的操作。
用户可以通过点击(如单击或双击)任意一个实体标注的操作,触发与该实体标注(或者称实体标注对应的实体内容)匹配的推荐服务的显示。用户也可以通过针对某个实体标注的其它触发操作,如长按操作,触发与该实体标注匹配的推荐服务的显示,本实施例不做限定。
S410,感知服务向计算引擎应用发送推荐应用显示指令。
在本实施例中,推荐应用显示指令可以用于指示计算引擎应用进行相关应用推荐,以及结合实体识别应用进行相关应用的显示。其中,显示的相关应用可供用户进行点击,以实现相关应用的功能开启。可选的,推荐应用显示指令中包括但不限于:用户点击的实体标注对应的实体类型和实体内容。
S411,计算引擎应用推荐与点击实体标注匹配的关联应用,并将关联应用信息发送至实体识别应用。
计算引擎应用接收到推荐应用显示指令,解析推荐应用显示指令,确定关联应用待推荐的实体类型,并推荐与该实体类型匹配的关联应用(也可称关联服务),也即推荐与点击实体标注匹配的关联应用。以实体类型为地址实体为例,计算引擎应用推荐的与地址实体对应的关联服务包括但不限于:在地图中显示,获取路线,复制,添加到通讯录,添加到备忘录,分享等。可选的,在计算引擎应用推荐的关联服务包括多种时,计算引擎应用可以为这多种关联服务分别设置优先级。
可选的,计算引擎应用在推荐与实体类型对应的关联服务时,可以根据与实体类型对应的默认推荐规则,推荐与实体类型对应的关联服务,以及设置各种关联服务的优先级。其中,默认推荐规则可以是结合大数据分析结果确定的。例如,计算引擎应用获取用户日常获取某实体类型的信息之后的各种用户意图,以及各种用户意图的优先级,这些用户意图可以是云端结合大数据统计确定的。示例性的,以电话号码实体类型为例,用户日常获取到电话号码后的用户意图包括但不限于:拨打电话,发送短信,添加到通讯录,复制,分享等,则可以根据这些用户意图推荐与电话号码实体类型对应的关联服务。假设,在这些用户意图中,拨打电话的频率最高,则可以将拨打电话这种关联服务的优先级设置为最高优先级。
示例性的,计算引擎应用在推荐与实体类型对应的关联服务时,若确定手机中安装了与同种关联服务对应的多个应用,则可以同时推荐这多个应用。例如,计算引擎应用在推荐地图服务时,若确定手机中安装有A地图和B地图这两个应用,则可以同时推荐这两个地图应用。
可选的,计算引擎应用在推荐与实体类型对应的关联服务时,还可以结合用户使用习惯,推荐与实体类型对应的关联服务,以及设置各种关联服务的优先级。也就是说,计算引擎应用学习用户操作手机的习惯,并根据学习结果推荐与实体类型对应的关联服务。示例性的,以地址实体类型为例,假设用户在获取地址后,一直是查询地址或分享地址,从未将地址添加到通讯录联系人。计算引擎应用学习到该用户操作习惯,可以将查询地址以及分享地址,作为推荐的与地址实体对应的关联服务,并将查询地址以及分享地址这两种关联服务的优先级设置为最高优先级。
可选的,计算引擎应用在推荐与实体类型分别对应的关联服务时,还可以结合感知服务获取到的用户操作以及用户画像,推荐与实体类型对应的关联服务,以及设置各种关联服务的优先级。示例性的,某天用户日程中存在一个会议安排,该日程信息中包括会议时间和会议详情,但不包括会议地址。感知服务感知到在该会议开始前(如一个小时前)查看了在日程中截图保存的会议详情截图,并在会议详情截图的实体识别结果中点击了与会议地址对应的实体标注。假设,感知服务获取到的用户画像为:用户出行从未开车,而是打车出行或乘公共交通工具出行。感知服务将获取到的用户操作以及用户画像发送给计算引擎应用,计算引擎应用即可结合用户操作及用户画像进行服务推荐。此时,计算引擎应用可以大概率地判断出用户希望打车出行至该会议地址,进而可以推荐与该会议地址对应的关联服务为打车服务,并推荐将该会议地址作为打车服务的目的地址。
计算引擎应用在完成与点击实体标注匹配的关联应用推荐之后,将相应的关联应用信息发送至实体识别应用。可选的,关联应用信息包括但不限于:关联应用的名称,以及关联应用的推荐优先级。
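作为上述推荐逻辑的一个简化示意,下面的Kotlin草图展示“先按信息类型取默认推荐规则、再用用户习惯调整优先级”的一种可能写法;其中默认规则的内容、优先级数值等均为假设,并非本申请限定的实现:

```kotlin
// 仅为示意性草图:默认规则、用户习惯的具体内容均为假设,实际优先级策略以实现为准
data class ServiceRecommendation(val serviceName: String, val priority: Int)   // priority 数值越小优先级越高

// 与信息类型对应的默认推荐规则(假设由大数据统计得到)
private val defaultRules = mapOf(
    "电话号码" to listOf("拨打电话", "发送短信", "添加到通讯录", "复制", "分享"),
    "地址" to listOf("在地图中显示", "获取路线", "复制", "添加到通讯录", "添加到备忘录", "分享")
)

// 对应 S411:先取默认规则,再将用户习惯命中的服务优先级整体前移
fun recommendServices(entityType: String, userHabits: List<String>): List<ServiceRecommendation> {
    val candidates = defaultRules[entityType] ?: return emptyList()
    return candidates
        .mapIndexed { index, name ->
            val boosted = name in userHabits
            ServiceRecommendation(name, if (boosted) index - candidates.size else index)
        }
        .sortedBy { it.priority }
}
```

其中“把用户习惯命中的服务整体前移”仅是对结合用户使用习惯调整优先级这一思路的一种简化表达,用户画像等因素可按同样方式叠加。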
S412,实体识别应用显示推荐应用列表卡片或推荐应用图标。
实体识别应用接收到计算引擎应用发送的关联应用信息,即可将关联应用信息进行 显示。示例性的,实体识别应用显示关联应用的标识,例如可以是图标和功能简述。
可选的,当计算引擎应用发送的推荐应用的数量为一个时,实体识别应用可以直接获取该推荐应用的图标,并显示该图标,以供用户点击开启应用;当计算引擎应用发送的推荐应用的数量为多个时,实体识别应用可以在获取这多个推荐应用的图标后,将这些图标以列表卡片的形式进行显示,以供用户从中选择一个推荐应用点击开启。
示例性的,实体识别应用显示推荐应用的图标时,还可以一并对应显示推荐应用的名称或应用功能简述(如“在地图中打开”,“添加至备忘录”等)。
示例性的,实体识别应用在将多个推荐应用的图标以列表卡片的形式进行显示时,还可以在其中一行(例如可以是首行)中,显示与用户点击的实体标注匹配的实体内容,以便于用户确认实体识别结果的准确性。
S413,应用运行管理服务响应于针对推荐应用的点击操作,向推荐应用发送应用开启指令。
当实体识别应用唯一显示的推荐应用符合用户意图时,用户可以点击该推荐应用的图标,以开启该推荐应用;当实体识别应用显示的推荐应用列表卡片中,包括符合用户意图的推荐应用时,用户可以点击该推荐应用的图标,以开启该推荐应用。
感知服务接收到用户针对推荐应用的点击操作,向应用运行管理服务发送针对推荐应用的应用开启指示。应用运行管理服务接收到该应用开启指示后,向相应的推荐应用发送应用开启指令。
S414,推荐应用开启,并向应用运行管理服务发送应用已开启的指示信息。
推荐应用接收到应用开启指令之后,执行应用开启操作,并在启动完成后,向应用运行管理服务发送应用已开启的指示信息。
S415,应用运行管理服务向实体识别应用发送推荐应用已开启的指示信息。
应用运行管理服务接收到推荐应用已开启的指示信息,将该指示信息发送至实体识别应用。
S416,实体识别应用向推荐应用发送与点击实体标注相匹配的实体内容。
实体识别应用接收到推荐应用已开启的指示信息之后,确认推荐应用已开启,随即可以向推荐应用发送与点击实体标注相匹配的实体内容,例如地址信息,电话号码信息等。
S417,推荐应用根据实体内容实现相应的应用功能。
推荐应用接收到与点击实体标注相匹配的实体内容,将该实体内容添加至匹配的信息编辑处,进而可以基于该实体内容实现推荐应用相应的功能。其中,推荐应用基于该实体内容实现推荐应用相应的功能,可以是直接实现相应的功能,也可以是在用户的相关操作下实现相应的功能。以实体内容为电话号码、推荐应用为电话为例,电话接收到电话号码,并将其添加至拨打号码编辑处,随即可以基于该电话号码直接实现电话拨打功能。以实体内容为地址信息、推荐应用为打车应用为例,打车应用接收到地址信息,并将其添加在目的地编辑处,此时,打车应用的出发地编辑处可以默认当前位置,用户点击确认既可以实现打车应用的打车功能。
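作为参考,下面给出一段仅作示意的Kotlin草图,以Android标准Intent为例展示“开启推荐应用并把实体内容填充到对应编辑处”的一种常见做法;实际推荐应用如何接收实体内容由各应用自行实现,此处的类型字符串与跳转方式均为示例性假设:

```kotlin
// 仅为示意:以 Android 标准 Intent 演示推荐应用接收实体内容并填充到对应编辑处的一种常见做法
import android.content.Context
import android.content.Intent
import android.net.Uri

fun openRecommendedApp(context: Context, entityType: String, entityContent: String) {
    val intent = when (entityType) {
        "电话号码" -> Intent(Intent.ACTION_DIAL, Uri.parse("tel:$entityContent"))                      // 拨号界面自动填入号码
        "地址" -> Intent(Intent.ACTION_VIEW, Uri.parse("geo:0,0?q=${Uri.encode(entityContent)}"))       // 地图应用检索该地址
        else -> Intent(Intent.ACTION_SEND).apply {                                                      // 其他类型:走系统分享
            type = "text/plain"
            putExtra(Intent.EXTRA_TEXT, entityContent)
        }
    }
    context.startActivity(intent)
}
```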
本申请实施例,在用户的动作触发下,手机对与该动作对应的图片进行实体识别, 并在实体识别完成时显示实体识别结果查看图标,以供用户点击查看图片的实体识别结果。若用户点击图片中某个实体的标注信息,手机即可为用户推荐符合用户意图的相关服务,以供用户选择,由此提高了用户的使用体验。
下面结合如图5所示的应用场景,也即用户在手机端截图的应用场景,对本申请实施例提供的信息推荐方法进行解释说明。参照图5(1),用户双击手机屏幕对运单详情界面进行截图。手机的感知服务接收到用户的截图操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体识别指令,以使实体识别应用结合计算引擎应用完成对截图图像的实体识别操作。当实体识别完成之前,截图预览界面可以参照图5(2);当实体识别完成之后,实体识别应用展示实体识别结果查看图标,此时预览界面可以参照图5(3)。在图5(3)中,实体识别结果查看图标501示例性地显示在手机显示界面右下角。如图5(3)所示,用户点击实体识别结果查看图标501。手机的感知服务接收到用户的点击操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体标注指令,以使实体识别应用完成实体标注操作。参照图5(4),实体识别应用根据实体识别结果进行实体标注操作,由于实体识别结果中涉及的实体均属于文本类实体,可以使用标注(如下划线)502对实体识别结果中涉及的实体进行标注。在图5(4)中,标注的实体包括快递单号实体、电话号码实体和地址实体。假设,用户感兴趣的实体为使用标注5021进行标注的地址实体,用户可以点击该标注,也即点击标注5021。手机的感知服务接收到用户的点击操作,响应于用户的操作行为,向计算引擎应用发送推荐应用显示指令,以使计算引擎应用结合实体识别应用完成推荐相关服务的操作。参照图5(5),当推荐应用为多个时,实体识别应用可以为用户展示推荐应用列表卡片503。在推荐应用列表卡片503中,每行显示一个推荐应用的图标以及应用功能简述,这些推荐应用可以按照优先级由高到低的顺序排列。示例性的,在推荐应用列表卡片的每行中,左侧显示推荐应用的图标,右侧显示应用功能简述。可选的,如图5(5)所示,在推荐应用列表卡片503中,首行5031还可以显示与用户点击的实体标注匹配的实体内容,如识别到的地址信息,以供用户核实是否为其感兴趣信息。示例性的,推荐应用列表卡片首行中,实体内容的左侧显示一个图标,用于指示本行为识别到的实体内容。当推荐应用列表卡片503中,包括符合用户意图的推荐应用时,如“获取路线”,用户可以点击该推荐应用的图标5032,以针对推荐应用列表卡片503的首行5031中的地址进行路线获取操作。在图5(6)中,地图应用已开启,且其目的地址编辑处504填充的地址信息为推荐应用列表卡片503的首行5031中的地址,也即与用户点击的标注5021匹配的实体内容。路线可以参照图5(6)中的路线。
下面结合如图6所示的应用场景,也即用户在图库中查看图片的应用场景,对本申请实施例提供的信息推荐方法进行解释说明。参照图6(1),用户点击手机图库中的图片601查看该图片。手机的感知服务接收到用户的图片查看操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体识别指令,以使实体识别应用结合计算引擎应用完成对用户查看图片的实体识别操作。当实体识别完成之后,实体识别应用展示实体识别结果查看图标,此时图片显示界面可以参照图6(2)。在图6(2)中,实体识别结果查看图标602示例性地显示在手机显示界面右下角。如图6(2)所示,用户点击实体识别结果查看图标602。手机的感知服务接收到用户的点击操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体标注指令,以使实体识别应用完成实体标注操作。参照图6(3),实体识别应用根据实体识别结果进行实体标注操作,由于实体识别结果中涉及的实体属于码类实体,可以使用标注(如点)603对实体识别结果中涉及的实体(也即二维码)进行标注。假设,用户感兴趣的实体为使用标注603进行标注的实体,用户可以点击该标注,手机的感知服务接收到用户的点击操作,响应于用户的操作行为,向计算引擎应用发送推荐应用显示指令,以使计算引擎应用结合实体识别应用完成推荐相关服务的操作。参照图6(4),当推荐应用为多个时,实体识别应用可以为用户展示推荐应用列表卡片604。在推荐应用列表卡片604中,每行显示一个推荐应用的图标以及应用功能简述,这些推荐应用可以按照优先级由高到低的顺序排列。可选的,如图6(4)所示,若二维码对应于某APP(Application,应用程序),则可以在推荐应用列表卡片604首行6041中显示二维码的实体识别结果,如链接信息。示例性的,推荐应用列表卡片首行中,实体识别结果的左侧显示一个图标,用于指示本行为实体识别结果。当推荐应用列表卡片604中,包括符合用户意图的推荐应用时,如“在APP中打开”,用户可以点击推荐应用列表卡片604中相应一行中的APP图标6042,以实现在该APP中打开识别到的二维码,二维码的识别结果示例性地可以参照图6(5)。
下面结合如图7所示的应用场景,也即用户在手机端拍照的应用场景,对本申请实施例提供的信息推荐方法进行解释说明。参照图7(1),用户点击手机拍照界面上的拍照图标701完成拍照操作。在手机拍照界面上,用户若要查看已拍图片,可以点击如图7(2)所示的拍照界面中的已拍图片查看图标702。手机的感知服务接收到用户的已拍图片查看操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体识别指令,以使实体识别应用结合计算引擎应用完成对已拍图片的实体识别操作。当实体识别完成之后,实体识别应用展示实体识别结果查看图标,此时已拍图片的查看界面可以参照图7(3)。在图7(3)中,实体识别结果查看图标703示例性地显示在手机显示界面右下角。参照图7(3),用户点击实体识别结果查看图标703。手机的感知服务接收到用户的点击操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体标注指令,以使实体识别应用完成实体标注操作。参照图7(4),实体识别应用根据实体识别结果进行实体标注操作,由于实体识别结果中涉及的实体均属于文本类实体,可以使用标注(如下划线)704对实体识别结果中涉及的实体进行标注。在图7(4)中,标注的实体包括身份证号实体和地址实体。假设,用户感兴趣的实体为使用标注7041进行标注的身份证号实体,用户可以点击该标注,也即点击标注7041。手机的感知服务接收到用户的点击操作,响应于用户的操作行为,向计算引擎应用发送推荐应用显示指令,以使计算引擎应用结合实体识别应用完成推荐相关服务的操作。参照图7(5),当推荐应用为多个时,实体识别应用可以为用户展示推荐应用列表卡片705。在推荐应用列表卡片705中,每行显示一个推荐应用的图标以及应用功能简述,这些推荐应用可以按照优先级由高到低的顺序排列。可选的,如图7(5)所示,在推荐应用列表卡片705中,首行7051还可以显示与用户点击的实体标注匹配的实体内容,如识别到身份证号,以供用户核实是否为其感兴趣信息。示例性的,推荐应用列表卡片首行中,实体识别结果的左侧显示一个图标,用于指示本行为实体识别结果。当推荐应用列表卡片705中,包括符合用户意图的推荐应用时,如“添加到备忘录”,用户可以点击该推荐应用的图标7052,以实现将推荐应用列表卡片705的首行7051中的身份证号添加至备忘录中,参照图7(6)。在图7(6)中,备忘录已开启,且备忘录编辑界面中填充的身份证号,为推荐应用列表卡片705的首行7051中的身份证号,也即与用户点击的标注7041匹配的实体内容。
在上述各应用场景中,可选的,用户点击推荐应用列表卡片中与推荐应用的图标对应的应用功能简述,可以实现与点击推荐应用的图标同样的效果。
下面再结合如图7所示的应用场景,也即用户在手机端拍照的应用场景,对本申请实施例提供的信息推荐方法进行解释说明。参照图7(1),用户点击手机拍照界面上的拍照图标701完成拍照操作。手机的感知服务接收到用户的拍照操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体识别指令,以使实体识别应用结合计算引擎应用完成对已拍图片的实体识别操作。在手机拍照界面上,用户若要查看已拍图片,可以点击如图7(2)所示的拍照界面中的已拍图片查看图标702。此时,若实体识别完成且图片中存在预设的实体类型,实体识别应用展示实体识别结果查看图标,此时已拍图片的查看界面可以参照图7(3)。在图7(3)中,实体识别结果查看图标703示例性地显示在手机显示界面右下角。参照图7(3),用户点击实体识别结果查看图标703。手机的感知服务接收到用户的点击操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体标注指令,以使实体识别应用完成实体标注操作。关于图7(4)~图7(6)的解释说明可以参见前述,在此不再赘述。
在拍照操作触发实体识别的应用场景中,考虑为给予用户更好的使用体验,本申请实施例具体实现时可以通过图片实体识别完成标识,向用户提示刚才拍摄的图片的实体识别已完成。本实施例能够避免用户过早点击查看已拍照片,而无法查看到图片实体识别结果的现象,以此提升了用户体验。
如图8所示为各模块的交互流程示意图,参照图8,本申请实施例提供的信息推荐方法的流程,具体包括:
S801,感知服务接收到拍照操作。
S802,感知服务向实体识别应用发送实体识别指令。
可选的,感知服务在确定图片满足实体识别条件时,向实体识别应用发送实体识别指令。此时,该实体识别条件可选地为下述的第三识别条件,在此不再赘述。
S803,实体识别应用对图片进行文本识别及码识别,并向计算引擎应用发送文本识别及码识别结果。
S804,计算引擎应用根据文本识别结果及码识别结果进行实体识别,并向实体识别应用发送实体识别结果。
S805,实体识别应用在第一图层蒙版中添加实体识别结果查看图标,并展示第一图层蒙版。
S806,实体识别应用在拍照界面中添加图片实体识别完成标识。
图片实体识别完成标识,用于指示对用户前一刻拍摄的图像已识别完成,且在该图像中识别到预设类型的实体(或称关键信息)。
在针对刚才已拍图片完成实体识别时,实体识别应用可以在拍照界面中(可以是任意位置处)添加图片实体识别完成标识,以向用户提示刚才已拍图片完成实体识别。可选的,参照图9,在拍照界面中的已拍图片查看图标702处,实体识别应用可以添加图片实体识别完成标识707。此时,用户点击已拍图片查看图标702,手机界面跳转至如图7(3)所示。用户点击图7(3)中实体识别结果查看图标703之后流程,可以参见前述相关解释,在此不再赘述。
在本申请实施例提供的信息推荐方法的流程中,S806之后的流程部分可以参见如图4所示的S406~S417,本实施例相关解释可以参见前述实施例,在此不再赘述。
在本场景一中,考虑到对图片进行实体识别的手机功耗问题,本申请实施例具体实现时需要对图片实体识别的触发时机进行分析,尽量避免在不必要时进行图片实体识别的问题,以降低手机功耗。
如图10所示为各模块的交互流程示意图,参照图10,本申请实施例提供的信息推荐方法的流程,具体包括:
S1001,感知服务接收到实体识别触发操作。
S1002,感知服务在确定图片满足实体识别条件时,向实体识别应用发送实体识别指令。
在本实施例,考虑到手机功耗问题,当感知服务接收到实体识别触发操作时,不再直接向实体识别应用发送实体识别指令,而是判断与实体识别触发操作对应的图片是否满足实体识别条件,若满足则向实体识别应用发送实体识别指令,否则不向实体识别应用发送实体识别指令。
其中,用户的实体识别触发操作的类型不同,相应的实体识别条件可以不同。可选的,实体识别触发操作的类型可以分为两类,一类为截图操作,一类为图片查看操作。示例性的,与截图操作相关的实体识别条件,可以包括但不限于:与截图操作对应的应用信息相关的第一识别条件,以及与用户截图分享习惯相关的第二识别条件。示例性的,与图片查看操作相关的实体识别条件,可以包括但不限于:与相机拍摄分类结果(或称相机拍摄模式)相关的第三识别条件,以及与截图操作对应的应用信息相关的第一识别条件。
示例性的,第一识别条件可以为应用程序属于预设的应用程序集合内。第一识别条件用于指示存在实体识别需求的应用程序。也即,预设的应用集合内的各个应用程序为存在实体识别需求的应用程序。例如,预设应用集合中可以包括通常涉及实体识别的应用程序,如WPS应用程序、PPT应用程序等,预设应用集合中可以不包括音乐应用程序,打车应用程序等不涉及实体识别的应用程序。
示例性的,第二识别条件可以为预设的用户操作习惯集合。第二识别条件用于指示符合实时进行实体识别的用户操作习惯。也即,预设的用户操作习惯集合内的各个用户操作习惯为符合实时进行实体识别的用户操作习惯。示例性的,预设的用户操作习惯集合可以包括用户截图分享习惯为打开截图后再分享。预设的用户操作习惯集合不包括未打开截图直接分享。若用户截图分享习惯为未打开截图直接分享(如截图后直接上滑分享),则该用户截图分享习惯不满足第二识别条件。
示例性的,第三识别条件可以为相机拍摄分类结果属于预设的分类集合内(或称之为相机拍摄模式属于预设的模式集合内)。该第三识别条件用于指示存在实体识别需求的相机拍摄分类或相机拍摄模式。其中,预设的分类集合内的各个相机拍摄分类为存在实体识别需求的相机拍摄分类。例如,预设分类集合中可以包括通常涉及实体识别的分类,如文档等。预设分类集合中不包括风景、人像等不涉及实体识别的分类。无论是相机拍摄分类结果,还是相机拍摄模式,都可以包括用于进一步分类或标识的分类标签(或称模式标签)。同一种相机拍摄模式,可以包括多种模式标签,有的模式标签指示存在实体识别需求,有的模式标签指示不存在实体识别需求。
如图11所示为感知服务的判断流程示意图,参照图11,感知服务针对图片是否满足实体识别条件的判断流程,具体包括:
S1101,感知服务接收实体识别触发操作,并确定操作类型。若操作类型为截图操作,感知服务则执行S1102;若操作类型为图片查看操作,感知服务则执行S1105。
S1102,感知服务获取截图操作对应的应用信息。
用户执行截图的操作时,手机的显示界面显示的应用,即为截图操作对应的应用。可选的,应用信息包括但不限于应用名称。
S1103,感知服务判断应用信息是否满足实体识别条件,若是则执行S1104,若否则执行S1108。
在本步骤中,实体识别条件可以是指上述第一识别条件。感知服务判断应用信息是否满足上述第一识别条件,是则进行其他实体识别条件的判断,否则确定图片不满足实体识别条件。
S1104,感知服务判断用户截图分享习惯是否满足截图操作触发识别条件,若是则执行S1109,若否则执行S1101。
在本步骤中,截图操作触发识别条件可以是指上述第二识别条件。感知服务获取用户截图分享习惯,并判断该用户截图分享习惯是否满足上述第二识别条件。若是,感知服务确定图片满足实体识别条件,进而可以向实体识别应用发送实体识别指令。若否,感知服务确定图片不满足实体识别条件。此时,感知服务需要感知用户的下一步操作是否为查看截图,并重新执行图片是否满足实体识别条件的判断流程。
S1105,感知服务确定图片来源。若图片来源为相机拍摄,感知服务则执行S1106;若图片来源为应用截图,感知服务则执行S1102。
用户执行图片查看操作时,感知服务需要确定图片的来源。可选的,感知服务可以在图片的属性信息中确定其图片来源。
S1106,感知服务获取相机拍摄分类结果。
在感知服务确定图片来源为相机拍摄时,感知服务还需要进一步获取图片的相机拍摄分类结果。可选的,感知服务可以在图片的属性信息中,确定图片的相机拍摄分类结果。
S1107,感知服务判断相机拍摄分类结果是否满足实体识别条件,若是则执行S1109,若否则执行S1108。
在本步骤中,实体识别条件可以是指上述第三识别条件。感知服务获取图片的相机拍摄分类结果,并判断该相机拍摄分类结果是否满足上述第三识别条件。若是,感知服务确定图片满足实体识别条件,进而可以向实体识别应用发送实体识别指令。若否,感知服务确定图片不满足实体识别条件。
类似的,感知服务获取图片的相机拍摄模式,通过判断相机拍摄模式是否满足实体识别条件,来判断图片是否满足实体识别条件。相应的,实体识别条件可以是指上述第三识别条件。感知服务获取图片的拍摄模式,并判断该拍摄模式是否满足相应的第三识别条件。若是,感知服务确定图片满足实体识别条件,进而可以向实体识别应用发送实体识别指令。若否,感知服务确定图片不满足实体识别条件。
需要指出的是,通过相机拍摄存储于图库中的图片,其属性信息中可以包括多种标识,例如相机拍摄分类结果(或称相机拍摄模式),以及一个或多个分类标签(或称模式标签)等。进而,感知服务在判断图片是否满足实体识别条件时,可以通过判断其属性信息中包括的各种标识是否满足实体识别条件来确定。
S1108,感知服务确定图片不满足实体识别条件。
S1109,感知服务确定图片满足实体识别条件。
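结合图11,下面给出一段仅作示意的Kotlin草图,归纳感知服务判断图片是否满足实体识别条件的流程;其中各识别条件集合(应用白名单、拍摄分类集合等)的内容均为假设,仅用于说明判断逻辑:

```kotlin
// 仅为示意性草图:识别条件集合的内容均为假设,对应图11中感知服务的判断流程
enum class TriggerType { SCREENSHOT, VIEW_IMAGE }
enum class ImageSource { CAMERA, APP_SCREENSHOT }

data class ImageInfo(
    val source: ImageSource,
    val foregroundApp: String,                     // 截图时界面显示的应用
    val cameraCategory: String?,                   // 相机拍摄分类结果(相机图片才有)
    val userOpensScreenshotBeforeShare: Boolean    // 用户截图分享习惯:是否先打开截图再分享
)

private val appsNeedingRecognition = setOf("WPS", "PPT", "浏览器")      // 第一识别条件(集合内容为假设)
private val categoriesNeedingRecognition = setOf("文档")                // 第三识别条件(集合内容为假设)

fun shouldRecognize(trigger: TriggerType, image: ImageInfo): Boolean = when (trigger) {
    TriggerType.SCREENSHOT ->
        image.foregroundApp in appsNeedingRecognition &&                 // S1103:应用信息满足第一识别条件
            image.userOpensScreenshotBeforeShare                         // S1104:分享习惯满足第二识别条件
    TriggerType.VIEW_IMAGE -> when (image.source) {
        ImageSource.CAMERA -> (image.cameraCategory ?: "") in categoriesNeedingRecognition   // S1107:第三识别条件
        ImageSource.APP_SCREENSHOT -> image.foregroundApp in appsNeedingRecognition          // 回到 S1102/S1103 的判断
    }
}
```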
S1003,实体识别应用对图片进行文本识别及码识别,并向计算引擎应用发送文本识别及码识别结果。
S1004,计算引擎应用根据文本识别结果及码识别结果进行实体识别,并向实体识别应用发送实体识别结果。
S1005,实体识别应用在第一图层蒙版中添加实体识别结果查看图标,并展示第一图层蒙版。
在本申请实施例提供的信息推荐方法的流程中,S1005之后的流程部分可以参见如图4所示的S406~S417,本实施例相关解释可以参见前述实施例,在此不再赘述。
在图库中查看图片的应用场景下,图库中可能存在大量未进行实体识别的图片。示例性的,在手机升级操作系统后,由于旧系统不具备图片实体识别的功能,而新系统具备图片实体识别的功能,会导致在操作系统升级后,手机图库中存在大量未进行实体识别的图片。考虑到对图片进行实体识别的手机功耗问题,本申请实施例具体实现时在手机充电时批量进行图片实体识别操作,以降低手机在用户使用时的功耗。
如图12所示为各模块的交互流程示意图,参照图12,本申请实施例提供的图片实体识别的流程,具体包括:
S1201,感知服务接收到充电操作。
充电操作,可以是将手机通过充电器连接到市电进行充电的操作,也可以是通过数据线连接到电子设备(如充电宝或其他终端)进行充电的操作。
S1202,感知服务向实体识别应用发送实体识别指令。
感知服务响应于用户的充电操作,向实体识别应用发送实体识别指令。在本实施例中,实体识别指令可以用于指示实体识别应用进行批量图片实体识别。
S1203,实体识别应用依次针对图库中的实体未识别图片进行图片文本识别及码识别,并向计算引擎应用发送文本识别及码识别结果。
S1204,计算引擎应用根据文本识别结果及码识别结果进行实体识别,并向实体识别应用发送实体识别结果。
S1205,实体识别应用在第一图层蒙版中添加实体识别结果查看图标,并展示第一图层蒙版。
S1206,感知服务接收到停止充电操作。
S1207,感知服务向实体识别应用发送实体识别停止指令。
感知服务响应于用户的停止充电操作,向实体识别应用发送实体识别停止指令。在本实施例中,实体识别停止指令可以用于指示实体识别应用停止进行图片实体识别。
在本实施例中,实体未识别图片,指的是未进行过实体识别的图片,不包括无法进行实体识别,或者是实体识别结果为空的图片。
可选的,感知服务可以在确定图片来源后,基于与截图操作对应的应用信息相关的第一识别条件,以及与相机拍摄分类结果相关的第三识别条件,确定无法进行实体识别的图片,并将这些图片进行标识。可选的,实体识别应用还可以,根据计算引擎应用发送的实体识别结果是否为空,给实体识别结果为空的图片进行标识。
实体识别应用结合图库中图片的标识,依次获取一张实体未识别图片,并结合计算引擎应用完成对该实体未识别图片的实体识别,直至完成对图库中所有实体未识别图片的实体识别,或者是直至停止给手机充电。
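作为上述充电批量识别过程的一个简化示意,下面的Kotlin草图用一个可中止的后台循环模拟“充电开始后依次识别、停止充电即中止”的行为;其中单张图片的识别函数由外部注入,相关名称均为示例性假设:

```kotlin
// 仅为示意:用一个可取消的后台循环模拟充电时批量识别、停止充电即中止的过程,识别函数为假设接口
class BatchRecognizer(private val recognizeOne: (imagePath: String) -> Unit) {
    @Volatile private var charging = false
    private var worker: Thread? = null

    // 对应 S1201/S1202:收到充电操作,开始依次识别图库中未识别的图片
    fun onChargingStarted(pendingImages: List<String>) {
        charging = true
        worker = Thread {
            for (path in pendingImages) {
                if (!charging) break        // 对应 S1206/S1207:停止充电后不再继续识别
                recognizeOne(path)
            }
        }.apply { start() }
    }

    // 对应 S1206:收到停止充电操作
    fun onChargingStopped() {
        charging = false
    }
}
```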
参照图13的手机充电应用场景,图13(1)示例性地示出了手机图库中存在的一张实体未识别图片(该图片中存在可识别的实体,但其显示界面上不存在实体识别结果查看图标),经过如图13(2)所示的充电操作,该实体未识别图片的显示界面变化可以参照图13(3)。此时,如图13(3)所示,在该图片显示界面上显示了实体识别结果查看图标1301,意味着该图片的实体识别已完成,可供用户查看实体识别结果。
在手机充电的应用场景下,若用户操作手机,并发起实体识别触发操作,此时信息推荐方法的流程,可以继续参见前述实施例,在此不再赘述,与针对大量实体未识别图片批量进行实体识别的流程可以并行。这样,本实施例结合了使用图片实体识别的实时方式和非实时方式,实现了对手机功耗的有效控制。
场景二
在本场景中,考虑到在直板手机界面上可复制文本中可能存在用户感兴趣实体,以此为例对本申请实施例具体实现方式进行详细说明。
如图14所示为各模块的交互流程示意图,参照图14,本申请实施例提供的信息推荐方法的流程,具体包括:
S1401,感知服务接收到复制文本到剪切板的操作。
S1402,感知服务向实体识别应用发送实体识别指令。
在本实施例中,实体识别指令可以用于指示实体识别应用针对用户复制的文本进行实体识别。可选的,用户复制的文本中可以包括一种或多种字符。
S1403,实体识别应用向计算引擎应用发送复制的文本。
在本实施例中,实体识别应用无需重复对用户复制文本进行文本识别,可以直接获取用户复制到剪切板上的文本,并发送至计算引擎应用进行实体识别。
S1404,计算引擎应用根据接收到的文本进行实体识别,确定与实体识别结果对应的推荐应用。
可选的,计算引擎应用根据接收到的文本进行实体识别,判断接收到的文本中是否只包括一种实体,如只包括地址实体,或只包括电话号码实体等,若是,则根据该实体类型进行关联应用的推荐,并将实体识别结果以及与实体识别结果匹配的关联应用发送至实体识别应用。如果计算引擎应用识别出接收到的文本中包括多种实体,如既包括地址实体,又包括电话号码实体,计算引擎应用可以不执行关联应用的推荐操作,同时,还可以将指示“无法识别到单一实体”的实体识别结果发送至实体识别应用。
示例性的,计算引擎应用如果识别到接收到的文本中仅包括一个语种的字符,如英文字符,则可以将该文本识别为待翻译实体,进而推荐翻译应用。
在本步骤中,计算引擎应用初步推荐与实体识别结果对应的关联应用。
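作为S1404判断逻辑的一个简化示意,下面的Kotlin草图展示“仅含一种实体则推荐、含多种实体则不推荐、仅含单一语种字符则视为待翻译”的一种可能写法;其中的判断规则(如用ASCII字符近似英文)为粗略假设:

```kotlin
// 仅为示意:单一实体判断与“待翻译文本”判断的规则均为简化假设
sealed class CopyRecognition {
    data class SingleEntity(val type: String, val content: String) : CopyRecognition()
    object ToTranslate : CopyRecognition()
    object NoSingleEntity : CopyRecognition()
}

// detectTypes 为假设的实体识别入口,返回文本中出现的实体类型集合
fun classifyCopiedText(text: String, detectTypes: (String) -> Set<String>): CopyRecognition {
    val types = detectTypes(text)
    return when {
        types.size == 1 -> CopyRecognition.SingleEntity(types.first(), text)
        types.isEmpty() && text.all { it.code < 128 } -> CopyRecognition.ToTranslate   // 粗略判断:仅含 ASCII(如英文)字符
        else -> CopyRecognition.NoSingleEntity          // 含多种实体时,不执行关联应用推荐
    }
}
```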
S1405,计算引擎应用向推荐应用SDK发送指示信息。
其中,指示信息用于指示推荐应用SDK对复制的文本进行判断,判断是否与本应用对应。可选的,指示信息中包括但不限于复制的文本。
当计算引擎应用推荐的关联应用有多个时,计算引擎应用分别向每个推荐应用的SDK发送指示信息,以使每个推荐应用SDK分别进行推荐应用确认。
S1406,推荐应用SDK在判断出复制的文本与本应用对应时,确定本应用为推荐应用。
示例性的,推荐应用SDK对接收到的复制文本进行语义分析,判断其是否为与本应用对应的文本。若是,则推荐应用SDK确认本应用为推荐应用。
在本步骤中,推荐应用SDK对计算引擎应用初步推荐的关联应用进行确认,进一步保证了推荐应用的准确性。
S1407,推荐应用SDK向计算引擎应用发送指示信息。
其中,指示信息用于向计算引擎应用指示本应用是否确认为推荐应用。示例性的,指示信息中包括但不限于确认标识和否认标识。
接收到计算引擎应用发送的指示信息的每个推荐应用SDK,均会向计算引擎应用反馈指示信息,以向计算引擎应用指示本应用是否为推荐应用。
S1408,计算引擎应用根据推荐应用SDK发送的指示信息,向实体识别应用发送实体识别结果及推荐应用信息。
若推荐应用SDK发送的指示信息指示其为推荐应用,则计算引擎应用将该推荐应用最终确认为推荐应用;若推荐应用SDK发送的指示信息指示其非推荐应用,则计算引擎应用不再推荐该应用。
计算引擎应用根据各推荐应用SDK发送的指示信息,生成最终的推荐应用信息,发送给实体识别应用。
这样,通过计算引擎应用的初步推荐,以及各推荐应用SDK的二次确认,极大地保证了推荐应用的准确性。
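作为上述两级确认机制的一个简化示意,下面的Kotlin草图定义了一个假设的统一SDK接口,并按各待定应用SDK的确认信息筛选最终展示的应用;接口名称与方法签名均为本段示例假设,并非真实存在的SDK:

```kotlin
// 仅为示意:RecommendSdk 为假设的各待定应用 SDK 的统一接口,实际确认逻辑由各应用自行实现
interface RecommendSdk {
    val appName: String
    fun confirm(copiedText: String): Boolean   // 对应 S1406/S1407:语义分析后反馈确认信息
}

// 对应 S1405~S1408:将复制的文本发给各待定应用的 SDK,按反馈筛选出最终待显示的应用
fun filterRecommendedApps(copiedText: String, candidates: List<RecommendSdk>): List<String> =
    candidates.filter { it.confirm(copiedText) }.map { it.appName }
```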
S1409,实体识别应用显示推荐应用列表卡片或推荐应用图标。
可选的,当计算引擎应用发送的推荐应用的数量为一个时,实体识别应用可以直接获取该推荐应用的图标,并显示该图标,以供用户点击开启应用;当计算引擎应用发送的推荐应用的数量为多个时,实体识别应用可以在获取这多个推荐应用的图标后,将这些图标以列表卡片的形式进行显示,以供用户从中选择一个推荐应用点击开启。
在本场景下,由于实体识别是通过用户复制文本到剪切板的操作触发的,故计算引擎应用无需再重复推荐复制服务。相应的,实体识别应用显示推荐应用列表卡片中也就不会包括复制服务,以此避免了重复推荐服务的问题。
S1410,应用运行管理服务响应于针对推荐应用的点击操作,向推荐应用发送应用开启指令。
S1411,推荐应用开启,并向应用运行管理服务发送应用已开启的指示信息。
S1412,应用运行管理服务向实体识别应用发送推荐应用已开启的指示信息。
S1413,实体识别应用向推荐应用发送识别到的实体内容。
S1414,推荐应用根据实体内容实现相应的应用功能。
关于本实施例中未尽详细解释之处可以参见前述实施例,在此不再赘述。
下面结合如图15和图16所示的应用场景,也即用户复制文本到剪切板的应用场景,对本申请实施例提供的信息推荐方法进行解释说明。
参照图15(1),用户在手机浏览界面中选中部分字符后,在手机浏览界面中出现包括“复制”选项和“搜索”选项的图标1501。用户点击图标1501中的“复制”选项,手机的感知服务接收到用户复制文本到剪切板的操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体识别指令,以使实体识别应用结合计算引擎应用完成对用户复制文本的实体识别操作,以及推荐相关服务的操作。在本应用场景中,计算引擎应用识别用户复制文本属于待翻译实体,故推荐翻译应用。如图15(2)所示,实体识别应用显示翻译应用图标1502,以供用户点击开启翻译应用。在图15(2)中,示例性的,翻译应用图标1502以悬浮球的形式显示。若翻译应用符合用户意图,用户则可以点击翻译应用图标1502,以实现翻译用户复制的文本。在图15(3)中,翻译应用在窗口1503中打开,且用户复制的文本显示在原文编辑处,并实现了原文到译文的翻译功能。
参照图16(1),用户在手机浏览界面中选中部分字符后,在手机浏览界面中出现包括“复制”选项和“搜索”选项的图标1601。用户点击图标1601中的“复制”选项,手机的感知服务接收到用户复制文本到剪切板的操作,响应于用户的操作行为,手机的感知服务向实体识别应用发送实体识别指令,以使实体识别应用结合计算引擎应用完成对用户复制文本的实体识别操作,以及推荐相关服务的操作。在本应用场景中,计算引擎应用识别用户复制文本属于电话号码实体,故推荐与电话号码实体匹配的多个推荐应用。如图16(2)所示,实体识别应用可以为用户展示推荐应用列表卡片1602。在推荐应用列表卡片1602中,每行显示一个推荐应用的图标以及应用功能简述,这些推荐应用可以按照优 先级由高到低的顺序排列。可选的,如图16(2)所示,在推荐应用列表卡片1602中,首行16021还可以显示与用户点击的实体标注匹配的实体内容,如识别到的电话号码,以供用户核实是否为其感兴趣信息。示例性的,推荐应用列表卡片首行中,实体识别结果的左侧显示一个图标,用于指示本行为实体识别结果。需要注意的是,虽然复制服务也是与电话号码实体匹配的推荐服务,但在本应用场景下,复制服务无需重复推荐,故推荐应用列表卡片1602不包含复制服务。当推荐应用列表卡片1602中,包括符合用户意图的推荐应用时,如“发送消息”,用户可以点击该推荐应用的图标16022,以实现发送信息至复制的电话号码,参照图16(3)。在图16(3)中,信息应用已开启,且信息应用显示界面1603中收信人电话号码编辑处16031填充的电话号码即为推荐应用列表卡片1602的首行16021中显示的电话号码,也即用户复制的文本。
本申请实施例,在用户复制文本到剪切板的动作触发下,手机对复制文本进行实体识别,并基于实体识别结果推荐符合用户意图的相关服务,以供用户选择,由此提高了用户的使用体验。
在前述实施例的基础上,还可以使用折叠屏手机实现信息推荐方法的流程,在此不再赘述。在采用折叠屏手机的应用场景下,由于折叠屏手机的界面展示形式多种多样,故可以将不同的推荐服务的显示界面在不同的显示区域中显示。
如图17所示为推荐应用显示区域的示意图。参照图17(1),当折叠屏手机处于折叠态(或称直板态)时,推荐应用可以直接在显示区域1701中开启,可以参见前述实施例中的示例,在此不再赘述。参照图17(2),当折叠屏手机处于展开态时,推荐应用可以直接在显示区域1702中开启,与图17(1)的显示情况类似,只是显示区域大小的问题,在此不再赘述。
当折叠屏手机处于展开态时,折叠屏手机的左右显示区域可以分别显示不同的应用,推荐应用的显示界面可以显示在左显示区域或右显示区域中。参照图17(3),示例性的,若原应用(用户执行实体识别触发操作的应用)在显示区域1703中开启,推荐应用可以在显示区域1704中开启,以使用户可以同时查看到这两个应用的显示界面。参照如图18所示的应用场景,折叠屏手机处于展开态时,折叠屏的左右显示区域分别显示不同的应用。如图18(1)所示,浏览应用的界面显示在显示区域1801中,聊天应用的界面显示在显示区域1802中。用户在浏览应用中截图,触发对截图图片进行实体识别,并基于用户选择的实体进行服务推荐,用户选择“在地图中打开”这一地图服务,开启地图应用。如图18(2)所示,浏览应用的截图界面继续在显示区域1801中显示,而地图应用作为推荐应用,可以在显示区域1802中显示。
当折叠屏手机处于展开态时,折叠屏手机的左右显示区域可以分别显示不同的应用,推荐应用还可以在半屏卡片窗口中开启。可选的,当推荐应用不属于独立APP时,可以采用在半屏卡片窗口中开启的方式。参照图17(4),示例性的,原应用1(用户执行实体识别触发操作的应用)的界面显示在显示区域1705中,推荐应用的界面可以显示在显示区域1705上的窗口(半屏卡片窗口)1706中开启。可选的,若原应用2为用户执行实体识别触发操作的应用,推荐应用也可以在窗口(半屏卡片窗口)1706中开启,对此不 做限定。参照如图19所示的应用场景,折叠屏手机处于展开态时,折叠屏的左右显示区域分别显示不同的应用。如图19(1)所示,浏览应用的界面显示在显示区域1901中,备忘录应用的界面显示在区域1902中。用户在浏览应用中复制文本至剪切板,触发对复制的文本进行实体识别,并进行服务推荐,用户选择推荐的翻译应用,开启翻译应用。如图19(2)所示,浏览应用的界面继续在显示区域1901中显示,备忘录应用的界面继续在显示区域1902中显示,而翻译应用作为推荐应用,可以在窗口1903中显示。
当折叠屏手机处于展开态时,折叠屏的左右显示区域分别显示不同的应用,推荐应用还可以在悬浮窗口中开启。参照图17(5),示例性的,若原应用3(用户执行实体识别触发操作的应用)在悬浮窗口1707中开启,推荐应用可以在新创建的悬浮窗口1708中开启。示例性的,悬浮窗口1707和悬浮窗口1708可以分别在折叠屏手机的左右显示区域中悬浮,如图17(5)所示。示例性的,悬浮窗口1708还可以与悬浮窗口1707同时悬浮在折叠屏手机的左显示区域(或右显示区域)中,悬浮窗口1708部分覆盖于悬浮窗口1707上。参照如图20所示的应用场景,折叠屏手机处于展开态时,折叠屏的左右显示区域以及悬浮窗口中分别显示不同的应用。如图20(1)所示,浏览应用的界面显示在显示区域2001中,备忘录应用的界面显示在区域2002中,图库应用的界面显示在悬浮窗口2003中。其中,悬浮窗口2003悬浮于显示区域2001上。用户在图库中查看图片,触发对图片进行实体识别,并进行服务推荐,用户选择推荐的在某APP中打开。如图20(2)所示,浏览应用的界面继续在显示区域2001中显示,备忘录应用的界面继续在显示区域2002中显示,图库应用的界面继续在悬浮窗口2003中显示,而某APP作为推荐应用,在新建的悬浮窗口2004中显示。其中,悬浮窗口2004悬浮于窗口2002上。
示例性的,悬浮窗口2004也可以同悬浮窗口2003一起悬浮于显示区域2001上。此时,悬浮窗口2004也可以悬浮于悬浮窗口2003上,覆盖部分悬浮窗口2003。其中,悬浮窗口的尺寸可以调整,本申请实施例对悬浮窗口的尺寸不做限定。
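作为对图17所述几种展示形态的归纳,下面给出一段仅作示意的Kotlin草图,按折叠屏状态与原应用窗口形态选择推荐应用的展示位置;枚举取值与选择规则均为示例性假设,并非对本申请展示方式的限定:

```kotlin
// 仅为示意:展示位置的枚举与选择规则均为对图17所述几种形态的简化归纳
enum class FoldState { FOLDED, EXPANDED }
enum class SourceWindow { FULL_SCREEN, LEFT_AREA, RIGHT_AREA, FLOATING }
enum class DisplayTarget { SAME_AREA, OTHER_AREA, HALF_SCREEN_CARD, NEW_FLOATING_WINDOW }

fun chooseDisplayTarget(fold: FoldState, source: SourceWindow, isStandaloneApp: Boolean): DisplayTarget =
    when {
        fold == FoldState.FOLDED -> DisplayTarget.SAME_AREA                   // 折叠态:直接在当前显示区域打开(图17(1))
        source == SourceWindow.FLOATING -> DisplayTarget.NEW_FLOATING_WINDOW  // 原应用在悬浮窗口:新建悬浮窗口(图17(5))
        !isStandaloneApp -> DisplayTarget.HALF_SCREEN_CARD                    // 非独立 APP:半屏卡片窗口(图17(4))
        else -> DisplayTarget.OTHER_AREA                                      // 展开态左右分屏:在另一显示区域打开(图17(3))
    }
```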
前述场景一中,在采用折叠屏手机,且折叠屏的左右显示区域分别显示不同应用的应用场景下,用户在其中一个显示区域显示的应用中发起实体识别触发操作,触发实体识别应用和计算引擎应用完成图片实体识别。用户点击实体识别结果查看图标,响应于用户的操作行为,实体识别应用展示实体标注。此时,用户可以长按并拖动实体标注,与该实体标注对应的实体内容在界面中会跟随移动。可选的,用户可以将该实体内容直接拖动至另一个显示区域显示的应用中。
如图21a~图21c所示为应用场景示意图,对用户长按拖动实体标注以实现搬移实体内容的流程进行详细说明。如图21a所示,折叠屏的左右显示区域分别显示图库界面和备忘录界面。用户在图库中查看图片,触发对图片进行实体识别,显示如图21a所示的实体标注结果。参照图21a和图21b,用户长按实体标注2101并拖动时,与实体标注2101对应的实体内容2102随着用户手指的移动而移动。若用户长按实体标注2101并将其拖动至备忘录界面中的可编辑位置,与实体标注2101对应的实体内容2102则会直接显示于该可编辑位置处。
这样,在折叠屏手机的左右显示区域分别显示不同应用的应用场景下,在其中一个显示区域中,用户长按并拖动实体标注,与该实体标注对应的实体内容在界面中会跟随移动,用户可以将该实体内容直接拖动至另一个显示区域显示的应用中。用户操作简单便捷,提升了用户体验。
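作为参考,下面给出一段仅作示意的Kotlin草图,用Android标准拖放API演示“长按实体标注拖动到另一显示区域的可编辑位置”的一种可能实现;实际的跨区域拖放由系统窗口机制配合完成,此处仅示意实体内容的携带与接收:

```kotlin
// 仅为示意:用 Android 标准拖放 API 演示长按实体标注后把实体内容拖入另一应用编辑框的一种实现方式
import android.content.ClipData
import android.view.DragEvent
import android.view.View
import android.widget.EditText

// 源侧:长按实体标注时,以实体内容构造拖放数据
fun startEntityDrag(annotationView: View, entityContent: String) {
    val clip = ClipData.newPlainText("entity", entityContent)
    annotationView.startDragAndDrop(clip, View.DragShadowBuilder(annotationView), null, 0)
}

// 目标侧:另一显示区域中应用的可编辑位置接收拖放,填入关键信息内容
fun acceptEntityDrop(editText: EditText) {
    editText.setOnDragListener { _, event ->
        when (event.action) {
            DragEvent.ACTION_DRAG_STARTED -> true
            DragEvent.ACTION_DROP -> {
                editText.setText(event.clipData.getItemAt(0).text)
                true
            }
            else -> true
        }
    }
}
```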
本实施例还提供一种计算机存储介质,该计算机存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的信息推荐方法。
本实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的信息推荐方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的信息推荐方法。
其中,本实施例提供的电子设备(如折叠屏手机)、计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
以上所述,以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (39)

  1. 一种信息推荐方法,其特征在于,包括:
    响应于接收到的第一操作,显示第一界面;其中,在所述第一界面中显示目标图像以及第一图标,所述第一图标用于指示在所述目标图像中识别到预设类型的关键信息;所述第一操作包括:截图操作,在图库中查看图像的操作;
    响应于对所述第一图标的第二操作,显示第二界面;其中,在所述第二界面中显示所述目标图像,以及对所述关键信息的标注;
    响应于对其中一个标注的第三操作,显示第三界面;其中,在所述第三界面中显示一个或多个应用程序的标识,所述应用程序是根据与所述其中一个标注对应的关键信息的信息类型推荐的;
    响应于对其中一个应用程序的标识的第四操作,显示第四界面;其中,在所述第四界面中显示所述其中一个应用程序的显示界面,所述显示界面的内容与所述其中一个标注对应的关键信息相关。
  2. 根据权利要求1所述的方法,其特征在于,所述响应于接收到的第一操作,显示第一界面,包括:
    响应于接收到的第一操作,对与所述第一操作对应的目标图像进行识别;
    在识别完成且所述目标图像中存在预设类型的关键信息时,显示所述第一界面。
  3. 根据权利要求2所述的方法,其特征在于,所述响应于接收到的第一操作,对与所述第一操作对应的目标图像进行识别,包括:
    响应于接收到的第一操作,在所述目标图像满足实时识别条件时,对所述目标图像进行识别。
  4. 根据权利要求3所述的方法,其特征在于,所述第一操作为截图操作时,所述目标图像满足实时识别条件,包括:
    执行所述截图操作时界面显示的应用程序满足第一识别条件,且用户截图分享习惯满足第二识别条件时,所述目标图像满足实时识别条件;
    其中,所述第一识别条件用于指示存在识别需求的应用程序,所述第二识别条件用于指示符合实时进行识别的用户操作习惯。
  5. 根据权利要求3所述的方法,其特征在于,所述第一操作为在图库中查看图像的操作时,所述目标图像满足实时识别条件,包括:
    在所述目标图像为相机拍摄图像的情况下,所述目标图像的拍摄属性满足第三识别条件时,所述目标图像满足实时识别条件;
    在所述目标图像为截图图像的情况下,执行截图操作时界面显示的应用程序满足第一识别条件时,所述目标图像满足实时识别条件;
    其中,所述第一识别条件用于指示存在识别需求的应用程序,所述第三识别条件用于指示存在图像识别需求的相机拍摄模式。
  6. 根据权利要求1所述的方法,其特征在于,在所述响应于接收到的第一操作,显示第一界面之前,还包括:
    显示第一相机拍摄界面;
    响应于接收到的拍照操作,将拍照获取到的目标图像存储于图库中,并对所述目标图像进行识别;
    在识别完成且所述目标图像中存在预设类型的关键信息时,所述响应于接收到的第一操作,显示第一界面,包括:
    响应于在图库中查看所述目标图像的操作,显示所述第一界面。
  7. 根据权利要求6所述的方法,其特征在于,所述对所述目标图像进行识别,包括:
    在所述目标图像的拍摄属性满足第三识别条件时,对所述目标图像进行识别;其中,所述第三识别条件用于指示存在图像识别需求的相机拍摄模式。
  8. 根据权利要求6所述的方法,其特征在于,在所述对所述目标图像进行识别之后,还包括:
    在识别完成且所述目标图像中存在预设类型的关键信息时,显示第二相机拍摄界面;
    其中,所述第二相机拍摄界面中还显示第二图标,所述第二图标用于指示对所述目标图像已识别完成,且在所述目标图像中识别到预设类型的关键信息。
  9. 根据权利要求1所述的方法,其特征在于,还包括:
    响应于接收到的充电操作,若图库中存在未进行识别的图像,则依次对所述未进行识别的图像进行识别;
    响应于接收到的充电停止操作,若所述图库中存在未进行识别的图像,则停止对所述未进行识别的图像进行识别的操作。
  10. 根据权利要求1所述的方法,其特征在于,在所述第三界面中显示一个应用程序的标识时,所述应用程序的标识以悬浮球的形式显示;
    在所述第三界面中显示多个应用程序的标识时,所述多个应用程序的标识以列表的形式显示;所述列表中还显示与所述其中一个标注对应的关键信息的内容。
  11. 根据权利要求1所述的方法,其特征在于,所述方法应用于折叠屏手机中,所述折叠屏呈展开态,包括第一显示区域和第二显示区域;
    所述响应于接收到的第一操作,显示第一界面,包括:
    响应于接收到的第一操作,在所述第一显示区域中显示第一界面;
    所述响应于对所述第一图标的第二操作,显示第二界面,包括:
    响应于对所述第一图标的第二操作,在所述第一显示区域中显示第二界面;
    所述响应于对其中一个标注的第三操作,显示第三界面,包括:
    响应于对其中一个标注的第三操作,在所述第一显示区域中显示第三界面;
    所述响应于对其中一个应用程序的标识的第四操作,显示第四界面,包括:
    响应于对其中一个应用程序的标识的第四操作,在所述第二显示区域中显示第四界面;或者,
    响应于对其中一个应用程序的标识的第四操作,在所述第一显示区域上的半屏卡片窗口中显示第四界面。
  12. 根据权利要求1所述的方法,其特征在于,
    所述响应于接收到的第一操作,显示第一界面,包括:
    响应于接收到的第一操作,在第一悬浮窗口中显示第一界面;
    所述响应于对所述第一图标的第二操作,显示第二界面,包括:
    响应于对所述第一图标的第二操作,在所述第一悬浮窗口中显示第二界面;
    所述响应于对其中一个标注的第三操作,显示第三界面,包括:
    响应于对其中一个标注的第三操作,在所述第一悬浮窗口中显示第三界面;
    所述响应于对其中一个应用程序的标识的第四操作,显示第四界面,包括:
    响应于对其中一个应用程序的标识的第四操作,在第二悬浮窗口中显示第四界面。
  13. 根据权利要求1所述的方法,其特征在于,所述方法应用于折叠屏手机中,所述折叠屏呈展开态,包括第一显示区域和第二显示区域,所述第一显示区域中显示第一应用的显示界面,所述第二显示区域中显示第二应用的显示界面;
    所述响应于接收到的第一操作,显示第一界面,包括:
    响应于接收到的对所述第一应用的第一操作,在所述第一显示区域中显示第一界面;
    所述响应于对所述第一图标的第二操作,显示第二界面,包括:
    响应于对所述第一图标的第二操作,在所述第一显示区域中显示第二界面;
    所述方法还包括:
    响应于对其中一个标注的长按操作及拖动操作,在所述第一显示区域上显示第三悬浮窗口,所述第三悬浮窗口移动至所述第二显示区域上;其中,所述拖动操作由所述第一显示区域指向所述第二显示区域,所述第三悬浮窗口中显示与所述其中一个标注对应的关键信息内容;
    响应于所述长按操作及拖动操作停止,在所述第二应用的显示界面中对应的信息编辑处显示所述关键信息内容。
  14. 根据权利要求1所述的方法,其特征在于,当与所述其中一个标注对应的关键信息的信息类型为码类时,所述显示界面的内容与所述其中一个标注对应的关键信息相关,包括:
    所述显示界面中显示与所述关键信息对应的链接界面;
    当与所述其中一个标注对应的关键信息的信息类型为字符类时,所述显示界面的内容与所述其中一个标注对应的关键信息相关,包括:
    在所述显示界面中对应的信息编辑处,显示与所述其中一个标注对应的关键信息的内容。
  15. 根据权利要求1所述的方法,其特征在于,所述应用程序是根据与所述其中一个标注对应的关键信息的信息类型,以及与所述信息类型对应的默认推荐规则推荐的;
    或者,所述应用程序是根据与所述其中一个标注对应的关键信息的信息类型,以及用户习惯推荐的;
    或者,所述应用程序是根据与所述其中一个标注对应的关键信息的信息类型,用户操作以及用户画像推荐的。
  16. 一种信息推荐方法,其特征在于,应用于折叠屏手机,所述折叠屏呈展开态,包括第一显示区域和第二显示区域;所述方法包括:
    在所述第一显示区域中显示第一界面;所述第一界面中包括可复制的文本;
    响应于在所述第一界面上接收到的复制操作,在所述第一显示区域中显示第二界面;其中,在所述第二界面中显示一个或多个应用程序的标识,所述应用程序是根据复制的文本对应的信息类型推荐的;
    响应于对其中一个应用程序的标识的点击操作,在所述第一显示区域的半屏卡片窗口中显示第三界面,或者在所述第二显示区域中显示所述第三界面;其中,在所述第三界面中显示所述其中一个应用程序的显示界面,在所述显示界面中对应的信息编辑处显示所述复制的文本。
  17. 根据权利要求16所述的方法,其特征在于,在所述第二界面中显示一个应用程序的标识时,所述应用程序的标识以悬浮球的形式显示;
    在所述第二界面中显示多个应用程序的标识时,所述多个应用程序的标识以列表的形式显示;所述列表中还显示与所述其中一个标注对应的关键信息的内容。
  18. 根据权利要求16所述的方法,其特征在于,所述响应于在所述第一界面上接收到的复制操作,在所述第一显示区域中显示第二界面,包括:
    响应于在所述第一界面上接收到的复制操作,对复制的文本进行识别;
    在所述复制的文本属于预设类型的关键信息时,根据所述复制的文本所属的信息类型推荐一个或多个待定应用;
    将所述复制的文本分别发送至各个所述待定应用程序的软件工具开发包SDK,并接收所述待定应用的SDK反馈的确认信息;其中,所述确认信息用于指示推荐是否正确;
    根据各个所述待定应用的SDK反馈的确认信息,在所述一个或多个待定应用中筛选出待显示的应用程序,在所述第一显示区域中显示第二界面。
  19. 根据权利要求16所述的方法,其特征在于,所述应用程序是根据复制的文本对应的信息类型,以及与所述信息类型对应的默认推荐规则推荐的;
    或者,所述应用程序是根据复制的文本对应的信息类型,以及用户习惯推荐的;
    或者,所述应用程序是根据复制的文本对应的信息类型,用户操作以及用户画像推荐的。
  20. 一种电子设备,其特征在于,包括:
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序存储在所述存储器上,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    响应于接收到的第一操作,显示第一界面;其中,在所述第一界面中显示目标图像以及第一图标,所述第一图标用于指示在所述目标图像中识别到预设类型的关键信息;所述第一操作包括:截图操作,在图库中查看图像的操作;
    响应于对所述第一图标的第二操作,显示第二界面;其中,在所述第二界面中显示所述目标图像,以及对所述关键信息的标注;
    响应于对其中一个标注的第三操作,显示第三界面;其中,在所述第三界面中显示一个或多个应用程序的标识,所述应用程序是根据与所述其中一个标注对应的关键信息的信息类型推荐的;
    响应于对其中一个应用程序的标识的第四操作,显示第四界面;其中,在所述第四界面中显示所述其中一个应用程序的显示界面,所述显示界面的内容与所述其中一个标注对应的关键信息相关。
  21. 根据权利要求20所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    响应于接收到的第一操作,对与所述第一操作对应的目标图像进行识别;
    在识别完成且所述目标图像中存在预设类型的关键信息时,显示所述第一界面。
  22. 根据权利要求21所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    响应于接收到的第一操作,在所述目标图像满足实时识别条件时,对所述目标图像进行识别。
  23. 根据权利要求22所述的电子设备,其特征在于,所述第一操作为截图操作时,执行所述截图操作时界面显示的应用程序满足第一识别条件,且用户截图分享习惯满足第二识别条件时,所述目标图像满足实时识别条件;
    其中,所述第一识别条件用于指示存在识别需求的应用程序,所述第二识别条件用于指示符合实时进行识别的用户操作习惯。
  24. 根据权利要求22所述的电子设备,其特征在于,所述第一操作为在图库中查看图像的操作时,在所述目标图像为相机拍摄图像的情况下,所述目标图像的拍摄属性满足第三识别条件时,所述目标图像满足实时识别条件;
    所述第一操作为在图库中查看图像的操作时,在所述目标图像为截图图像的情况下,执行截图操作时界面显示的应用程序满足第一识别条件时,所述目标图像满足实时识别条件;
    其中,所述第一识别条件用于指示存在识别需求的应用程序,所述第三识别条件用于指示存在图像识别需求的相机拍摄模式。
  25. 根据权利要求20所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备还执行以下步骤:
    显示第一相机拍摄界面;
    响应于接收到的拍照操作,将拍照获取到的目标图像存储于图库中,并对所述目标图像进行识别;
    响应于在图库中查看所述目标图像的操作,显示所述第一界面。
  26. 根据权利要求25所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    在所述目标图像的拍摄属性满足第三识别条件时,对所述目标图像进行识别;其中,所述第三识别条件用于指示存在图像识别需求的相机拍摄模式。
  27. 根据权利要求25所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备还执行以下步骤:
    在识别完成且所述目标图像中存在预设类型的关键信息时,显示第二相机拍摄界面;
    其中,所述第二相机拍摄界面中还显示第二图标,所述第二图标用于指示对所述目标图像已识别完成,且在所述目标图像中识别到预设类型的关键信息。
  28. 根据权利要求20所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备还执行以下步骤:
    响应于接收到的充电操作,若图库中存在未进行识别的图像,则依次对所述未进行识别的图像进行识别;
    响应于接收到的充电停止操作,若所述图库中存在未进行识别的图像,则停止对所述未进行识别的图像进行识别的操作。
  29. 根据权利要求20所述的电子设备,其特征在于,在所述第三界面中显示一个应用程序的标识时,所述应用程序的标识以悬浮球的形式显示;
在所述第三界面中显示多个应用程序的标识时,所述多个应用程序的标识以列表的形式显示;所述列表中还显示与所述其中一个标注对应的关键信息的内容。
  30. 根据权利要求20所述的电子设备,其特征在于,所述电子设备为折叠屏手机,所述折叠屏呈展开态,包括第一显示区域和第二显示区域;
    当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    响应于接收到的第一操作,在所述第一显示区域中显示第一界面;
    响应于对所述第一图标的第二操作,在所述第一显示区域中显示第二界面;
    响应于对其中一个标注的第三操作,在所述第一显示区域中显示第三界面;
    响应于对其中一个应用程序的标识的第四操作,在所述第二显示区域中显示第四界面;或者,响应于对其中一个应用程序的标识的第四操作,在所述第一显示区域上的半屏卡片窗口中显示第四界面。
  31. 根据权利要求20所述的电子设备,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    响应于接收到的第一操作,在第一悬浮窗口中显示第一界面;
    响应于对所述第一图标的第二操作,在所述第一悬浮窗口中显示第二界面;
    响应于对其中一个标注的第三操作,在所述第一悬浮窗口中显示第三界面;
    响应于对其中一个应用程序的标识的第四操作,在第二悬浮窗口中显示第四界面。
  32. 根据权利要求20所述的电子设备,其特征在于,所述电子设备为折叠屏手机,所述折叠屏呈展开态,包括第一显示区域和第二显示区域;所述第一显示区域中显示第一应用的显示界面,所述第二显示区域中显示第二应用的显示界面;
    当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备执行以下步骤:
    响应于接收到的对所述第一应用的第一操作,在所述第一显示区域中显示第一界面;
    响应于对所述第一图标的第二操作,在所述第一显示区域中显示第二界面;
    当所述计算机程序被所述一个或多个处理器执行时,使得所述电子设备还执行以下步骤:
    响应于对其中一个标注的长按操作及拖动操作,在所述第一显示区域上显示第三悬浮窗口,所述第三悬浮窗口移动至所述第二显示区域上;其中,所述拖动操作由所述第一显示区域指向所述第二显示区域,所述第三悬浮窗口中显示与所述其中一个标注对应的关键信息内容;
    响应于所述长按操作及拖动操作停止,在所述第二应用的显示界面中对应的信息编辑处显示所述关键信息内容。
  33. 根据权利要求20所述的电子设备,其特征在于,当与所述其中一个标注对应的关键信息的信息类型为码类时,所述显示界面中显示与所述关键信息对应的链接界面;
    当与所述其中一个标注对应的关键信息的信息类型为字符类时,在所述显示界面中对应的信息编辑处,显示与所述其中一个标注对应的关键信息的内容。
  34. 根据权利要求20所述的电子设备,其特征在于,所述应用程序是根据与所述其中一个标注对应的关键信息的信息类型,以及与所述信息类型对应的默认推荐规则推荐的;
    或者,所述应用程序是根据与所述其中一个标注对应的关键信息的信息类型,以及用户习惯推荐的;
    或者,所述应用程序是根据与所述其中一个标注对应的关键信息的信息类型,用户操作以及用户画像推荐的。
  35. 一种折叠屏手机,其特征在于,所述折叠屏呈展开态,包括第一显示区域和第二显示区域;所述折叠屏手机包括:
    一个或多个处理器;
    存储器;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序存储在所述存储器上,当所述计算机程序被所述一个或多个处理器执行时,使得所述折叠屏手机执行以下步骤:
    在所述第一显示区域中显示第一界面;所述第一界面中包括可复制的文本;
    响应于在所述第一界面上接收到的复制操作,在所述第一显示区域中显示第二界面;其中,在所述第二界面中显示一个或多个应用程序的标识,所述应用程序是根据复制的文本对应的信息类型推荐的;
    响应于对其中一个应用程序的标识的点击操作,在所述第一显示区域的半屏卡片窗口中显示第三界面,或者在所述第二显示区域中显示所述第三界面;其中,在所述第三界面中显示所述其中一个应用程序的显示界面,在所述显示界面中对应的信息编辑处显示所述复制的文本。
  36. 根据权利要求35所述的折叠屏手机,其特征在于,在所述第二界面中显示一个应用程序的标识时,所述应用程序的标识以悬浮球的形式显示;
    在所述第二界面中显示多个应用程序的标识时,所述多个应用程序的标识以列表的形式显示;所述列表中还显示与所述其中一个标注对应的关键信息的内容。
  37. 根据权利要求35所述的折叠屏手机,其特征在于,当所述计算机程序被所述一个或多个处理器执行时,使得所述折叠屏手机执行以下步骤:
    响应于在所述第一界面上接收到的复制操作,对复制的文本进行识别;
    在所述复制的文本属于预设类型的关键信息时,根据所述复制的文本所属的信息类型推荐一个或多个待定应用;
    将所述复制的文本分别发送至各个所述待定应用程序的软件工具开发包SDK,并接收所述待定应用的SDK反馈的确认信息;其中,所述确认信息用于指示推荐是否正确;
    根据各个所述待定应用的SDK反馈的确认信息,在所述一个或多个待定应用中筛选出待显示的应用程序,在所述第一显示区域中显示第二界面。
  38. 根据权利要求35所述的折叠屏手机,其特征在于,所述应用程序是根据复制的文本对应的信息类型,以及与所述信息类型对应的默认推荐规则推荐的;
    或者,所述应用程序是根据复制的文本对应的信息类型,以及用户习惯推荐的;
    或者,所述应用程序是根据复制的文本对应的信息类型,用户操作以及用户画像推荐的。
  39. 一种计算机可读存储介质,包括计算机程序,其特征在于,当所述计算机程序在电子设备上运行时,使得所述电子设备执行如权利要求1-15中任意一项所述的信息推荐方法,或者使得所述电子设备执行如权利要求16-19中任意一项所述的信息推荐方法。
PCT/CN2022/115350 2021-09-24 2022-08-29 信息推荐方法及电子设备 WO2023045702A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111123937.4A CN115857737A (zh) 2021-09-24 2021-09-24 信息推荐方法及电子设备
CN202111123937.4 2021-09-24

Publications (1)

Publication Number Publication Date
WO2023045702A1 true WO2023045702A1 (zh) 2023-03-30



Also Published As

Publication number Publication date
CN115857737A (zh) 2023-03-28

