CN111638846A - Image recognition method and device and electronic equipment - Google Patents

Image recognition method and device and electronic equipment

Info

Publication number
CN111638846A
CN111638846A (application number CN202010457926.9A)
Authority
CN
China
Prior art keywords
information
control
intention
input
intention information
Prior art date
Legal status
Pending
Application number
CN202010457926.9A
Other languages
Chinese (zh)
Inventor
熊琦松
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202010457926.9A priority Critical patent/CN111638846A/en
Publication of CN111638846A publication Critical patent/CN111638846A/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/953: Querying, e.g. by the use of web search engines
    • G06F 16/9535: Search customisation based on user profiles and personalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90: Details of database functions independent of the retrieved data types
    • G06F 16/95: Retrieval from the web
    • G06F 16/955: Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • G06F 16/9558: Details of hyperlinks; Management of linked annotations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/10: Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image recognition method and apparatus and an electronic device, belonging to the field of communication technology. The method can solve the problem that image recognition cannot accurately provide information to the user. The method comprises the following steps: identifying a target image to obtain first information; displaying a first control, where the first control comprises M pieces of first intention information, each piece of first intention information indicates a user intention, and M is a positive integer; receiving a first input selecting target intention information from the M pieces of first intention information; and, in response to the first input, displaying second information, where the second information is obtained by searching the first information according to the target intention information. The method can be applied to image recognition scenarios.

Description

Image recognition method and device and electronic equipment
Technical Field
The embodiment of the application relates to the field of communication technology, and in particular to an image recognition method and apparatus and an electronic device.
Background
With the development of electronic technology, an electronic device can identify and search the content of a picture and display the searched information related to the content. For example, a purchase link or encyclopedia introduction for the item in the picture is displayed.
At present, after the electronic device identifies the content of a picture, it may display information related to that content according to a user intention that corresponds to the category to which the content belongs. For example, if the electronic device recognizes that the content of the picture is a chair, the encyclopedia class of the chair is the furniture class, and the user intention corresponding to the furniture class is a purchase intention, then the electronic device may display a purchase link for the chair after recognizing the picture content. For another example, if the electronic device identifies that the content of the picture is a tree, the encyclopedia class of the tree is the plant class, and the user intention corresponding to the plant class is a learning intention, then the electronic device may display an encyclopedia introduction of the tree after identifying the picture content.
However, in the above process, the category to which the content in the picture belongs corresponds to a preset user intention. In actual use, the user intention determined by the electronic device may differ from the user's actual intention, so the related information displayed after the electronic device identifies the picture content may fail to meet the user's requirement, and the device thus cannot accurately provide information to the user.
Disclosure of Invention
The embodiment of the application aims to provide an image recognition method, an image recognition apparatus, and an electronic device, which can solve the problem that image recognition cannot accurately provide information to a user.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image recognition method, where the method includes: identifying a target image to obtain first information; displaying a first control, where the first control comprises M pieces of first intention information, each piece of first intention information indicates a user intention, and M is a positive integer; receiving a first input selecting target intention information from the M pieces of first intention information; and, in response to the first input, displaying second information, where the second information is obtained by searching the first information according to the target intention information.
In a second aspect, an embodiment of the present application provides an image recognition apparatus, including: the device comprises a processing module, a display module and a receiving module. The processing module is used for identifying a target image to obtain first information; the display module is used for displaying a first control, the first control comprises M pieces of first intention information, each piece of first intention information is used for indicating a user intention, and M is a positive integer; a receiving module, configured to receive a first input of target intention information in the M pieces of first intention information; the display module is further used for responding to the first input received by the receiving module and displaying second information, and the second information is obtained by searching the first information according to the target intention information.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, and the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In the embodiment of the present application, after the image recognition apparatus recognizes the target image to obtain the first information and displays M pieces of first intention information in the first control (each piece of first intention information indicates one user intention, and M is a positive integer), upon receiving a first input selecting target intention information from the M pieces of first intention information, the apparatus may display second information obtained by searching the first information according to the target intention information. In this scheme, when the image recognition apparatus searches the first information recognized from the target image, it searches according to the target intention information selected by the user. Because the target intention information accurately reflects the user's actual requirement in triggering the search, the second information obtained in this way better satisfies that requirement, and the apparatus can thus more accurately provide the user with information that meets the user's actual needs. This improves the intelligence of image recognition on the electronic device and the user experience.
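As a minimal illustration, the four steps of the method in the first aspect can be sketched in Python; all names here (recognize_image, build_first_control, search_by_intent) are hypothetical stand-ins, not from the patent.

```python
# Hypothetical sketch of steps 101-104; the recognizer and search are
# stand-ins that return fixed strings for illustration.

def recognize_image(image):
    # Step 1: identify the target image to obtain first information.
    return ["chair", "furniture", "metal"]

def build_first_control(intents):
    # Step 2: display a first control with M pieces of first intention
    # information (modeled here as a simple list of labels).
    return list(intents)

def search_by_intent(first_info, target_intent):
    # Step 4: search the first information according to the target
    # intention information to obtain the second information.
    return "results for '{}' with intent '{}'".format(
        " ".join(first_info), target_intent)

first_info = recognize_image(object())                     # step 1
control = build_first_control(["buy", "learn"])            # step 2
target_intent = control[0]                                 # step 3: first input
second_info = search_by_intent(first_info, target_intent)  # step 4
```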
Drawings
Fig. 1 is a schematic diagram of an image recognition method according to an embodiment of the present disclosure;
fig. 2 is an operation diagram illustrating an electronic device displaying second information according to an embodiment of the present disclosure;
fig. 3 is a second schematic diagram of an image recognition method according to an embodiment of the present application;
fig. 4 is an operation diagram of an electronic device displaying a first control according to an embodiment of the present application;
fig. 5 is a third schematic diagram of an image recognition method according to an embodiment of the present application;
fig. 6 is a second schematic view illustrating an operation of an electronic device displaying a first control according to an embodiment of the present application;
fig. 7 is a schematic diagram illustrating an electronic device updating intention information displayed in a control according to an embodiment of the present application;
fig. 8 is a fourth schematic diagram of an image recognition method according to an embodiment of the present application;
fig. 9 is a schematic diagram of an electronic device operating on second information according to an embodiment of the present application;
fig. 10 is an operation diagram illustrating an operation of setting intention information in a control of an electronic device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of an image recognition apparatus according to an embodiment of the present disclosure;
fig. 12 is a second schematic structural diagram of an image recognition apparatus according to an embodiment of the present application;
fig. 13 is a hardware schematic diagram of an electronic device according to an embodiment of the present disclosure;
fig. 14 is a second hardware schematic diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application can operate in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The image recognition method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
In the embodiment of the invention, the electronic device can identify an image to obtain the information in the image, and can search the identified information to obtain related information. Specifically, after the electronic device identifies an image to obtain the information in it (denoted as first information), the electronic device may first display to the user a first control including a plurality of pieces of intention information. The user may then select, according to the user's requirement, target intention information from the plurality of pieces of intention information in the first control. After the user selects the target intention information, the electronic device may display second information obtained by searching the first information according to the target intention information. This second information better satisfies the user's actual requirement, so the electronic device can more accurately provide information that meets the user's actual needs. This improves the intelligence of image recognition on the electronic device and the user experience.
In the embodiment of the present invention, the scene of the image recognized by the electronic device may be specifically any one of the following scenes:
In a first scenario, when a user does not know a certain object but wants to know its name or related information, the user may operate the image recognition apparatus to photograph or scan the object, thereby triggering the apparatus to acquire and recognize an image of the object.
In a second scenario, when the user needs to compare a commodity's price in a physical store with its selling price on a website, the user may operate the image recognition apparatus to photograph or scan the commodity, thereby triggering the apparatus to acquire and recognize an image of the commodity.
In a third scenario, when the user wants to listen to the songs on a certain record, the user may operate the image recognition apparatus to photograph or scan the record's cover, thereby triggering the apparatus to acquire and recognize an image of the record.
As shown in fig. 1, an embodiment of the present application provides an image recognition method, which may include steps 101 to 104 described below.
Step 101, the electronic device identifies a target image to obtain first information.
It should be noted that, in the embodiment of the present application, the source of the target image is not particularly limited, and may be determined according to actual use requirements. Specifically, the target image may be an image captured by the electronic device, an image stored in the electronic device, or an image acquired by the electronic device from a server or other electronic devices.
Optionally, in this embodiment of the application, recognition of the image may be triggered in either of the following manners. In a first manner, the user triggers the electronic device to recognize the target image through a preset input (e.g., the first target input described below). Specifically, the preset input may be an input controlling the electronic device to capture and recognize a target image, an input controlling the electronic device to scan and recognize a target image (e.g., via a scanning function), or an input controlling the electronic device to recognize a target image stored in or downloaded to the electronic device. In a second manner, the electronic device may automatically trigger recognition of the target image. Specifically, when a camera of the electronic device is active, if the electronic device detects that an image acquired by the camera includes a preset image identifier (such as a face image identifier or a license plate image identifier), the electronic device acquires the target image and recognizes it to obtain the first information.
Optionally, in this embodiment of the application, the first information is obtained after the electronic device identifies the target image, and is used to represent content in the target image. Specifically, the first information may include at least one of: the name of the content in the target image, the encyclopedia classification of the content in the target image, keywords related to the content in the target image, and the like. For example, if the target image is a chair, the first information obtained after the electronic device recognizes the target image may include: "chair" (i.e., the name of the content in the target image), "furniture" (i.e., the encyclopedia in which the chair is located), "metal," "no handle," "three legs" (i.e., keywords related to the chair). The content of the first information may be determined according to actual use requirements, and the embodiment of the present application is not particularly limited.
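A minimal sketch of one way to hold such first information follows; the class and field names are assumptions for illustration, not from the patent.

```python
from dataclasses import dataclass, field

# Hypothetical container for the first information described above:
# the name of the content, its encyclopedia classification, and
# related keywords.
@dataclass
class FirstInformation:
    name: str                                     # e.g. "chair"
    classification: str                           # e.g. "furniture"
    keywords: list = field(default_factory=list)  # e.g. ["metal"]

info = FirstInformation(
    name="chair",
    classification="furniture",
    keywords=["metal", "no handle", "three legs"],
)
```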
And 102, displaying a first control by the electronic equipment.
The first control comprises M pieces of first intention information, where each piece of first intention information indicates a user intention and M is a positive integer.
it should be noted that, in the embodiment of the present application, the execution sequence of the step 101 and the step 102 is not limited, and may be determined according to actual use requirements. Specifically, the electronic device may first execute step 101 and then execute step 102, or the electronic device may first execute step 102 and then execute step 101. It should be noted that, the following embodiment is exemplified by performing step 101 and then performing step 102.
Optionally, in this embodiment of the present application, step 101 and step 102 may be two execution results triggered by one input action. Specifically, before performing step 101, the electronic device may receive a first target input of the user, and in response to the first target input, the electronic device may identify the target image, obtain the first information, and display the first control. The first target input is used for triggering the electronic device to perform image recognition.
Optionally, the shape and size of the first control are not specifically limited in this embodiment of the application, and may be determined according to actual use requirements. Specifically, the shape of the first control may be: circular, annular, rectangular, diamond-shaped, triangular, hexagonal, etc. The size of the first control is preferably suitable for screen display of the electronic equipment and convenient for users to use. It should be noted that, the following embodiments are exemplified by taking the first control as a circular control, which does not limit the present application.
It should be noted that, in the embodiment of the present application, each of the first intention information is used to indicate or correspond to a preset user intention. The user intention may be an intention manually set by the user or an intention stored in the electronic device system, and specifically, the user intention may be any of the following: buy, learn, travel, eat, drink, listen, etc. The first intention information is a visual identification representing a certain intention of the user, and the first intention information may receive an operation of the user (for example, an operation of the user selecting the first intention information).
Optionally, in this embodiment of the application, a display manner of the first control including the M pieces of first intention information may be any one of the following: in the mode 1, M sub-controls are displayed in the first control, and each sub-control corresponds to one intention information in the M first intention information. Mode 2, a table is displayed in the first control, and the table includes M cells, and each cell corresponds to one intention information of the M first intention information. In mode 3, the first control includes a drop-down list, where the list includes M items, and each item in the M items corresponds to one intention information in the M pieces of first intention information. The determination may be specifically performed according to actual use requirements, and the embodiment of the present application is not specifically limited. The following examples are given by way of illustration in the above mode 1, and are not intended to limit the present application.
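Mode 1 above (one sub-control per piece of first intention information) can be sketched as follows; the SubControl class and its methods are hypothetical illustrations.

```python
# Hypothetical sketch of mode 1: the first control holds one sub-control
# per piece of first intention information, and tapping a sub-control
# selects the intention it carries.
class SubControl:
    def __init__(self, intent):
        self.intent = intent

    def on_tap(self):
        # Selecting this sub-control yields its intention information.
        return self.intent

def build_first_control(intents):
    # One sub-control for each of the M pieces of first intention info.
    return [SubControl(intent) for intent in intents]

control = build_first_control(["buy", "learn", "listen"])
```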
Step 103, the electronic device receives a first input of target intention information in the M pieces of first intention information.
Optionally, in this embodiment of the application, the first input is an input of target intention information selected from the M pieces of first intention information. Specifically, the first input may be a touch input to target intention information in M pieces of first intention information displayed in the first control, where the touch input may be any one of: single-click input to the target intention information, double-click input to the target intention information, long-press input to the target intention information, drag input to the target intention information along a preset trajectory, and the like. The first input may also be a voice input to the electronic device, the content of the voice input being used to instruct the electronic device to select the target intention information from the M first intention information. For example, if the content of the voice input is "purchase", the electronic device may select information indicating a purchase intention (i.e., selected target intention information) from the M pieces of first intention information. The first input may also be an input to a physical key of the electronic device, which may be used to select the target intention information from the M first intention information. For example, a volume key of the electronic device may be used to select the first intention information, and the user may press the volume key a plurality of times until the target intention information is selected. The determination may be specifically performed according to actual use requirements, and the embodiment of the present application is not specifically limited.
It should be noted that, in this embodiment of the application, each of the M pieces of first intention information further corresponds to an algorithm related to the intention it indicates, and after the user selects the target intention information from the first intention information, the electronic device may perform a search operation using the algorithm corresponding to the target intention information. For example, assume the intention corresponding to the target intention information is a "purchase" intention, and the algorithm related to the purchase intention filters items related to the first information from a shopping website. If the user triggers the electronic device to recognize that the target image is a chair, so that the first information is "chair" and "no handle", and the user then selects the "purchase" intention, the electronic device may filter from the shopping website a purchase link for an item associated with the keywords "chair" and "no handle" (i.e., the first information) according to the algorithm related to the "purchase" intention, and may display the purchase link (i.e., the second information).
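The intention-to-algorithm correspondence described above can be sketched as a dispatch table; the URL schemes and function names are illustrative stand-ins, not real services.

```python
# Hypothetical stand-ins for intention-specific search algorithms.
def shopping_search(keywords):
    # Would filter items related to the keywords from a shopping site.
    return "shop://" + "+".join(keywords)

def encyclopedia_search(keywords):
    # Would look up an encyclopedia introduction for the keywords.
    return "wiki://" + "+".join(keywords)

# Each piece of first intention information maps to its algorithm.
INTENT_ALGORITHMS = {
    "purchase": shopping_search,
    "learn": encyclopedia_search,
}

def search(first_info, target_intent):
    # Run the algorithm corresponding to the selected target intention.
    return INTENT_ALGORITHMS[target_intent](first_info)

link = search(["chair", "no handle"], "purchase")
```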
And 104, the electronic equipment responds to the first input and displays second information.
The second information is obtained by searching the first information according to the target intention information.
It should be noted that, in the embodiment of the present application, the interface displaying the second information may be any one of the following: the interface displaying the first control, the desktop of the electronic device, a newly created interface of the electronic device, an interface loaded in an application program, and the like. The interface in which the electronic device displays the second information is not particularly limited and can be determined according to actual use requirements. Further, when the electronic device displays the second information, the electronic device may cancel displaying the first control.
Optionally, in this embodiment of the application, the second information is information that satisfies the user intention indicated by the target intention information, that is, information obtained by searching the first information according to the target intention information. In other words, the second information is information that satisfies the user's needs. Specifically, the second information may be any one of the following: shopping link information obtained by searching the first information, an encyclopedia introduction of a target object obtained by searching the first information, audio or video file resources obtained by searching the first information, a calculation result for a target object obtained by searching the first information, and the like. The determination may be performed according to actual use requirements, and the embodiment of the present application is not specifically limited.
In the embodiment of the present application, the search algorithm corresponding to the target intention information can perform a search based on intention information corresponding to the target intention information. That is, the electronic device may search the first information according to a search algorithm corresponding to the target intention information to obtain the second information.
For example, fig. 2 is an operation diagram of the electronic device displaying the second information. If the user wants to query the price of a chair in a furniture market, the user can use the electronic device 00 to capture an image 001 of the chair. As shown in fig. 2 (a), the electronic device 00 can display the image 001 of the chair in the image recognition interface 002 and display a control 003 (i.e., a first control). The control 003 includes 6 regions, each indicating a user intention: encyclopedia, travel, listen, buy, eat, and drink, among others (i.e., the areas shown by the ellipses in the figure). The user can click the "purchase" region in the control 003; as shown in fig. 2 (b), in response to the click input (i.e., the first input), the electronic device displays, in the interface 002, information 004 (i.e., the second information) related to purchasing the chair, obtained by searching for the chair according to the purchase demand. If the user wants to view the details of the information 004, the user can long-press the information 004; as shown in fig. 2 (c), the electronic device 00 can display a purchase page 005 in response to the long-press input, the purchase page 005 including the details of the information 004 about the chair, and the user can complete a purchase on that page.
An embodiment of the present application provides an image recognition apparatus. After the apparatus recognizes a target image to obtain first information and displays M pieces of first intention information in a first control (each piece indicating one user intention, M being a positive integer), upon receiving a first input selecting target intention information from the M pieces, the apparatus may display second information obtained by searching the first information according to the target intention information. Because the target intention information selected by the user accurately reflects the user's actual requirement in triggering the search, the second information obtained in this way better satisfies that requirement, so the apparatus can more accurately provide the user with information that meets the user's actual needs. This improves the intelligence of image recognition on the electronic device and the user experience.
Optionally, in this embodiment of the application, the electronic device may set a plurality of controls according to a plurality of scenes, and the user may trigger the electronic device to select the first control from the plurality of controls. Specifically, the manner of triggering and selecting the first control by the electronic device may be any one of the following two manners:
First mode
Optionally, in a case that the second control is displayed, the user may trigger the electronic device through an input to update and display the second control as the first control.
Optionally, with reference to fig. 1, as shown in fig. 3, before the step 102, the image recognition method provided in the embodiment of the present application further includes the following step 105 and step 106, and accordingly, the step 102 may be specifically implemented by the following step 102 a.
Step 105, the electronic device displays a second control.
The second control comprises N pieces of second intention information, and N is a positive integer.
It should be noted that, in the embodiment of the present application, the relative size of N and M is not specifically limited and may be determined according to actual use requirements; for example, N may be greater than, equal to, or less than M. Likewise, the sizes of the second control and the first control are not specifically limited and may be determined according to actual use requirements.
In addition, the execution order of step 101 relative to steps 105, 106, and 102a is not specifically limited in the embodiment of the present application. The electronic device may perform step 101 first and then perform steps 105, 106, and 102a in sequence, or may perform steps 105, 106, and 102a before performing step 101. The following embodiment is described by taking as an example performing step 101 first and then performing steps 105, 106, and 102a in sequence.
Optionally, in this embodiment of the application, the user may set different controls containing intention information for different usage scenarios and, according to need, trigger the electronic device to switch between the controls (i.e., the update display in step 102a). For example, the user may set the first control as a life-class control whose first intention information includes: buy, travel, learn, eat, drink, listen, etc.; the user may also set the second control as a work-class control whose second intention information includes: analysis, mathematical calculation, modeling, sorting, etc. The user can then switch between the first control and the second control according to the usage scenario.
It should be noted that, in the embodiment of the present application, for the relevant description about the second control and the N pieces of second intention information, reference may be made to the relevant description about the first control and the first intention information in step 102 and step 103, which is not described herein again.
Step 106, the electronic device receives a second input.
Optionally, in this embodiment of the application, the second input is used to trigger the electronic device to update and display the second control as the first control. Specifically, in a case that the electronic device displays identifiers indicating the controls (for example, a "life" identifier indicating the first control and a "work" identifier indicating the second control), the second input may be a touch input by the user on such an identifier, where the touch input may be any one of: a single-click input, a double-click input, a long-press input, a drag input along a preset trajectory, and the like. The second input may also be a voice input whose content instructs the electronic device to update and display the second control as the first control, or an input on a physical key of the electronic device (for example, a volume key) that triggers the update. This may be determined according to actual use requirements, and is not specifically limited in the embodiment of the present application.
Step 102a, the electronic device, in response to the second input, updates and displays the second control as the first control.
It should be noted that, in the embodiment of the present application, updating and displaying the second control as the first control may specifically mean that the electronic device cancels the display of the second control and displays the first control at the display position of the original second control.
For example, fig. 4 is an operation diagram of the electronic device displaying the first control. As shown in fig. 4 (a), the electronic device 00 may display the image 001 of the chair in the image recognition interface 002 and display a control 006 (i.e., the second control). The control 006 includes 6 regions, each region indicating one user intention: analysis, arithmetic, modeling, sorting, emergency call, and others (the "others" region covers work-class user intentions beyond the 5 listed). Two identifiers, "work" and "life", are also displayed below the control 006, and the "work" identifier is darkened to indicate that the currently displayed control 006 is the work-mode control corresponding to "work". The user can slide in the direction F1, and in response to the slide input (i.e., the second input), as shown in fig. 4 (b), the electronic device 00 switches the display from the control 006 to the control 003. The control 003 includes 6 regions, each region indicating one user intention: encyclopedia, travel, listen, buy, eat and drink, and others (the "others" region covers life-class user intentions beyond the 5 listed). At this time, the "life" identifier displayed below the control 003 is darkened, indicating that the currently displayed control 003 is the life-mode control corresponding to "life".
It can be understood that, in the embodiment of the application, because the electronic device can receive the input of the user and update and display the second control as the first control, the user can set a plurality of controls corresponding to different usage scenarios according to actual use requirements and freely switch between them through inputs. This makes it convenient for the user to select different controls for different scenarios and to select intention information more accurately, so that the information displayed by the electronic device better meets the user's demand.
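The first mode above (steps 105, 106, and 102a) can be sketched as a simple state toggle between two controls. The `ControlSwitcher` class is an illustrative assumption; the scene names and intention lists are drawn from the example in the text:

```python
# Each "control" is a set of intention information for one usage scenario.
CONTROLS = {
    "life": ["encyclopedia", "travel", "listen", "buy", "eat and drink", "others"],
    "work": ["analysis", "arithmetic", "modeling", "sorting", "emergency call", "others"],
}


class ControlSwitcher:
    def __init__(self, current: str = "work"):
        self.current = current

    def displayed_intentions(self) -> list:
        """Intention information shown in the currently displayed control."""
        return CONTROLS[self.current]

    def on_second_input(self) -> None:
        # Cancel the display of the current control and show the other one
        # at the same position (cf. step 102a).
        self.current = "life" if self.current == "work" else "work"


switcher = ControlSwitcher("work")   # control 006 ("work") shown first
switcher.on_second_input()           # second input, e.g. a slide in direction F1
# switcher now displays the life-class control 003.
```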
Second mode
Optionally, the electronic device may display a plurality of intention identifiers, and the user may trigger the electronic device to display the first control through an input on a first intention identifier among the plurality of intention identifiers.
Optionally, with reference to fig. 1, as shown in fig. 5, before the step 102, the image recognition method provided in the embodiment of the present application further includes the following step 107 and step 108, and the step 102 may be specifically implemented by the following step 102 b.
Step 107, the electronic device displays at least one intention identifier.
Each intention identifier in the at least one intention identifier is used to indicate one type of intention information.
It should be noted that, in the embodiment of the present application, the execution order of step 101 relative to steps 107, 108, and 102b is not specifically limited. The electronic device may perform step 101 first and then perform steps 107, 108, and 102b in sequence, or may perform steps 107, 108, and 102b before performing step 101. The following embodiment is described by taking as an example performing step 101 first and then performing steps 107, 108, and 102b in sequence.
Optionally, in this embodiment of the application, each of the at least one intention identifier indicates one type of intention information. For example, the intention identifier "life" indicates life-class intention information, which may specifically include: buy, travel, learn, eat, drink, listen, etc. (e.g., the first intention information in the first control). The intention identifier "work" indicates work-class intention information, which may specifically include: analysis, mathematical calculation, modeling, sorting, etc. (e.g., the second intention information in the second control).
It should be noted that, in the embodiment of the present application, the type of intention information indicated by each intention identifier may be displayed in one control; for example, the first control displays the life-class first intention information.
Optionally, in this embodiment of the application, an intention identifier may be a default intention identifier of the electronic device system, or an intention identifier manually set by the user. That is, the user may use the electronic device's default classification of intention information, or may set the classification of intention information (e.g., the user may add or delete intention information in the life class indicated by the "life" intention identifier). This may be determined according to actual use requirements, and is not specifically limited in the embodiment of the present application.
Step 108, the electronic device receives a third input on a first intention identifier among the at least one intention identifier.
The first intention identifier indicates the type of intention information to which the M pieces of first intention information included in the first control belong.
Optionally, in this embodiment of the application, the third input is an input that selects the first intention identifier from the at least one intention identifier. Specifically, the third input may be a touch input on the first intention identifier, where the touch input may be at least one of: a single-click input, a double-click input, a long-press input, and a movement input along a preset trajectory. The third input may also be a voice input instructing the electronic device to select the first intention identifier. This may be determined according to actual use requirements, and is not specifically limited in the embodiment of the present application.
It should be noted that, in this embodiment of the application, the first intention identifier indicates the type of intention information to which the M pieces of first intention information included in the first control belong; that is, the M pieces of first intention information are information within the intention information indicated by the first intention identifier. For example, if the first intention identifier indicates life-class intention information, that intention information may include specific first intention information such as "buy", "learn", "eat", and "travel"; that is, the life-class intention information indicated by the first intention identifier may include the M pieces of first intention information in the first control. It can be understood that the first intention identifier indicates the same type of intention information as the first intention information in the first control.
Step 102b, the electronic device, in response to the third input, displays the first control.
It should be noted that, in the embodiment of the present application, for the manner of displaying the first control and displaying the M pieces of first intention information in the first control, reference may be made to the related description in step 102, and details are not described herein again.
For example, fig. 6 is a second operation diagram of the electronic device displaying the first control. As shown in fig. 6 (a), the electronic device 00 may display the image 001 of the chair in the image recognition interface 002 and display a control 007. The control 007 includes 4 regions, each region indicating one intention identifier: work, life, spare, and sports. The user can click "life" in the control 007, and in response to the click input (i.e., the third input), as shown in fig. 6 (b), the electronic device 00 displays the control 003, which is the life-class control corresponding to the "life" intention identifier. The control 003 includes 6 regions, each region indicating one user intention: encyclopedia, travel, listen, buy, eat and drink, and others (the "others" region covers life-class user intentions beyond the 5 listed).
It can be understood that, in the embodiment of the present application, the electronic device may display a plurality of intention identifiers, and the user may trigger the electronic device to display the first control through an input on a first intention identifier among them. This makes it convenient for the user to select different controls for different scenarios and demands and to select intention information more accurately, so that the information displayed by the electronic device better meets the user's demand.
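The second mode (steps 107, 108, and 102b) amounts to looking up the control whose intention information belongs to the category named by the first intention identifier. The dictionary below is an assumed illustration using the categories from the text; `control_for_identifier` is an illustrative name, not the patent's API:

```python
# Each intention identifier names one type (category) of intention information.
INTENTION_CATEGORIES = {
    "life": ["buy", "travel", "learn", "eat", "drink", "listen"],
    "work": ["analysis", "mathematical calculation", "modeling", "sorting"],
}


def control_for_identifier(first_intention_identifier: str) -> list:
    """Third input: return the M pieces of first intention information
    belonging to the category indicated by the chosen identifier."""
    return INTENTION_CATEGORIES[first_intention_identifier]


# Clicking the "life" identifier (third input) yields the life-class control.
life_control = control_for_identifier("life")
```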
Optionally, the first control is a pie chart control, and the image recognition method provided in the embodiment of the present application further includes the following steps 109 and 110.
Step 109, the electronic device obtains the use frequency of each of the M pieces of first intention information, so as to obtain M use frequencies.
Optionally, in this embodiment of the application, the pie chart control may specifically be a circular control, and the circular control may be divided into M sector areas, where each sector area includes one piece of first intention information (e.g., the control 003 shown in fig. 2).
Optionally, in this embodiment of the application, the pie chart control may also be a hollow circular ring that includes M arc-shaped regions, each arc-shaped region corresponding to one piece of first intention information. The pie chart control may also be a solid circular ring that includes M arc-shaped regions, each arc-shaped region corresponding to one piece of first intention information; the central portion of the solid ring may include a random control for randomly selecting one piece of intention information from the M pieces of first intention information corresponding to the M arc-shaped regions.
Optionally, in this embodiment of the application, the use frequency of each of the M pieces of first intention information may be obtained as follows: after each usage statistics period elapses (for example, every week), the electronic device counts the number of times each of the M pieces of first intention information was used during that period, thereby obtaining the M use frequencies.
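Step 109 can be sketched as counting, once per statistics period, how often each piece of first intention information appears in a usage log. The log format (a flat list of used intentions) is an assumption for illustration:

```python
from collections import Counter


def usage_frequencies(usage_log: list, intentions: list) -> dict:
    """Count one period's usage; intentions never used get frequency 0."""
    counts = Counter(usage_log)
    return {intention: counts.get(intention, 0) for intention in intentions}


# One week of (assumed) usage of the life-class first control.
week_log = ["buy", "buy", "listen", "buy", "travel"]
freqs = usage_frequencies(
    week_log,
    ["encyclopedia", "travel", "listen", "buy", "eat and drink", "others"],
)
# "buy" was used 3 times this period; "encyclopedia" was never used.
```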
Step 110, the electronic device updates the proportion of each piece of first intention information in the pie chart control according to the M use frequencies.
The proportion of each piece of first intention information in the pie chart control is directly proportional to the use frequency of that first intention information.
Optionally, in this embodiment of the application, after the electronic device obtains the use frequency corresponding to each of the M pieces of first intention information, that is, obtains M use frequencies (each use frequency indicating how often the corresponding first intention information was used over a period of time), the electronic device may adjust, according to the M use frequencies, the display area in the pie chart control corresponding to each piece of first intention information, so that first intention information with a higher use frequency occupies a larger area in the pie chart control.
It should be noted that, in the embodiment of the present application, the proportion of each piece of first intention information in the pie chart control being directly proportional to its use frequency specifically means: the higher the use frequency of a piece of first intention information, the greater its proportion in the pie chart control; the lower the use frequency, the smaller its proportion.
Optionally, in this embodiment of the application, in a case that the first control is a pie chart control, the use frequency of each piece of first intention information may be represented by the area of its corresponding region in the pie chart control, which may be a sector region or an annular region. For example, in the case that the region corresponding to a piece of first intention information is a sector region of the pie chart control, the higher the use frequency of that first intention information, the larger the proportion of that sector region in the pie chart control. It should be noted that the second control and other intention information controls in the embodiment of the present application may likewise update the proportion of each piece of intention information in the corresponding control according to its use frequency.
For example, fig. 7 is a schematic diagram of the electronic device updating the proportions of intention information in a displayed control, illustrated with the control 003 in fig. 2. During the user's initial use of the control 003, as shown in fig. 7 (a), its 6 regions are displayed uniformly, each region indicating one user intention: encyclopedia, travel, listen, buy, eat and drink, and others (the "others" region covers life-class user intentions beyond the 5 listed). After a period of use (e.g., one month), the electronic device may adjust the control 003 to the pie chart shown in fig. 7 (b) according to the use frequencies of the 6 user intentions: the frequently used "buy" intention is sorted forward and its proportion is increased according to its use frequency, and the proportions of the other 5 intentions in the control 003 are adjusted correspondingly, so that intentions with higher use frequencies occupy larger proportions in the pie chart.
It can be understood that, in the embodiment of the present application, the electronic device may acquire the use frequency of each piece of first intention information in the first control and update the proportion of each piece of first intention information in the pie chart control according to the use frequencies. In this way, the electronic device can arrange the first intention information in the first control according to the user's usage habits; for example, frequently used intention information is sorted forward and occupies a larger proportion of the first control. This improves the degree of intelligence of the electronic device, makes it more convenient to use, and further improves the use experience of the user.
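Steps 109 and 110 together can be sketched as mapping the M use frequencies to sector angles of the pie chart control, so that more frequently used intention information occupies a proportionally larger sector. The +1 smoothing that keeps never-used intentions visible is an assumption, not something the patent specifies:

```python
def sector_angles(frequencies: dict) -> dict:
    """Map each intention's use frequency to a sector angle (degrees).

    Add-one smoothing keeps intentions with frequency 0 visible;
    the angles always sum to 360 degrees.
    """
    smoothed = {k: v + 1 for k, v in frequencies.items()}
    total = sum(smoothed.values())
    return {k: 360.0 * v / total for k, v in smoothed.items()}


# Assumed frequencies after one month: "buy" dominates, "others" unused.
angles = sector_angles({"buy": 9, "travel": 4, "listen": 1, "others": 0})
# "buy" gets the largest sector; the four angles sum to 360 degrees.
```

Rendering the result is then a matter of drawing each sector with its computed angle, largest (most frequent) first.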
Optionally, with reference to fig. 1, as shown in fig. 8, after the step 104, the image recognition method provided in the embodiment of the present application further includes the following steps 111 and 112.
Step 111, the electronic device receives a fourth input on the second information.
Optionally, in this embodiment of the application, the fourth input is used to trigger an operation corresponding to the fourth input on the second information; that is, through the fourth input, the user may view details of, copy, or play the second information, among other operations. Specifically, the fourth input may be a touch input on the second information, where the touch input may be any one of: a single click, a double click, a long press, a drag along a preset trajectory or in a preset direction, and the like. This may be determined according to actual use requirements, and is not specifically limited in the embodiment of the present application.
Step 112, the electronic device, in response to the fourth input, performs the operation corresponding to the fourth input on the second information.
The operation corresponding to the fourth input includes any one of the following: displaying a detail page corresponding to the second information, playing content corresponding to the second information, or copying the second information.
Optionally, in this embodiment of the application, displaying the detail page corresponding to the second information means that the electronic device displays a page of detailed content related to the second information. Specifically, the detail page may be any one of the following: a shopping detail interface, an encyclopedia page of an article, a detailed introduction page of certain content, and the like. This may be determined according to actual use requirements, and is not specifically limited in the embodiment of the present application.
Optionally, in this embodiment of the application, playing the content corresponding to the second information means that, in a case that the content corresponding to the second information is a video or an audio, the user can trigger the electronic device to play that video or audio.
Illustratively, as shown in fig. 2 (b), the electronic device 00 displays, in the interface 002, the information 004 (i.e., the second information) related to purchasing the chair, obtained by searching for the chair according to the purchase demand. If the user wants to view the details of the information 004, the user can long-press the information 004. As shown in fig. 2 (c), in response to the long-press input (i.e., the fourth input), the electronic device 00 can display the purchase page 005, which includes the details of the information 004 of the chair, and the user can perform a purchase operation on the page.
Illustratively, fig. 9 is a diagram of the electronic device operating on the second information. As shown in fig. 9 (a), in a case that the user photographs the cover of an album, the electronic device 00 may recognize the image 008 of the album and display, in the interface 002, the image 008 and a control 009 in which the track list of the album is displayed. The user may click (i.e., the fourth input) the track "stand in summer" to trigger the electronic device 00 to play that song. As shown in fig. 9 (b), in a case that the user photographs a mathematical problem, the electronic device 00 may recognize the image 010 of the problem and display, in the interface 002, the image 010 and a control 011 showing the calculation result of the problem. The user can long-press (i.e., the fourth input) the control 011 to copy the calculation result.
It can be understood that, in the embodiment of the present application, in a case that the electronic device displays the second information, the user may trigger the electronic device to perform the operation corresponding to the fourth input on the second information, such as displaying a detail page, playing corresponding content, or copying the second information. This enriches the operations available to the user, allows the user to operate according to actual use requirements, facilitates use, and improves the use experience of the user.
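Steps 111 and 112 can be sketched as a dispatch from the fourth input to the corresponding operation on the second information. The particular gesture-to-operation mapping below follows the examples in the text (long press opens details, click plays) but is otherwise an assumption:

```python
def handle_fourth_input(gesture: str, second_information: str) -> str:
    """Dispatch the fourth input to the operation it corresponds to."""
    operations = {
        "long_press": lambda info: f"display detail page of {info}",
        "click": lambda info: f"play content of {info}",
        "double_click": lambda info: f"copy {info}",
    }
    handler = operations.get(gesture)
    return handler(second_information) if handler else "no operation"


# Long-pressing the chair's purchase information opens its detail page.
result = handle_fourth_input("long_press", "information 004")
```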
Optionally, before the step 101, the image recognition method provided in the embodiment of the present application further includes the following steps 113 to 115.
Step 113, the electronic device displays an intention information setting interface.
The intention information setting interface includes: a setting control and at least one piece of intention information.
It should be noted that, in the embodiment of the present application, the user may trigger the electronic device to display the intention information setting interface, which includes a setting control and at least one piece of intention information. The setting control is an empty control to which no intention information has been added (corresponding to the state of the first control or the second control before intention information is added), and the user may add intention information to it as needed, for example by selecting intention information from the at least one displayed piece of intention information and adding it to the setting control.
Step 114, the electronic device receives a fifth input.
The fifth input is an input on the setting control and on third intention information among the at least one piece of intention information.
Optionally, in this embodiment of the application, the fifth input is an input by which the user adds the third intention information among the at least one piece of intention information to the setting control. Specifically, the fifth input may be an input in which the user drags the third intention information onto the setting control, or an input in which the user selects the third intention information and selects a corresponding position in the setting control at which to add it. This may be determined according to actual use requirements, and is not specifically limited in the embodiment of the present application.
Step 115, the electronic device displays the third intention information in the setting control in response to the fifth input.
It should be noted that, in the embodiment of the present application, the intention information in the first control and in the second control is set according to the setting method of steps 113 to 115. The user may repeat the above steps for the first control or the second control to re-edit its intention information (for example, to modify the intention information included in the first control or the second control).
Optionally, in this embodiment of the application, after step 115, the electronic device may further save the configuration of the setting control and name it, so that different controls can be conveniently selected for different usage scenarios in subsequent use, for example via the first manner or the second manner of triggering the selection of the first control described above.
For example, fig. 10 is an operation diagram of setting intention information in a control of the electronic device. As shown in fig. 10 (a), in a case that the electronic device 00 displays the intention information setting interface 012, the interface may display a setting control 013 and a control 014 that includes a plurality of pieces of intention information. The user can select the required intention information (i.e., the third intention information) from the plurality of pieces of intention information and drag it into the setting control 013, which then displays the dragged intention information, and so on. As shown in fig. 10 (b), in a case that intention information has been set in all regions of the setting control 013, the electronic device 00 displays a control 015, and the user may click the "save" virtual key in the control 015 to trigger the electronic device to save the above settings.
It can be understood that, in the embodiment of the present application, in a case that the electronic device displays the intention information setting interface, the electronic device may receive the user's input to add the intention information required by the user (e.g., the third intention information) to the setting control. The resulting control contains the intention information the user needs, so that when the user triggers the electronic device to display the intention information in the control, the user can select the required intention information, and the electronic device can search according to it and display the information the user needs. In this way, controls can be set according to actual use, which facilitates use and improves the use experience of the user.
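Steps 113 to 115 can be sketched as filling the regions of an empty setting control and saving the result under a name for later selection. The `SettingControl` class and its 6-region capacity (matching the example controls) are illustrative assumptions:

```python
class SettingControl:
    def __init__(self, regions: int = 6):
        self.regions = regions
        self.intentions = []

    def add(self, third_intention_information: str) -> bool:
        """Fifth input: place one piece of intention information in the
        next free region; fails once all regions are filled."""
        if len(self.intentions) >= self.regions:
            return False
        self.intentions.append(third_intention_information)
        return True

    def save_as(self, name: str, saved_controls: dict) -> bool:
        """Save the finished control under a name (cf. the 'save' key in
        control 015); only succeeds once every region is set."""
        if len(self.intentions) < self.regions:
            return False
        saved_controls[name] = list(self.intentions)
        return True


saved_controls = {}
ctrl = SettingControl()
for intent in ["encyclopedia", "travel", "listen", "buy", "eat and drink", "others"]:
    ctrl.add(intent)          # drag each intention into control 013
ctrl.save_as("life", saved_controls)
```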
It should be noted that, in the image recognition method provided in the embodiment of the present application, the execution subject may be an image recognition apparatus, or a control module in the image recognition apparatus for executing the image recognition method. In the embodiment of the present application, the image recognition method provided in the embodiment of the present application is described by taking as an example an image recognition apparatus executing the method.
As shown in fig. 11, an embodiment of the present application provides an image recognition apparatus 1100. The image recognition apparatus 1100 may include: a processing module 1101, a display module 1102 and a receiving module 1103. The processing module 1101 may be configured to identify a target image and obtain first information. The display module 1102 may be configured to display a first control, where the first control includes M pieces of first intention information, each piece of first intention information is used to indicate a user intention, and M is a positive integer. The receiving module 1103 may be configured to receive a first input of target intention information in the M pieces of first intention information. The display module 1102 may be further configured to display, in response to the first input received by the receiving module 1103, second information that is obtained by searching the first information according to the target intention information.
Optionally, in this embodiment of the application, the display module 1102 may be further configured to display a second control, where the second control includes N pieces of second intention information, and N is a positive integer. The receiving module 1103 may be further configured to receive a second input. The display module 1102 may be specifically configured to update and display the second control as the first control in response to the second input received by the receiving module 1103.
Optionally, with reference to fig. 11, as shown in fig. 12, the first control is a pie chart control, and the image recognition apparatus 1100 may further include an obtaining module 1104. The obtaining module 1104 may be configured to obtain the use frequency of each of the M pieces of first intention information, so as to obtain M use frequencies. The processing module 1101 may be further configured to update the proportion of each piece of first intention information in the pie chart control according to the M use frequencies obtained by the obtaining module 1104, where the proportion of each piece of first intention information in the pie chart control is directly proportional to its use frequency.
Optionally, in this embodiment of the application, the display module 1102 may be further configured to display at least one intention identifier, where each intention identifier is used to indicate a type of intention information. The receiving module 1103 may be configured to receive a third input of a first intent identifier of the at least one intent identifier, where the first intent identifier indicates a type of intent information to which M pieces of first intent information included in the first control belong. The display module 1102 may be specifically configured to display the first control in response to the third input received by the receiving module 1103.
Optionally, in this embodiment of the application, the receiving module 1103 may be further configured to receive a fourth input of the second information. The processing module 1101 may be further configured to, in response to the fourth input received by the receiving module 1103, perform an operation corresponding to the fourth input on the second information. The operation includes any one of the following: displaying a detail page corresponding to the second information, playing content corresponding to the second information, or copying the second information.
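The three operations named for the fourth input can be modeled as a simple dispatch table. The operation keys and the stand-in string results below are hypothetical; the application does not specify how the operations are keyed or implemented.

```python
def handle_fourth_input(operation, second_information):
    """Dispatch the operation corresponding to a fourth input on the
    second information: detail page, playback, or copy (illustrative)."""
    handlers = {
        "detail": lambda info: f"detail_page({info})",
        "play": lambda info: f"play({info})",
        # Copying simply yields the second information itself here.
        "copy": lambda info: info,
    }
    if operation not in handlers:
        raise ValueError(f"unsupported operation: {operation}")
    return handlers[operation](second_information)
```

A table-driven dispatch like this keeps the set of supported operations in one place, which makes it easy to extend with further operations later.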
Optionally, in this embodiment of the application, the display module 1102 may be further configured to display an intention information setting interface, where the intention information setting interface includes a setting control and at least one piece of intention information. The receiving module 1103 may be further configured to receive a fifth input, where the fifth input is an input to the setting control and to third intention information of the at least one piece of intention information. The display module 1102 may be further configured to display the third intention information in the setting control in response to the fifth input received by the receiving module 1103.
The image recognition device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine; the embodiments of the present application are not specifically limited in this regard.
The image recognition apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, which is not specifically limited in the embodiments of the present application.
The image recognition device provided in the embodiment of the present application can implement each process implemented by the image recognition device in the method embodiments of fig. 1 to 12, and is not described here again to avoid repetition.
After an image recognition apparatus according to an embodiment of the present application identifies a target image to obtain first information and displays M pieces of first intention information in a first control (each piece of first intention information is used to indicate a user intention, and M is a positive integer), the apparatus may, upon receiving a first input of target intention information among the M pieces of first intention information, display second information obtained by searching the first information according to the target intention information. With this scheme, when the image recognition apparatus searches the first information recognized from the target image, it can search according to the target intention information selected by the user. Because the target intention information accurately reflects the actual requirement that prompted the user to trigger the search, the second information obtained by searching the first information according to the selected target intention information can better meet that requirement, and the apparatus can more accurately provide information that satisfies the user's actual needs. This improves the intelligence of image recognition on the electronic device and the user experience.
Optionally, as shown in fig. 13, an electronic device 1300 is further provided in an embodiment of the present application, and includes a processor 1302, a memory 1301, and a program or an instruction stored in the memory 1301 and capable of running on the processor 1302, where the program or the instruction is executed by the processor 1302 to implement each process of the embodiment of the image recognition method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 14 is a schematic hardware structure diagram of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further comprise a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 1010 through a power management system, so that functions of managing charging, discharging, and power consumption are implemented through the power management system. The electronic device structure shown in fig. 14 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, and this is not repeated here.
The processor 1010 may be configured to identify a target image and obtain first information. The display unit 1006 may be configured to display a first control, where the first control includes M pieces of first intention information, each piece of first intention information is used to indicate a user intention, and M is a positive integer. The user input unit 1007 may be configured to receive a first input of target intention information among the M pieces of first intention information. The display unit 1006 may be further configured to display second information in response to the first input received by the user input unit 1007, where the second information is obtained by searching the first information according to the target intention information.
In an electronic device provided by an embodiment of the present application, after the electronic device identifies a target image to obtain first information and displays M pieces of first intention information in a first control (each piece of first intention information is used to indicate a user intention, and M is a positive integer), the electronic device may, upon receiving a first input of target intention information among the M pieces of first intention information, display second information obtained by searching the first information according to the target intention information. With this scheme, when the electronic device searches the first information identified from the target image, it can search according to the target intention information selected by the user. Because the target intention information accurately reflects the actual requirement that prompted the user to trigger the search, the second information obtained in this way can better meet that requirement, and the electronic device can more accurately provide information that satisfies the user's actual needs. This improves the intelligence of image recognition on the electronic device and the user experience.
Optionally, in this embodiment of the application, the display unit 1006 may be further configured to display a second control, where the second control includes N pieces of second intention information, and N is a positive integer. The user input unit 1007 may also be used to receive a second input. The display unit 1006 may be specifically configured to update and display the second control as the first control in response to the second input received by the user input unit 1007.
It can be understood that, in the embodiment of the application, because the electronic device can receive the input of the user and update and display the second control as the first control, the user can set a plurality of controls corresponding to different usage scenarios according to actual needs and freely switch between the controls through input. This makes it convenient for the user to select different controls in different scenarios, allows the user to select intention information more accurately, and makes the information displayed by the electronic device better meet the user's requirements.
Optionally, in this embodiment of the present application, the first control is a pie chart control. The sensor 1005 may be configured to acquire a usage frequency of each of the M pieces of first intention information, resulting in M usage frequencies. The processor 1010 may be further configured to update the proportion of each piece of first intention information in the pie chart control according to the M usage frequencies obtained by the sensor 1005. The proportion of each piece of first intention information in the pie chart control is proportional to the usage frequency of that piece of first intention information.
It can be understood that, in the embodiment of the present application, the electronic device may acquire the usage frequency of each piece of first intention information in the first control and update the proportion of each piece in the pie chart control accordingly. In this way, the electronic device may order the first intention information in the first control according to the user's usage habits; for example, intention information the user uses frequently is ranked earlier and occupies a larger proportion of the first control. This improves the intelligence of the electronic device, makes it more convenient for the user, and further improves the user experience.
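The usage-habit ordering described above (more frequently used intention information ranked earlier) can be sketched as a sort over (frequency, intention) pairs. The function name is illustrative, and the tie-breaking behavior (stable order for equal frequencies) is an assumption.

```python
def order_by_usage(intentions, frequencies):
    """Return the intention information ordered so that the most
    frequently used items come first; ties keep their original order,
    since Python's sort is stable."""
    pairs = sorted(zip(frequencies, intentions),
                   key=lambda pair: pair[0], reverse=True)
    return [intention for _, intention in pairs]
```

The same frequency list could then also drive `pie_proportions`-style sizing, so that ordering and sector size stay consistent with one another.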
Optionally, in this embodiment of the application, the display unit 1006 may be further configured to display at least one intention identifier, where each intention identifier is used to indicate a type of intention information. The user input unit 1007 may be configured to receive a third input for a first intention identifier of the at least one intention identifier, where the first intention identifier indicates a type of intention information to which the M first intention information included in the first control belongs. The display unit 1006 may be specifically configured to display the first control in response to the third input received by the user input unit 1007.
It can be understood that, in the embodiment of the present application, the electronic device may display a plurality of intention identifiers, and the user may trigger the electronic device to display the first control through an input on a first intention identifier among them. This makes it convenient for the user to select different controls according to different scenarios and requirements, allows the user to select intention information more accurately, and makes the information displayed by the electronic device better meet the user's requirements.
Optionally, in this embodiment of the application, the user input unit 1007 may be further configured to receive a fourth input of the second information. The processor 1010 may be further configured to perform an operation corresponding to the fourth input on the second information in response to the fourth input received by the user input unit 1007. The operation includes any one of the following: displaying a detail page corresponding to the second information, playing content corresponding to the second information, or copying the second information.
It can be understood that, in the embodiment of the present application, in the case that the electronic device displays the second information, the user may trigger the electronic device to perform an operation corresponding to the fourth input on the second information, such as displaying a detail page, playing corresponding content, copying the second information, and the like. Therefore, the operation of the user can be enriched, the user can operate according to the actual use requirement, the use of the user is facilitated, and the use experience of the user is improved.
Optionally, in this embodiment of the application, the display unit 1006 may be further configured to display an intention information setting interface, where the intention information setting interface includes a setting control and at least one piece of intention information. The user input unit 1007 may be further configured to receive a fifth input, which is an input to the setting control and to third intention information of the at least one piece of intention information. The display unit 1006 may be further configured to display the third intention information in the setting control in response to the fifth input received by the user input unit 1007.
It can be understood that, in the embodiment of the present application, when the electronic device displays the intention information setting interface, the electronic device may receive an input from the user to add the intention information the user requires (e.g., the third intention information) to the setting control. The setting control then contains the intention information the user needs, so that when the user triggers the electronic device to display the intention information in the control, the user can select the required intention information, and the electronic device searches according to it and displays the information the user needs. In this way, the control can be configured according to actual use, which is convenient for the user and improves the user experience.
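The setting flow can be modeled as a small class holding the candidate intentions and the setting control's contents. The class name is hypothetical, and the choice to ignore duplicate additions is an assumption the application does not specify.

```python
class IntentionSettingInterface:
    """Toy model of the intention information setting interface: a list
    of candidate intentions plus a setting control that collects the
    ones the user adds via the fifth input."""

    def __init__(self, candidates):
        self.candidates = list(candidates)
        self.setting_control = []

    def fifth_input(self, third_intention):
        # The fifth input targets the setting control together with one
        # candidate intention; the selected intention is then displayed
        # in the setting control.
        if third_intention not in self.candidates:
            raise ValueError("not a listed intention")
        if third_intention not in self.setting_control:
            self.setting_control.append(third_intention)
        return self.setting_control
```

After several fifth inputs, the setting control holds exactly the intentions the user chose, which is what the electronic device would later display for selection.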
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image recognition method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the image recognition method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the description is omitted here.
It should be understood that the chip mentioned in the embodiments of the present application may also be referred to as a system-on-chip, a chip system, or a system-on-a-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; e.g., the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. An image recognition method, characterized in that the method comprises:
identifying a target image to obtain first information;
displaying a first control, wherein the first control comprises M pieces of first intention information, each piece of first intention information is used for indicating a user intention, and M is a positive integer;
receiving a first input of target intention information of the M pieces of first intention information;
in response to the first input, displaying second information, wherein the second information is obtained by searching the first information according to the target intention information.
2. The method of claim 1, wherein prior to displaying the first control, the method further comprises:
displaying a second control, wherein the second control comprises N pieces of second intention information, and N is a positive integer;
receiving a second input;
the displaying a first control includes:
in response to the second input, updating and displaying the second control as the first control.
3. The method of claim 2, wherein the first control is a pie chart control; the method further comprises the following steps:
obtaining the use frequency of each first intention information in the M first intention information to obtain M use frequencies;
updating the proportion of each first intention information in the pie chart control according to the M using frequencies;
wherein a proportion of each first intention information in the pie chart control is proportional to a frequency of use of the each first intention information.
4. The method of claim 1, wherein prior to displaying the first control, the method further comprises:
displaying at least one intention identifier, wherein each intention identifier is used for indicating one type of intention information;
receiving a third input to a first intent identifier of the at least one intent identifier, the first intent identifier indicating a class of intent information to which the M first intent information included by the first control belongs;
the displaying a first control includes:
in response to the third input, displaying the first control.
5. The method of claim 1, wherein after displaying the second information, the method further comprises:
receiving a fourth input to the second information;
in response to the fourth input, performing an operation corresponding to the fourth input on the second information;
wherein the operation comprises any one of: displaying a detail page corresponding to the second information, playing the content corresponding to the second information, and copying the second information.
6. An image recognition apparatus, characterized in that the apparatus comprises: the device comprises a processing module, a display module and a receiving module;
the processing module is used for identifying a target image to obtain first information;
the display module is used for displaying a first control, the first control comprises M pieces of first intention information, each piece of first intention information is used for indicating a user intention, and M is a positive integer;
the receiving module is used for receiving a first input of target intention information in the M pieces of first intention information;
the display module is further configured to display second information in response to the first input received by the receiving module, where the second information is obtained by searching the first information according to the target intention information.
7. The apparatus according to claim 6, wherein the display module is further configured to display a second control, where the second control includes N pieces of second intention information, and N is a positive integer;
the receiving module is further used for receiving a second input;
the display module is specifically configured to update and display the second control as the first control in response to the second input received by the receiving module.
8. The apparatus of claim 7, wherein the first control is a pie chart control; the device also comprises an acquisition module;
the acquisition module is configured to acquire a use frequency of each piece of first intention information in the M pieces of first intention information to obtain M use frequencies;
the processing module is further configured to update the proportion of each piece of first intention information in the pie chart control according to the M use frequencies obtained by the obtaining module;
wherein a proportion of each first intention information in the pie chart control is proportional to a frequency of use of the each first intention information.
9. The apparatus of claim 6, wherein the display module is further configured to display at least one intention identifier, each intention identifier being used to indicate a type of intention information;
the receiving module is configured to receive a third input of a first intention identifier of the at least one intention identifier, where the first intention identifier indicates a class of intention information to which the M pieces of first intention information included in the first control belong;
the display module is specifically configured to display the first control in response to the third input received by the receiving module.
10. The apparatus of claim 6, wherein the receiving module is further configured to receive a fourth input of the second information;
the processing module is further configured to, in response to the fourth input received by the receiving module, perform an operation corresponding to the fourth input on the second information;
wherein the operation comprises any one of: displaying a detail page corresponding to the second information, playing the content corresponding to the second information, and copying the second information.
11. An electronic device comprising a processor, a memory and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the image recognition method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image recognition method according to any one of claims 1 to 5.
CN202010457926.9A 2020-05-26 2020-05-26 Image recognition method and device and electronic equipment Pending CN111638846A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010457926.9A CN111638846A (en) 2020-05-26 2020-05-26 Image recognition method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN111638846A true CN111638846A (en) 2020-09-08

Family

ID=72331080

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010457926.9A Pending CN111638846A (en) 2020-05-26 2020-05-26 Image recognition method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN111638846A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112463106A (en) * 2020-11-12 2021-03-09 深圳Tcl新技术有限公司 Voice interaction method, device and equipment based on intelligent screen and storage medium
CN113194024A (en) * 2021-03-22 2021-07-30 维沃移动通信(杭州)有限公司 Information display method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090055356A1 (en) * 2007-08-23 2009-02-26 Kabushiki Kaisha Toshiba Information processing apparatus
CN104881451A (en) * 2015-05-18 2015-09-02 百度在线网络技术(北京)有限公司 Image searching method and image searching device
CN105426535A (en) * 2015-12-18 2016-03-23 北京奇虎科技有限公司 Searching method and device based on searching tips
CN109040461A (en) * 2018-08-29 2018-12-18 优视科技新加坡有限公司 A kind of method and device for business processing based on Object identifying
CN109407916A (en) * 2018-08-27 2019-03-01 华为技术有限公司 Method, terminal, user images display interface and the storage medium of data search


Similar Documents

Publication Publication Date Title
CN111813284B (en) Application program interaction method and device
CN111897468B (en) Message processing method and device, electronic equipment and readable storage medium
CN111638846A (en) Image recognition method and device and electronic equipment
CN113805996A (en) Information display method and device
CN111831181A (en) Application switching display method and device and electronic equipment
CN112788178B (en) Message display method and device
CN113268182B (en) Application icon management method and electronic device
CN114443203A (en) Information display method and device, electronic equipment and readable storage medium
CN113114845B (en) Notification message display method and device
CN112596643A (en) Application icon management method and device
CN112328829A (en) Video content retrieval method and device
WO2023138475A1 (en) Icon management method and apparatus, and device and storage medium
CN116069432A (en) Split screen display method and device, electronic equipment and readable storage medium
CN113032163A (en) Resource collection method and device, electronic equipment and medium
CN113779288A (en) Photo storage method and device
CN112732961A (en) Image classification method and device
CN113326233A (en) Method and device for arranging folders
CN111796733A (en) Image display method, image display device and electronic equipment
CN111752428A (en) Icon arrangement method and device, electronic equipment and medium
CN111813285B (en) Floating window management method and device, electronic equipment and readable storage medium
CN111831188B (en) Information display method, device, equipment and medium
CN113393373B (en) Icon processing method and device
CN113885765A (en) Screenshot picture association method and device and electronic equipment
CN112925576A (en) Article link processing method and device, electronic equipment and storage medium
CN115756278A (en) Unread message processing method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200908