CN111046211A - Article searching method and electronic equipment - Google Patents

Article searching method and electronic equipment

Info

Publication number: CN111046211A
Application number: CN201911357340.9A
Authority: CN (China)
Prior art keywords: image, information, storage, stored, electronic device
Legal status: Pending
Other languages: Chinese (zh)
Inventor: 冯宇腾
Current Assignee: Vivo Mobile Communication Co Ltd
Original Assignee: Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201911357340.9A
Publication of CN111046211A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50 Information retrieval of still image data
    • G06F16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/583 Retrieval using metadata automatically derived from the content
    • G06F16/5866 Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G06F16/587 Retrieval using geographical or spatial information, e.g. location

Abstract

The embodiment of the invention provides an article searching method and an electronic device, relates to the field of communication technology, and can solve the problem that searching for an article with an electronic device is complicated and time-consuming and that the human-computer interaction performance is poor. The scheme includes the following steps: acquiring first acquisition information, where the first acquisition information is information of a first article input by a user; collecting a first collected image, where the first collected image is a collected live-action image; and, in a case where the first acquisition information matches first storage information stored in the electronic device and the first collected image matches a first storage image stored in the electronic device, displaying target prompt information, where the target prompt information is used to prompt the user that the first article is located in the live-action area corresponding to the first collected image. The first storage information and the first storage image are correspondingly stored in the electronic device, the first storage information is information of the first article, and the first storage image is an image of a storage area of the first article. The scheme is applied to the scene of finding articles.

Description

Article searching method and electronic equipment
Technical Field
The embodiment of the invention relates to the technical field of communication, in particular to an article searching method and electronic equipment.
Background
As living standards improve, users purchase more and more articles. After placing articles in different locations, a user often forgets where an article is stored when the article is needed again.
Currently, a user can record the storage location of an article in text form through a notepad application (hereinafter referred to as a notepad) in an electronic device, so that the article can be found later. Specifically, if the user needs to search for an article, the user may look up the text information describing the storage location of that article in the notepad, and then search for the article according to that text information.
However, because the number of articles is large, a large amount of text information describing the storage locations of various articles may exist in the notepad of the electronic device. If a user needs to search for a certain article, the user has to first find, among all that text information, the entry describing the storage location of the article, and then search for the article according to it. As a result, the process of searching for an article is complicated and time-consuming, and the human-computer interaction performance is poor.
Disclosure of Invention
The embodiment of the invention provides an article searching method and an electronic device, and aims to solve the problem that the process of searching for an article with an electronic device is complicated and time-consuming and that the human-computer interaction performance is poor.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present invention provides an article search method, where the method is applied to an electronic device, and the method includes: acquiring first acquisition information; collecting a first collected image; and displaying the target prompt information under the condition that the first acquisition information is matched with first storage information stored in the electronic equipment and the first acquisition image is matched with a first storage image stored in the electronic equipment. The first acquisition information is information of a first article input by a user; the first collected image is a collected real image; the target prompt information is used for prompting a user that the first article is located in the live-action area corresponding to the first collected image; the first storage information and the first storage image are correspondingly stored in the electronic equipment, the first storage information is information of the first article, and the first storage image is an image of a storage area of the first article.
In a second aspect, an embodiment of the present invention provides an electronic device, which includes an obtaining module, an acquiring module, and a processing module. The acquisition module is used for acquiring first acquisition information, wherein the first acquisition information is information of a first article input by a user; the acquisition module is used for acquiring a first acquired image, and the first acquired image is an acquired live-action image; the processing module is used for displaying target prompt information under the condition that the first acquisition information acquired by the acquisition module is matched with first storage information stored in the electronic equipment and the first acquisition image acquired by the acquisition module is matched with a first storage image stored in the electronic equipment, wherein the target prompt information is used for prompting a user that the first article is located in a real scene area corresponding to the first acquisition image, the first storage information and the first storage image are correspondingly stored in the electronic equipment, the first storage information is information of the first article, and the first storage image is an image of a storage area of the first article.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and executable on the processor, where the computer program, when executed by the processor, implements the steps of the article searching method in the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the article searching method in the first aspect.
In the embodiment of the present invention, after the electronic device acquires the first acquisition information (the information of the first article input by the user) and collects the first collected image (a collected live-action image), if the first acquisition information matches the first storage information (the information of the first article) stored in the electronic device and the first collected image matches the first storage image (an image of the storage area of the first article) stored in the electronic device, the electronic device may display target prompt information for prompting the user that the first article is located in the live-action area corresponding to the first collected image. The first storage information and the first storage image are correspondingly stored in the electronic device. According to this scheme, based on the information of the article to be searched input by the user and the collected live-action image of the scene where the electronic device is located, the electronic device can look up, in the electronic device, storage information and a storage image that match that information and that live-action image. Finding such matching storage information and storage image indicates that the article to be searched is stored in the live-action area corresponding to the live-action image, so the electronic device can display prompt information to prompt the user accordingly, and the user can then find the article in that live-action area. In this way, by inputting the information of the article to be searched, the user can trigger the electronic device to show the specific storage location of the article, without recording and looking up that location through a notepad application program. This simplifies the process of searching for an article, improves the efficiency of finding it, and improves the human-computer interaction performance.
Drawings
Fig. 1 is a schematic structural diagram of an android operating system according to an embodiment of the present invention;
fig. 2 is a first schematic diagram of an article searching method according to an embodiment of the present invention;
fig. 3 is a first schematic interface diagram of an application of the article searching method according to an embodiment of the present invention;
fig. 4 is a second schematic diagram of the article searching method according to an embodiment of the present invention;
fig. 5 is a second schematic interface diagram of an application of the article searching method according to an embodiment of the present invention;
fig. 6 is a third schematic interface diagram of an application of the article searching method according to an embodiment of the present invention;
fig. 7 is a fourth schematic interface diagram of an application of the article searching method according to an embodiment of the present invention;
fig. 8 is a fifth schematic interface diagram of an application of the article searching method according to an embodiment of the present invention;
fig. 9 is a sixth schematic interface diagram of an application of the article searching method according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 11 is a hardware schematic diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The term "and/or" herein is an association relationship describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The symbol "/" herein denotes a relationship in which the associated object is or, for example, a/B denotes a or B.
The terms "first" and "second," etc. herein are used to distinguish between different objects and are not used to describe a particular order of objects. For example, the first input and the second input, etc. are for distinguishing different inputs, rather than for describing a particular order of inputs.
In the embodiments of the present invention, words such as "exemplary" or "for example" are used to mean serving as an example, illustration, or description. Any embodiment or design described as "exemplary" or "for example" in the embodiments of the present invention should not be construed as preferred or more advantageous than other embodiments or designs. Rather, use of the words "exemplary" or "for example" is intended to present a related concept in a concrete fashion.
In the description of the embodiments of the present invention, unless otherwise specified, "a plurality" means two or more, for example, a plurality of elements means two or more elements, and the like.
The embodiment of the invention provides an article searching method and an electronic device. Specifically, after the electronic device acquires first acquisition information (information of a first article input by a user) and collects a first collected image (a collected live-action image), if the first acquisition information matches first storage information (information of the first article) stored in the electronic device and the first collected image matches a first storage image (an image of the storage area of the first article) stored in the electronic device, the electronic device may display target prompt information for prompting the user that the first article is located in the live-action area corresponding to the first collected image. The first storage information and the first storage image are correspondingly stored in the electronic device. According to this scheme, based on the information of the article to be searched input by the user and the collected live-action image of the scene where the electronic device is located, the electronic device can look up, in the electronic device, storage information and a storage image that match that information and that live-action image. Finding such matching storage information and storage image indicates that the article to be searched is stored in the live-action area corresponding to the live-action image, so the electronic device can display prompt information to prompt the user accordingly, and the user can then find the article in that live-action area. In this way, by inputting the information of the article to be searched, the user can trigger the electronic device to show the specific storage location of the article, without recording and looking up that location through a notepad application program. This simplifies the process of searching for an article, improves the efficiency of finding it, and improves the human-computer interaction performance.
The electronic device in the embodiment of the invention may be an electronic device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiment of the present invention is not specifically limited in this respect.
The following describes a software environment applied to the article search method provided by the embodiment of the present invention, by taking an android operating system as an example.
Fig. 1 is a schematic diagram of an architecture of a possible android operating system according to an embodiment of the present invention. In fig. 1, the architecture of the android operating system includes 4 layers, which are respectively: an application layer, an application framework layer, a system runtime layer, and a kernel layer (specifically, a Linux kernel layer).
The application program layer comprises various application programs (including system application programs and third-party application programs) in an android operating system.
The application framework layer is a framework of the application, and a developer can develop some applications based on the application framework layer under the condition of complying with the development principle of the framework of the application. For example, an application program for capturing the first captured image in the embodiment of the present invention may be developed based on an application program framework. Such as a camera application.
Generally, the camera application program in the embodiment of the present invention may include two parts, where one part is a service (service) running in a background of the electronic device, and may be used to start a camera to collect a live-action image; the other part is the content displayed on the screen of the electronic device. For example, a live view image captured by a camera displayed on a screen of the electronic device.
The system runtime layer includes libraries (also called system libraries) and android operating system runtime environments. The library mainly provides various resources required by the android operating system. The android operating system running environment is used for providing a software environment for the android operating system.
The kernel layer is an operating system layer of an android operating system and belongs to the bottommost layer of an android operating system software layer. The kernel layer provides kernel system services and hardware-related drivers for the android operating system based on the Linux kernel.
Taking an android operating system as an example, in the embodiment of the present invention, a developer may develop a software program for implementing the article searching method provided in the embodiment of the present invention based on the system architecture of the android operating system shown in fig. 1, so that the article searching method may operate based on the android operating system shown in fig. 1. That is, the processor or the electronic device may implement the item searching method provided by the embodiment of the present invention by running the software program in the android operating system.
The electronic device in the embodiment of the invention can be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a Personal Computer (PC), a Television (TV), a teller machine, or a self-service machine, and the like; the embodiment of the present invention is not particularly limited.
Optionally, the electronic device in the embodiment of the present invention may be an electronic device having an Augmented Reality (AR) function. For example, AR glasses, AR helmets, or AR watches. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
The execution subject of the article searching method provided by the embodiment of the present invention may be the electronic device, or may also be a functional module and/or a functional entity capable of implementing the article searching method in the electronic device, which may be specifically determined according to actual use requirements, and the embodiment of the present invention is not limited. The following takes an electronic device as an example to exemplarily explain the item searching method provided by the embodiment of the present invention.
The article searching method provided by the embodiment of the invention can be applied to scenes of searching any article. For example, the article searching method provided by the embodiment of the invention can be applied to a scene of searching for documents in an office, a scene of searching for articles for daily use at home, a scene of searching for commodities in a supermarket, a scene of searching for automobiles in a parking lot, a scene of searching for medicines in a pharmacy, and the like. That is, it is understood that the articles in the embodiments of the present invention may be any possible articles such as documents, living goods, commodities, automobiles, medicines, and the like.
Taking searching for a certain article at home as an example: when a user needs to use or view an article at home but does not know its specific storage location, the user may input information of the article in a shooting preview interface displayed by the electronic device and trigger the electronic device to collect, in real time through the camera, live-action images of the areas at home (i.e., the scene where the electronic device is currently located). The electronic device may then look up, among the information and images stored in the electronic device, stored information and a stored image that match the information of the article and the live-action image. After the electronic device finds the matching stored information (hereinafter referred to as information A) and stored image (hereinafter referred to as image A), the electronic device may display prompt information prompting the user that the article is stored in the live-action area corresponding to the live-action image, so that the user may find the article in that live-action area at home. In this way, by inputting the information of the article to be searched, the user can trigger the electronic device to show the specific storage location of the article, without recording and looking up that location through a notepad application program. This simplifies the process of searching for an article, improves the efficiency of finding it, and improves the human-computer interaction performance.
It should be noted that, in the embodiment of the present invention, in the above method for finding an article, before a user finds an article through an electronic device, the user may first trigger the electronic device to store information a and image a correspondingly. Therefore, when the user inputs the information of the article in the shooting preview interface displayed by the electronic device, the electronic device can search the information A matched with the information of the article in the electronic device, and further can acquire the image A stored corresponding to the information A.
The following describes an article searching method according to an embodiment of the present invention with reference to the drawings.
The method for searching for the article provided by the embodiment of the invention can comprise two processes, wherein one process is a process of storing the information of the article by the electronic equipment, and the other process is a process of searching the information of the article by the electronic equipment. These two processes are described below as examples.
The first process is as follows: the electronic device stores information of the item.
As shown in fig. 2, corresponding to the first process, the article searching method according to the embodiment of the present invention may include the following steps S201 to S204.
S201, the electronic equipment collects a first storage image.
In the embodiment of the invention, under the condition that the electronic equipment displays the shooting preview interface, a user can trigger the electronic equipment to acquire the first storage image.
Optionally, in the embodiment of the present invention, the user may trigger the electronic device to acquire the first storage image by clicking the shooting control (as shown by 50 in fig. 5).
In an embodiment of the present invention, after the user triggers the electronic device to capture the first stored image, the electronic device may display a prompt message (e.g., the first prompt message). The first prompt message may be used to prompt the user to determine whether to store the first stored image in the general album or the specific album.
It should be noted that the general album is usually visible to the user and is saved in the gallery application program. The specific album may or may not be visible to the user, and may be saved in the gallery application program or in a specific storage area of the electronic device. This can be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
Optionally, in this embodiment of the present invention, the first prompt information may include first prompt content and a first prompt option. The first prompt contents may be used to prompt the user to determine whether to store the first stored image to a general album or a specific album. The first prompt option may include a first option and a second option; the first option may be used to determine to store the first stored image in the general album, that is, the user's input of the first option may be used to determine to store the first stored image in the general album; the second option may be used to determine that the first stored image is stored to a particular album, i.e., user input of the second option may be used to determine that the first stored image is stored to a particular album.
Illustratively, as shown in fig. 3, the first prompt content may be "Store the current image to?" (as shown at 30 in fig. 3), the first option may be a "general album" option (as shown at 31 in fig. 3), and the second option may be a "specific album" option (as shown at 32 in fig. 3).
Optionally, in an embodiment of the present invention, the first stored image may be an image of a storage article in an open state, for example, an image of a cabinet with the cabinet door open, an image of a drawer in an open state, or a shelf image such as the storage image 83 in fig. 8. Alternatively, the first stored image may be an image of the storage article itself, for example, an image of a box or an image of a cabinet.
In the embodiment of the invention, after the electronic device displays the first prompt information, the user may perform an input on the second option to trigger the electronic device to store the first stored image in the specific album. The electronic device may then display the first stored image, and the user may perform inputs on the images of certain articles in the first stored image to trigger the electronic device to identify those article images, so that information about those articles can be obtained.
It should be noted that, in the embodiment of the present invention, after the user performs an input on the first option, the electronic device may store the first stored image in the general album, so that the operation of normally taking a photo may be completed.
S202, the electronic equipment receives a fifth input of the user.
The fifth input may be an input performed by the user on the image of the first article in the first stored image. It will be appreciated that the first stored image is an image of the storage area of the first article.
Optionally, in the embodiment of the present invention, the fifth input may be a click input, a long-press input, or a hard-press (pressure) input performed by the user on the image of the first article, or the like. This can be determined according to actual use requirements, and the embodiment of the present invention is not limited thereto.
S203, the electronic equipment responds to the fifth input, identifies the image of the first article, and obtains first storage information.
In an embodiment of the present invention, after the electronic device receives a fifth input from the user, the electronic device may determine, in response to the fifth input, an input position of the fifth input, and determine an image located at the input position, that is, an image of the first article, and then the electronic device may identify the image of the first article by using an image recognition technology, to obtain information of the first article, that is, the first stored information.
Optionally, in this embodiment of the present invention, the first storage information may include at least one of the following: text information of the first article and picture information of the first article.
The electronic device may obtain the text information of the first article by using an object recognition technology in a case where the information of the first article includes text information, and may obtain the picture information of the first article by using an edge detection technology in a case where the information of the first article includes picture information.
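As a rough illustration of S203, the sketch below (an assumption, not the patented implementation) locates the item image at the position of the fifth input and derives the first stored information from it. The helper functions detect_item_region, crop, and recognize_text are hypothetical placeholders for the edge-detection and object-recognition techniques mentioned above.

```python
# Illustrative sketch of S203 (assumption only): locate the tapped item image
# and build the first stored information (text and/or picture) from it.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StoredInfo:
    text: Optional[str]        # text information of the first article, if recognizable
    picture: Optional[bytes]   # picture information of the first article, if extractable

def detect_item_region(image: bytes, tap: Tuple[int, int]) -> Tuple[int, int, int, int]:
    """Placeholder for an edge-detection step that bounds the item around the tap point."""
    x, y = tap
    return (x - 50, y - 50, 100, 100)   # dummy bounding box

def crop(image: bytes, region: Tuple[int, int, int, int]) -> bytes:
    """Placeholder for cropping the bounded item image out of the stored image."""
    return image                         # dummy: return the image unchanged

def recognize_text(item_image: bytes) -> Optional[str]:
    """Placeholder for an object/character-recognition step."""
    return None                          # dummy: no text recognized

def build_first_stored_info(stored_image: bytes, tap: Tuple[int, int]) -> StoredInfo:
    region = detect_item_region(stored_image, tap)
    picture = crop(stored_image, region)
    text = recognize_text(picture)
    return StoredInfo(text=text, picture=picture)
```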
S204, the electronic equipment correspondingly stores the first storage information and the first storage image.
In the embodiment of the invention, after the electronic device obtains the first storage information, the electronic device can correspondingly store the first storage information and the first storage image.
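A minimal sketch of what "storing correspondingly" in S204 could look like is given below, assuming a simple in-memory record; on a real device the record would be persisted in a database or in the specific album. The record and function names are illustrative, not part of the disclosure.

```python
# Illustrative sketch of S204 (assumption): keep the first storage information
# and the first storage image together in one record, so that matching either
# one later also yields the other (used in process two).

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StorageRecord:
    info_text: Optional[str]       # text information of the first article
    info_picture: Optional[bytes]  # picture information of the first article
    stored_image: bytes            # first storage image (image of the storage area)

item_store: List[StorageRecord] = []

def store_correspondingly(info_text: Optional[str],
                          info_picture: Optional[bytes],
                          stored_image: bytes) -> None:
    # The information and the image are appended as one record.
    item_store.append(StorageRecord(info_text, info_picture, stored_image))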
Optionally, in the embodiment of the present invention, a plurality of users may share storage information and storage images stored in one database. In that case, the plurality of users may form a group (hereinafter referred to as a shared article group). Each member of the group can trigger his or her electronic device to store stored information and stored images in the same database in the server, so that the members of the group can share the stored information and stored images kept in that database.
Optionally, in this embodiment of the present invention, a user of the electronic device may trigger the electronic device to establish a plurality of shared article groups. The storage information and the storage image stored by the user-triggered electronic device in each shared article group in the plurality of shared article groups are stored in the same database of the server, and the storage information and the storage image stored by the user-triggered electronic device in different shared article groups are stored in different databases of the server.
In the embodiment of the present invention, the group including the user of the electronic device in the shared article group is referred to as a target shared article group.
After the electronic device correspondingly stores the first storage information and the first storage image in the electronic device, the electronic device may transmit the first storage information and the first storage image to the server. The server may then match the stored information and stored images in the server's target database (the database corresponding to the target shared article group) against the first storage information and the first storage image. On the one hand, if the server does not find storage information and a storage image matching the first storage information and the first storage image in the target database, the server may store the first storage information and the first storage image, and may send first indication information to the electronic device to instruct the electronic device to ask whether the user of the electronic device wants to share the first storage information and the first storage image with the other users in the target shared article group. After receiving the first indication information, the electronic device may display prompt information (e.g., second prompt information) to ask the user whether to share the first storage information and the first storage image with the other users in the target shared article group. After the user agrees to share, that is, after the electronic device receives an instruction that the user agrees to share, the electronic device may send second indication information to the server, instructing the server to set the access right of the first storage information and the first storage image to open, so that the other users in the target shared article group can also obtain the first storage information and the first storage image. On the other hand, if the server finds storage information (hereinafter referred to as storage information B) and a storage image (hereinafter referred to as storage image B) matching the first storage information and the first storage image in the target database, the server may determine the access right of storage information B and storage image B. In a case where the access right of storage information B and storage image B is open, the server may transmit storage information B and storage image B to the electronic device. In a case where the access right of storage information B and storage image B is closed, the server may transmit third indication information to the electronic device, the third indication information instructing the electronic device to ask whether the user of the electronic device needs to view storage information B and storage image B. If the user needs to view them, that is, after the electronic device receives an instruction that the user wants to view storage information B and storage image B, the electronic device may transmit fourth indication information to the server confirming the viewing of storage information B and storage image B. The server may then send fifth indication information to the first electronic device (the electronic device that sent storage information B and storage image B to the server), instructing the first electronic device to ask whether its user is willing to open the access right for the user of the electronic device.
After the user of the first electronic device agrees, that is, after the first electronic device receives an instruction of the user's agreement, the first electronic device may send sixth indication information to the server, and the server may then send storage information B and storage image B to the electronic device.
It should be noted that finding, in the server, storage information and a storage image matching the first storage information and the first storage image may be understood as meaning that the storage information and storage image found in the server are exactly the same as the first storage information and the first storage image, or that the matching degree between them is greater than or equal to a matching-degree threshold.
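The server-side flow described above might be organized roughly as in the following sketch. This is only one possible reading under simplifying assumptions; the record fields, the string return values standing in for the various indication messages, and the exact matching rule are all illustrative.

```python
# Rough sketch (assumption) of the server-side sharing flow: one database per
# shared article group, with an access right stored per record.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SharedRecord:
    owner_id: str
    info_text: str
    stored_image: bytes
    access_open: bool = False   # access right: open (shared) or closed

group_databases: Dict[str, List[SharedRecord]] = {}   # target database per group

def handle_upload(group_id: str, owner_id: str, info_text: str, image: bytes) -> str:
    db = group_databases.setdefault(group_id, [])
    match = next((r for r in db if r.info_text == info_text), None)  # simplified matching
    if match is None:
        db.append(SharedRecord(owner_id, info_text, image))
        return "first_indication"   # ask the uploader whether to share with the group
    if match.access_open:
        return "send_record"        # storage information B / storage image B can be returned
    return "third_indication"       # ask whether the requester wants to view B

def open_access(group_id: str, owner_id: str, info_text: str) -> None:
    # Called after the owner agrees to share (second / sixth indication information).
    for r in group_databases.get(group_id, []):
        if r.owner_id == owner_id and r.info_text == info_text:
            r.access_open = True
```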
Optionally, in this embodiment of the present invention, if the first article is a storage article (e.g., a cabinet, a box, etc.), after the user performs the fifth input on the first article, the electronic device may enlarge and display the first article, and then the user may perform another input on an article in the first article (hereinafter, referred to as a target article), trigger the electronic device to identify the first article and the target article, and obtain information of the first article and information of the target article. The electronic device may then store the first stored image and the information of the first item and the information of the target item in correspondence.
In the embodiment of the invention, because the user can trigger the electronic device to store the information of certain articles and the images of their storage areas according to actual use requirements, the user can, when an article needs to be found or used, directly input the information of that article in the electronic device. The electronic device can then mark the storage location of the article in the collected live-action image according to the input information, so that the user can find the article accurately and quickly, and the human-computer interaction performance can be improved.
It should be noted that, in the embodiment of the present invention, a first article is taken as an example to describe a process of storing information of an article and a storage image in an electronic device, in an actual implementation, information of a plurality of articles and a storage image of a plurality of articles may be stored in the electronic device correspondingly, and for information of other articles and a storage image of other articles stored in the electronic device correspondingly, reference may be made to the description related to the first article in the above embodiment, and details are not repeated here to avoid repetition.
After the electronic device correspondingly stores the information of the plurality of items and the plurality of stored images (for example, the first stored information and the first stored image) through the above process, the electronic device may find the stored information of the item and the image of the item storage area (hereinafter, referred to as related content) through the following process two, and then determine the storage location of the item through the related content.
It should be noted that, in the embodiment of the present invention, the electronic device that executes the first process and the electronic device that executes the second process may be the same electronic device, or may be different electronic devices, which may be determined specifically according to actual usage requirements, and the embodiment of the present invention is not limited.
And a second process: the electronic device looks up information for the item.
As shown in fig. 4, corresponding to the second process, the article searching method according to the embodiment of the present invention may include the following steps S301 to S303.
S301, the electronic equipment acquires first acquisition information.
The first collected information may be information of the first article input by the user. Specifically, the first collected information may be at least one of name information, color information, shape information, and the like of the first article input by the user. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
Optionally, in the embodiment of the present invention, the user may input the information of the first item in a shooting preview interface displayed by the electronic device.
Optionally, in the embodiment of the present invention, the shooting preview interface of the electronic device may support two modes, a precision mode and a scanning mode. When the electronic device is in the precision mode, after the user triggers the electronic device to display the shooting preview interface, the shooting preview interface may display a search box (e.g., 51 in fig. 5) for the user to input the information of the first article; when the electronic device exits the precision mode, the electronic device may stop displaying the search box in the shooting preview interface and perform the normal photo-taking operation. When the electronic device is in the scanning mode, after the user triggers the electronic device to display the shooting preview interface, the electronic device can automatically collect a live-action image of the environment where the electronic device is located and determine the information of the articles stored in the live-action area corresponding to that image; when the electronic device exits the scanning mode, the electronic device may perform the normal photo-taking operation.
In the embodiment of the invention, a precision mode control may be set in the setting application program for controlling the electronic device to open or close the precision mode, and a scanning mode control may be set in the setting application program for controlling the electronic device to open or close the scanning mode. Alternatively, a single control may be set in an application program, with which the user switches the electronic device between the precision mode and the scanning mode. Specifically, the user may trigger the electronic device to open the precision mode by an input on the control, or trigger the electronic device to open the scanning mode by another input on the control.
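The two preview modes could be modeled roughly as in the following sketch. The enum values and the returned action strings are hypothetical names used only to illustrate the behaviour described above, not the disclosed implementation.

```python
# Illustrative sketch (assumption) of the two shooting-preview modes: precision
# mode shows a search box for the article information; scanning mode captures
# live-action images and lists stored items automatically.

from enum import Enum, auto

class PreviewMode(Enum):
    PRECISION = auto()   # show a search box; the user inputs the item information
    SCANNING = auto()    # capture live-action images and look up stored items automatically
    NORMAL = auto()      # usual photo-taking behaviour

def on_preview_opened(mode: PreviewMode) -> str:
    if mode is PreviewMode.PRECISION:
        return "show_search_box"
    if mode is PreviewMode.SCANNING:
        return "auto_capture_and_lookup"
    return "normal_camera_preview"
```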
Optionally, in the embodiment of the present invention, the shooting preview interface may be a shooting preview interface of a camera application program, a shooting preview interface of a communication application program, a shooting preview interface of a shopping application program, and the like. The method and the device can be used according to actual use requirements, and the embodiment of the invention is not limited.
Optionally, in this embodiment of the present invention, the first collected information may be any possible form of information, such as text information, voice information, or picture information. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
The following describes an exemplary process of acquiring the first collected information by the electronic device, specifically for different forms of the first collected information.
Illustratively, fig. 5 shows a schematic diagram of a shooting preview interface displayed by the electronic device. Assuming that the first article is a hat: in the case where the first acquisition information is text information, the user may input "hat" in the search box 51, and the electronic device may recognize the text "hat", so that the information of the hat (i.e., the first acquisition information) can be acquired. In the case where the first acquisition information is voice information, the user may long-press the voice control 52 and say "hat", and the electronic device may recognize the voice "hat" and thereby acquire the information of the hat. In the case where the first acquisition information is picture information, the user may click the picture control 53 and select, or trigger the taking of, a picture of the hat, and the electronic device may identify the hat in the picture and thereby obtain the information of the hat.
It should be noted that, in a case where the electronic device is an electronic device with an AR function (i.e., an AR device), after the user puts on the AR device, if the AR device is in the precision mode, the AR device may collect the user's voice input in real time, and the user does not need to press the voice control.
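Whatever form the first acquisition information takes, it ultimately has to be comparable with the stored information. The sketch below is one assumed way to normalise the three input forms into text; speech_to_text and recognize_object are placeholder names for the speech-recognition and object-recognition steps mentioned above.

```python
# Sketch (assumption) of S301: normalise the first acquisition information,
# whatever its form, into text that can later be matched against stored information.

from typing import Union

def speech_to_text(audio: bytes) -> str:
    """Placeholder speech-recognition step."""
    return ""

def recognize_object(picture: bytes) -> str:
    """Placeholder object-recognition step."""
    return ""

def normalise_acquisition_info(info: Union[str, bytes], kind: str) -> str:
    if kind == "text":
        return str(info).strip().lower()
    if kind == "voice":
        return speech_to_text(info).strip().lower()
    if kind == "picture":
        return recognize_object(info).strip().lower()
    raise ValueError(f"unsupported input form: {kind}")
```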
S302, the electronic equipment collects a first collected image.
The first captured image may be a live-action image captured by the electronic device, and specifically, the first captured image may be a live-action image of an environment where the electronic device is located, captured by the electronic device. For example, assuming that the scene in which the electronic device is currently located is the home of the user, the first captured image captured by the electronic device may be a live-action image of any area in the home of the user.
Optionally, in an embodiment of the present invention, the first captured image may include an image of an open article, or the first captured image may include an image of an article in storage.
In the embodiment of the invention, after the user inputs the information of the first article, the user can trigger the electronic equipment to acquire the live-action image in real time through the camera by moving the electronic equipment. For example, still taking the current scene of the electronic device as the home of the user as an example, the user may trigger the electronic device to acquire real-scene images of each area in the home of the user in real time through the camera by moving the electronic device.
And S303, under the condition that the first acquisition information is matched with first storage information stored in the electronic equipment and the first acquisition image is matched with a first storage image stored in the electronic equipment, displaying target prompt information by the electronic equipment.
The target prompting information can be used for prompting a user that the first article is located in a real-scene area corresponding to the first collected image. Illustratively, as shown in FIG. 6, the target reminder information may be the reminder information shown at 60.
In the embodiment of the present invention, the matching between the first collected information and the first stored information may be understood as: the first collected information is the same as the first stored information, or the matching degree of the first collected information and the first stored information is greater than or equal to a first preset threshold (e.g., 95%). Matching the first captured image with the first stored image may be understood as: the first captured image is the same as the first stored image or the first captured image matches the first stored image by a degree greater than or equal to a second predetermined threshold (e.g., 95%).
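The matching rules above can be expressed as a small sketch. The first and second preset thresholds come from the text (95% as an example); the similarity measures themselves (a string ratio from Python's difflib and a trivial byte comparison standing in for a real image matcher) are assumptions for illustration only.

```python
# Sketch (assumption) of the matching rules: information matches when identical
# or when the similarity reaches the first preset threshold; images match when
# identical or when the similarity reaches the second preset threshold.

from difflib import SequenceMatcher

FIRST_THRESHOLD = 0.95    # first preset threshold for information matching (example value)
SECOND_THRESHOLD = 0.95   # second preset threshold for image matching (example value)

def info_matches(acquired: str, stored: str) -> bool:
    if acquired == stored:
        return True
    return SequenceMatcher(None, acquired, stored).ratio() >= FIRST_THRESHOLD

def image_similarity(a: bytes, b: bytes) -> float:
    """Placeholder for a real image-matching technique (feature matching, hashing, etc.)."""
    return 1.0 if a == b else 0.0

def image_matches(captured: bytes, stored: bytes) -> bool:
    return image_similarity(captured, stored) >= SECOND_THRESHOLD
```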
In the embodiment of the invention, after the electronic device acquires the first acquisition information, the electronic device can search the storage information and the storage image which are matched with the first acquisition information and the first acquisition image from the related content stored in the specific album of the electronic device. After the electronic device finds the stored information (i.e., the first stored information) matching the first collected information and the stored image (i.e., the first stored image) matching the first collected image, the electronic device may display a target prompting message to prompt the user that the first item is located in the real-world area corresponding to the real-world image, so that the user may find the first item in the real-world area.
It should be noted that, in the embodiment of the present invention, in a case where the first acquisition information is text information, the electronic device may search for the text information of the article in the related content stored in the electronic device. In a case where the first acquisition information is voice information, the electronic device may convert the voice information into text information and then search for that text information in the related content stored in the specific album of the electronic device. In a case where the first acquisition information is picture information, in one possible manner, the electronic device may identify the article in the picture by using an object recognition technology to obtain text information of the article and then search for that text information in the related content stored in the electronic device; in another possible manner, the electronic device may directly search for the picture information of the article in the related content stored in the electronic device.
Optionally, in the embodiment of the present invention, when the electronic device is an AR device, the AR device may automatically generate virtual target prompt information, so that a prompting effect may be improved, user experience may be further increased, and human-computer interaction performance may be improved.
In the article searching method provided by the embodiment of the invention, because the electronic device can correspondingly store the information of an article and the image of the storage area of the article, the electronic device can look up, according to the information of the article to be searched input by the user and the collected live-action image of the scene where the electronic device is located, stored information and a stored image that match that information and that live-action image. Finding such matching stored information and stored image indicates that the article to be searched is stored in the live-action area corresponding to the live-action image, so the electronic device can display prompt information to prompt the user accordingly, and the user can then find the article in that live-action area. In this way, by inputting the information of the article to be searched, the user can trigger the electronic device to show the specific storage location of the article, without recording and looking up that location through a notepad application program. This simplifies the process of searching for an article, improves the efficiency of finding it, and improves the human-computer interaction performance.
Optionally, in this embodiment of the present invention, after the electronic device displays the target prompt message, the user may trigger the electronic device to display the specific storage location of the first item through an input (for example, a first input described below).
For example, after S303, the method for searching for an item according to the embodiment of the present invention may further include S304-S305 described below.
S304, the electronic equipment receives a first input.
Optionally, in the embodiment of the present invention, the first input may be an input of a target control by a user, or a voice input of the user, or any possible input such as a gesture input of the user. The method can be determined according to actual use requirements, and the embodiment of the invention is not limited.
For example, the target control may be the control shown as 61 in fig. 6 (it is to be understood that the target control is a control that triggers the electronic device to display the specific storage location of the first item, that is, a control through which the electronic device displays the target identifier in the first captured image in the embodiment of the present invention). The voice input may be any possible voice input, such as the user saying "specific location". The gesture input may be a gesture input identical to a preset gesture input (it is to be understood that the preset gesture input is the gesture input through which the electronic device displays the target identifier in the first captured image in the embodiment of the present invention).
S305, the electronic equipment responds to the first input and displays the target identification in the first acquired image.
Wherein the target identifier may be used to indicate a specific storage location of the first item in the real world area.
In an embodiment of the present invention, after the electronic device receives a first input from the user, the electronic device may display the target identifier in the first captured image in response to the first input. Specifically, after the electronic device receives a first input of the user, the electronic device may determine a position of the image of the first item in the first storage image, and then the electronic device may display a target identifier at a corresponding position in the first captured image according to the position of the first item in the first storage image, so as to prompt the user of a specific storage position of the first item in the real-world area.
Illustratively, assuming that the first item is a comb, as shown in FIG. 6, the indicator 62 may indicate that the comb is stored in the cabinet 63.
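One simple way to realise the position mapping in S305 is sketched below as an assumption: scale the item's coordinates in the first stored image proportionally into the first captured image and draw the target identifier there. A real device might instead rely on feature matching between the two images; the function name and the sample coordinates are illustrative only.

```python
# Sketch (assumption) of S305: map the item's position recorded in the first
# stored image into the first captured image so the target identifier can be
# drawn at the corresponding spot. A simple proportional mapping is used here.

from typing import Tuple

def map_position(pos_in_stored: Tuple[int, int],
                 stored_size: Tuple[int, int],
                 captured_size: Tuple[int, int]) -> Tuple[int, int]:
    x, y = pos_in_stored
    sw, sh = stored_size
    cw, ch = captured_size
    # Scale the stored-image coordinates into captured-image coordinates.
    return (round(x * cw / sw), round(y * ch / sh))

# Example: an item at (120, 300) in a 960x1280 stored image lands at (60, 150)
# in a 480x640 captured preview frame.
print(map_position((120, 300), (960, 1280), (480, 640)))   # -> (60, 150)
```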
In the embodiment of the invention, as the user can trigger the electronic equipment to display the identifier which can indicate the specific storage position of the first article on the first storage image through one input, the user can quickly find the first article in the corresponding real scene area by combining the first storage image and the identifier, thereby improving the efficiency of finding the article and the man-machine interaction performance.
Optionally, in the embodiment of the present invention, if the user needs to check the items stored in a certain live-action area, the user may trigger the electronic device to be in the scanning mode. Then, the user can trigger the electronic equipment to acquire the image of the real scene area, and after the electronic equipment acquires the real scene image, the electronic equipment can acquire the information of the articles stored in the real scene area according to the real scene image and can display the information of the articles.
For example, after S303, the method for searching for an item according to the embodiment of the present invention may further include S306 to S308 described below.
S306, the electronic equipment acquires a second acquired image.
The second captured image may be a live-action image captured by the electronic device. Specifically, the second captured image may be a live-action image of an environment where the electronic device is located, captured by the electronic device. For other descriptions of the second captured image, reference may be specifically made to the description related to the first captured image in the foregoing embodiment, and details are not repeated here to avoid repetition.
And S307, under the condition that the second collected image is matched with a second storage image stored in the electronic equipment, the electronic equipment acquires at least one piece of second storage information which is stored corresponding to the second storage image.
The at least one second storage information may be information of an article stored in a live-action area corresponding to the second captured image. Specifically, each of the at least one second storage information may include at least one of: the picture information of the article stored in the live-action area corresponding to the second collected image and the text information of the article stored in the live-action area corresponding to the second collected image.
Optionally, in this embodiment of the present invention, the types of the at least one second storage information may be all the same (for example, all the second storage information may be picture information, or all the second storage information may be text information), or may be partially the same (for example, part of the second storage information may be picture information, and another part of the second storage information is text information).
Optionally, in an embodiment of the present invention, there may be one or more articles stored in the live-action area corresponding to the second captured image. This may be determined according to actual use requirements, and is not limited in the embodiment of the present invention.
In the embodiment of the invention, after the electronic equipment acquires the second acquired image, the electronic equipment can search the image matched with the second acquired image from the specific photo album of the electronic equipment. After the electronic device finds an image (i.e., the second stored image) that matches the second captured image, the electronic device may retrieve at least one second stored information that is stored in correspondence with the second stored image.
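A minimal sketch of S307 follows. The album structure (StoredEntry), the average-hash matcher, and the distance threshold are all assumptions introduced for illustration; the patent does not specify how image matching or the association storage is implemented.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class StoredEntry:
    image: np.ndarray                           # stored live-action image (grayscale)
    infos: list = field(default_factory=list)   # picture/text info of items stored there

def average_hash(img: np.ndarray, size: int = 8) -> np.ndarray:
    # Downsample by block averaging, then threshold against the mean.
    h, w = img.shape
    blocks = img[:h - h % size, :w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).flatten()

def match_and_fetch(captured: np.ndarray, album: dict, max_dist: int = 10):
    cap_hash = average_hash(captured)
    best_key, best_dist = None, max_dist + 1
    for key, entry in album.items():
        dist = int(np.count_nonzero(cap_hash != average_hash(entry.image)))
        if dist < best_dist:
            best_key, best_dist = key, dist
    if best_key is None:
        return None, []                         # no stored image matches
    return best_key, album[best_key].infos      # second storage information to display
```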
S308, the electronic equipment displays at least one piece of second storage information.
In the embodiment of the present invention, after the electronic device acquires the at least one piece of second storage information stored in correspondence with the second storage image, the electronic device may display the at least one piece of second storage information, which is used to show to the user what items are stored in the live-action area corresponding to the second captured image.
Optionally, in this embodiment of the present invention, the electronic device may display the at least one piece of second storage information in a list form.
The above-described S306-S308 are exemplarily described below with reference to fig. 7.
For example, after the electronic device acquires the second captured image, the electronic device may display the second captured image as shown in (a) of fig. 7. Then, the electronic device may search for an image matching the second captured image from the electronic device, after the electronic device finds a second stored image matching the second captured image, the electronic device may obtain at least one piece of second stored information stored corresponding to the second stored image, and after the electronic device obtains the at least one piece of second stored information stored corresponding to the second stored image, as shown in (b) of fig. 7, the electronic device may display the at least one piece of second stored information (as shown in 70 of fig. 7). Wherein the at least one second storage information may include: stored information 71 (picture information and text information of facial cleanser), stored information 72 (picture information and text information of hat), stored information 73 (picture information and text information of comb), and stored information 74 (picture information and text information of makeup brush).
It should be noted that, in the embodiment of the present invention, the electronic device may execute the above S306-S308 at any time. Specifically, the electronic device may execute the above S306-S308 based on the embodiment provided in S301-S303 shown in fig. 4, or the electronic device may also execute the above S306-S308 alone, or the electronic device may also execute the above S306-S308 based on any embodiment described below.
In the embodiment of the invention, the electronic equipment can acquire the information of the articles stored in the live-action area corresponding to the live-action image according to the live-action image acquired by the electronic equipment and display the information of the articles to the user, so that the user can quickly determine the articles stored in the current live-action area (such as a cabinet) according to the information of the articles, the articles can be conveniently checked or used by the user, the process of searching the articles can be further shortened, the efficiency of searching the articles is improved, and the human-computer interaction performance is improved.
Optionally, in the embodiment of the present invention, after the user moves a certain article stored in the live-action area corresponding to the second captured image from the live-action area to another live-action area, the user may trigger the electronic device to correspondingly store the information of the article and the image corresponding to the another live-action area (i.e., reestablish the association relationship between the information of the article and the image), so that the user may know the storage location of the article accurately and in real time, and further, the user may find the article quickly, and the human-computer interaction performance is improved.
The following describes an exemplary method for the electronic device to reestablish the association relationship between the information of the item and the image, through two possible implementation manners.
First possible implementation
In a first possible implementation manner, after the user moves a certain article stored in the live-action area corresponding to the second captured image from that live-action area to another live-action area, the user may trigger, through an input on the information of the article, the electronic device to display the images stored in the specific album of the electronic device, and may then drag the information of the article onto the stored image corresponding to the other live-action area. In this way, the electronic device may store the information of the article and the stored image corresponding to the other live-action area correspondingly, that is, the electronic device may establish an association relationship between the information of the article and the stored image corresponding to the other live-action area.
For example, after S308, the method for searching for an item provided in the embodiment of the present invention may further include S309-S3012 described below.
S309, the electronic equipment receives a second input.
The second input may be an input of a third stored information of the at least one second stored information by the user.
Optionally, in this embodiment of the present invention, the second input may specifically be a long-press input of the third stored information by the user, or a force-press (pressure) input of the third stored information by the user, and the like. This may be determined according to actual use requirements, and is not limited in the embodiment of the present invention.
S3010, the electronic device responds to a second input and displays at least one stored image stored in the electronic device.
In the embodiment of the present invention, after the electronic device receives the second input of the user, the electronic device may acquire at least one stored image stored in the electronic device in response to the second input, and then the electronic device may display the at least one stored image.
Specifically, the electronic device may retrieve at least one stored image stored in a particular album of the electronic device in response to the second input. For the description of the specific album, reference may be specifically made to the related description in the above-mentioned process one, and details are not described here again to avoid repetition.
S3011, the electronic device receives a third input.
The third input may be an input of a third storage image of the at least one storage image by the user.
Optionally, in an embodiment of the present invention, the third input may specifically be an input that a user drags the third storage information to the third storage image.
S3012, the electronic device responds to a third input and correspondingly stores third storage information and a third storage image.
In the embodiment of the present invention, after the electronic device receives a third input from the user, the electronic device may, in response to the third input, acquire third storage information and a third storage image, and store the third storage information and the third storage image correspondingly, that is, establish an association relationship between the third storage information and the third storage image.
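Under the same assumed album structure as sketched above, S3012 reduces to re-pointing an association, as the following sketch shows. The function name and the dictionary-based album are illustrative assumptions, not the patent's data model.

```python
def reassociate(album: dict, item_info, old_key: str, new_key: str) -> None:
    # Establish the association between the item's storage information and the
    # third stored image ...
    album[new_key].infos.append(item_info)
    # ... and release it from the image it was previously associated with
    # (removal of the item's picture itself is covered separately; see S3013 below).
    album[old_key].infos = [i for i in album[old_key].infos if i is not item_info]
```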
The following describes the above S309-S3012 exemplarily with reference to fig. 7 and 8.
For example, assume that the user moves the hat stored in the live-action area corresponding to the second captured image shown in fig. 7 from that live-action area to the live-action area corresponding to the stored image shown as 83 in fig. 8. The user may perform the second input on the stored information 72 (the stored information 72 is the information of the hat), and, as shown in (a) of fig. 8, the electronic device may display at least one stored image stored in the electronic device. The at least one stored image includes: a stored image 80, a stored image 81, a stored image 82, and a stored image 83. After the user performs the third input on the stored image 83, that is, the user drags the stored information 72 onto the stored image 83, as shown in (b) of fig. 8, the electronic device may acquire the stored information 72 and the stored image 83 and store them correspondingly, that is, establish an association relationship between the stored information 72 and the stored image 83.
Optionally, in this embodiment of the present invention, the third input may be completed through two sub-inputs, namely, the first sub-input and the second sub-input. Specifically, in a case where the third input includes the first sub-input and the second sub-input, and the third storage information includes an image of the second item, the S3012 may be specifically implemented by the following S3012a-S3012d.
S3012a, the electronic device receives a first sub-input.
Optionally, in this embodiment of the present invention, the first sub-input may be an input that the user drags the image of the second item to the third stored image.
S3012b, the electronic device displays an image of the second item on the third stored image in response to the first sub-input.
Optionally, in this embodiment of the present invention, in response to the first sub-input, the electronic device may first display the third stored image in an enlarged manner, and then may display the image of the second item on the third stored image.
In the embodiment of the present invention, after the electronic device receives the first sub-input of the user, the electronic device may enlarge and display the third stored image, and display the image of the second item on the third stored image.
S3012c, the electronic device receives a second sub-input.
Optionally, in this embodiment of the present invention, the second sub-input may be a drag input of the user to the image of the second item. The second sub-input may be used to adjust a display position of the image of the second item on the third stored image. For example, assume that the image of the second item is the hat image shown as 721 in fig. 7, and that the third stored image is the rack image shown as 83 in fig. 8. Before the user performs the first sub-input, the hat image is located in the live-action region corresponding to the second captured image shown in fig. 7. If the user wants to move the hat from that region to the live-action region corresponding to the rack shown as 83, then after the user performs the first sub-input, the electronic device may display the hat image beside the rack image, and the user may then move the hat image from the side of the rack image to the bottom shelf of the rack image through the second sub-input.
S3012d, the electronic device responds to the second sub-input, synthesizes the third storage image and the image of the second object to obtain a fourth storage image, and correspondingly stores the third storage information and the fourth storage image.
In the embodiment of the present invention, after the electronic device receives the second sub-input of the user, the electronic device may synthesize the third stored image and the image of the second item in response to the second sub-input, so as to obtain the fourth stored image. Specifically, the electronic device may synthesize the third stored image and the image of the second item by means of screen capture to obtain the fourth stored image. It is understood that the display position of the image of the second item in the fourth stored image may be the display position adjusted by the user through the second sub-input.
Further, the electronic device may store the third storage information and the fourth storage image correspondingly. That is, after the user moves an item in one real-scene area from the real-scene area to another real-scene area, the user may trigger the electronic device to re-establish the association between the information of the item and the image of the other real-scene area, so that the association between the information of the item and the image may be updated in real time according to the actual storage location of the item.
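A minimal sketch of the compositing in S3012d is shown below, using Pillow's paste operation as a stand-in for the screen-capture synthesis described above; the position argument corresponds to the display position adjusted through the second sub-input.

```python
from PIL import Image

def compose_fourth_image(third_stored: Image.Image,
                         item_img: Image.Image,
                         position: tuple) -> Image.Image:
    fourth = third_stored.copy()
    # Use the item's alpha channel as a mask if it has one, so only the item
    # itself (e.g. the hat) is pasted rather than a rectangular background.
    mask = item_img if item_img.mode == "RGBA" else None
    fourth.paste(item_img, position, mask)      # position = user-adjusted (x, y)
    return fourth
```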
The above-mentioned S3012a-S3012d will be exemplarily described with reference to fig. 7, 8 and 9.
Illustratively, in conjunction with FIG. 7 above, assume that the image of the second item is a hat image, indicated at 721. As shown in fig. 9 (a), when the user drags the image 721 to the rack image 83, the electronic apparatus may display the hat image 721 on the rack image 83 as shown in fig. 9 (b). As shown in (c) of fig. 9, after the user moves the hat image 721 from the position shown by 90 to the position shown by 91, the electronic device may combine the hat image 721 and the storage-shelf image 83 by means of screen-capturing to obtain a fourth stored image (shown as 92 in fig. 9), and store the information 72 of the hat and the fourth stored image 92 correspondingly.
In the embodiment of the invention, after a user moves an article from one live-action area to another live-action area, the user can trigger the electronic equipment to correspondingly store the information of the article and the live-action image corresponding to the other live-action area, namely, the association relation between the information of the article and the live-action image is established, so that the user can know the storage position of the article accurately and in real time, the user can find the article quickly, and the man-machine interaction performance can be improved.
Further, after the user moves an article from one real-scene area to a certain position in another real-scene area, the user can trigger the electronic device to adjust the position of the article in the image of the other real-scene area through an input, that is, the image of the article and the position of the article in the area correspond to each other, so that the electronic device can more accurately indicate the actual position of the article, the user can conveniently and accurately find the article, and the human-computer interaction performance is improved.
Optionally, in the embodiment of the present invention, after the user moves the second item to the live-action area corresponding to the third storage image, the electronic device may be triggered to delete the image of the second item from the second storage image, that is, to release the association relationship between the second storage image and the information of the second item.
For example, after S3012d, the method for finding an item according to the embodiment of the present invention may further include S3013 described below.
S3013, the electronic device deletes the image of the second object from the second storage image.
In the embodiment of the present invention, after the electronic device stores the third storage information and the fourth storage image in correspondence, the electronic device may further delete the image of the second item from the second storage image. Specifically, the electronic device may delete the image of the second item from the second stored image through an image processing technique (e.g., a matting technique), that is, the electronic device may release the association relationship of the information of the second item in the second stored image.
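As a sketch of S3013, the removal can be approximated with inpainting over the item's region, assuming its bounding box in the stored image is known. The patent only requires some image processing technique such as matting, so cv2.inpaint is merely one possible stand-in.

```python
import cv2
import numpy as np

def remove_item(stored_img: np.ndarray, bbox: tuple) -> np.ndarray:
    x, y, w, h = bbox                                  # region occupied by the item
    mask = np.zeros(stored_img.shape[:2], dtype=np.uint8)
    mask[y:y + h, x:x + w] = 255
    # Fill the removed region from its surroundings.
    return cv2.inpaint(stored_img, mask, 3, cv2.INPAINT_TELEA)
```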
In the embodiment of the invention, after the electronic equipment establishes the incidence relation between the third storage information and the fourth storage image, the incidence relation between the third storage information and the second storage image can be automatically released, that is, the electronic equipment can be ensured to update the incidence relation between the information and the image of the article in real time, so that the electronic equipment can more accurately indicate the storage position of the article, a user can conveniently and accurately find the article, and the man-machine interaction performance is improved.
Second possible implementation
In a second possible implementation manner, after the user moves a certain item stored in the live-action area corresponding to the second captured image from that live-action area to another live-action area, the user may trigger the electronic device to capture a live-action image of the other live-action area. The electronic device may then search the specific album for a storage image (hereinafter referred to as storage image 1) matching the live-action image and determine the object in the live-action image that is not present in storage image 1. The electronic device may thus store the information of that object and the live-action image correspondingly (i.e., establish an association relationship between the information of the object and the live-action image) and delete the image of the object from storage image 1 (i.e., release the association relationship between storage image 1 and the information of the object).
For example, after S303, the method for searching for an article according to the embodiment of the present invention may further include S3014 to S3017 described below.
S3014, the electronic device obtains a third collected image.
The third captured image may be a live-action image captured by the electronic device. Specifically, the third captured image may be a live-action image of an environment where the electronic device is located, captured by the electronic device. For other descriptions of the third captured image, reference may be specifically made to the description related to the first captured image in the foregoing embodiment, and details are not repeated here to avoid repetition.
S3015, the electronic device searches for a storage image matched with the third collected image in the electronic device.
In the embodiment of the present invention, after the electronic device acquires the third captured image, the electronic device may search for a stored image matching the third captured image in the electronic device. Specifically, if the electronic device finds an image (for example, a fifth stored image described below) matching the third captured image in the electronic device, that is, if the third captured image matches the fifth stored image stored in the electronic device, the electronic device may perform S3016 described below. If the electronic device does not find an image matching the third captured image in the electronic device, that is, the third captured image does not match any one of the stored images stored in the electronic device, the electronic device may perform S3017 described below.
S3016, the electronic device determines the target object in the third acquired image and correspondingly stores information of the target object and the third acquired image.
Wherein the target object may be an object in the third captured image other than the object in the fifth stored image.
Optionally, in this embodiment of the present invention, the information of the target object may include at least one of the following: picture information of the target object, text information of the target object.
In the embodiment of the present invention, when the electronic device finds an image (the fifth stored image) matching the third captured image, that is, when the third captured image matches the fifth stored image stored in the electronic device, the electronic device may determine the target object in the third captured image. Specifically, the electronic device may identify the objects in the third captured image and the objects in the fifth stored image, respectively, and then compare them in sequence, so as to determine the object in the third captured image other than the objects in the fifth stored image (i.e., the target object). Further, after the electronic device determines the target object, the electronic device may correspondingly store the information of the target object and the third captured image.
Still further, after the electronic device correspondingly stores the information of the target object and the third captured image, the electronic device may delete the image of the target object from the fifth stored image, that is, the electronic device may release the association relationship between the fifth stored image and the information of the target object.
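The difference determination in S3016 can be sketched as a label-level comparison, assuming a detector that returns object labels for an image; the patent does not prescribe a particular recognition model, so the detect callable is a placeholder.

```python
from collections import Counter
from typing import Callable, List

def find_target_objects(detect: Callable[[object], List[str]],
                        third_captured, fifth_stored) -> List[str]:
    # Labels present in the captured image but absent from the stored image
    # correspond to the newly added ("target") objects.
    added = Counter(detect(third_captured)) - Counter(detect(fifth_stored))
    return list(added.elements())
```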
In the embodiment of the invention, the electronic equipment releases the incidence relation between the fifth storage image and the information of the target object, so that the electronic equipment can update the incidence relation between the information of the object and the image according to the actual storage position of the object, the electronic equipment can more accurately indicate the storage position of the object, a user can search the object quickly and accurately, and the man-machine interaction performance is improved.
S3017, the electronic device displays the third captured image, and responds to a fourth input of the user to the first object in the third captured image, and correspondingly stores information of the first object and the third captured image.
Optionally, in this embodiment of the present invention, the information of the first object may include at least one of the following: picture information of the first object, text information of the first object.
In the embodiment of the present invention, when the electronic device does not find the stored image matched with the third captured image, that is, the third captured image is not matched with any stored image stored in the electronic device, the electronic device may display the third captured image. The user may then perform a fourth input on the first object in the third captured image. After the electronic device receives a fourth input by the user, the electronic device may correspondingly store the information of the first object and the third captured image in response to the fourth input. Specifically, the electronic device may acquire an image of the first object from the third captured image, recognize the image of the first object, and obtain information of the first object, and then the electronic device may correspondingly store the information of the first object and the third captured image.
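A minimal sketch of S3017 under the same assumed album structure: crop the region selected by the fourth input, recognize it, and associate the resulting information with the third captured image. The recognize callable and the info dictionary layout are illustrative assumptions.

```python
def register_new_object(album: dict, key: str, captured, selection_bbox, recognize):
    x, y, w, h = selection_bbox
    object_img = captured[y:y + h, x:x + w]            # picture information
    label = recognize(object_img)                      # text information
    # Create an entry for the new captured image if none exists, then attach
    # the first object's information to it.
    entry = album.setdefault(key, StoredEntry(image=captured))
    entry.infos.append({"picture": object_img, "text": label})
```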
In the embodiment of the present invention, the electronic device may determine, according to the collected live-action image, whether there is an added article in the live-action area corresponding to the live-action image, and in a case where it is determined that there is an added article in the live-action area, the electronic device may establish an association relationship between the added article and the collected image, and the electronic device may further determine whether there is an association relationship between the added article and another stored image stored in a specific album of the electronic device. Under the condition that the electronic equipment determines that the newly added article is related to a certain storage image stored in a specific photo album of the electronic equipment, the electronic equipment can release the related relationship between the newly added article and the storage image, so that the electronic equipment can update the related relationship between the article and the image in real time according to the actual position of the article, the electronic equipment can accurately indicate the actual position of the article, a user can conveniently and accurately find the article, and the man-machine interaction performance is improved.
It should be noted that, in the embodiment of the present invention, the article searching method shown in each method drawing is exemplarily described with reference to one drawing of the embodiment of the present invention. In specific implementation, the article searching method shown in each method drawing may also be implemented in combination with any other drawing illustrated in the foregoing embodiments that can be combined with it, and details are not described here again.
As shown in fig. 10, an embodiment of the present invention provides an electronic device 400, where the electronic device 400 includes an obtaining module 401, an acquiring module 402, and a processing module 403. The obtaining module 401 may be configured to obtain first collected information, where the first collected information is information of a first article input by a user; an acquisition module 402, which may be configured to acquire a first acquired image, where the first acquired image is an acquired live-action image; the processing module 403 may be configured to display target prompt information when the first acquisition information acquired by the acquisition module 401 matches with first storage information stored in the electronic device and the first acquisition image acquired by the acquisition module 402 matches with a first storage image stored in the electronic device, where the target prompt information may be used to prompt a user that the first item is located in a live-action area corresponding to the first acquisition image, the first storage information and the first storage image may be stored in the electronic device correspondingly, the first storage information is information of the first item, and the first storage image is an image of a storage area of the first item.
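The division of labor among the three modules can be pictured with the following sketch (an illustration only, not the actual implementation of the electronic device 400): the obtaining module supplies the user's input, the acquisition module supplies the live-action frame, and the processing module displays the target prompt information when both match stored data. The matcher callables are assumptions carried over from the earlier sketches.

```python
class ItemFinder:
    def __init__(self, album: dict, match_info, match_image, show_prompt):
        self.album = album                # stored info/image pairs
        self.match_info = match_info      # e.g. keyword or picture comparison
        self.match_image = match_image    # e.g. the hash matcher sketched earlier
        self.show_prompt = show_prompt    # UI callback of the processing module

    def on_search(self, first_acquisition_info, first_captured_image):
        # Display the target prompt only when both the item information and the
        # live-action image match a stored information/image pair.
        for entry in self.album.values():
            info_hit = any(self.match_info(first_acquisition_info, info)
                           for info in entry.infos)
            if info_hit and self.match_image(first_captured_image, entry.image):
                self.show_prompt("The first item is in this live-action area")
                return entry
        return None
```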
Optionally, in this embodiment of the present invention, the processing module 403 may be further configured to, after displaying the target prompt message, in response to the first input, display a target identifier in the first captured image, where the target identifier may be used to indicate a storage location of the first item in the live-action area.
Optionally, in the embodiment of the present invention, the obtaining module 401 may be further configured to obtain a second collected image; under the condition that the second collected image is matched with a second storage image stored in the electronic equipment, at least one piece of second storage information which is stored correspondingly to the second storage image is obtained; the second collected image may be a collected live-action image, and the at least one second storage information may be information of an article stored in a live-action area corresponding to the second collected image; the processing module 403 may be further configured to display the at least one second storage information acquired by the acquiring module 401.
Optionally, in this embodiment of the present invention, the processing module 403 may be further configured to, after displaying the at least one piece of second storage information, respond to a second input, and display at least one storage image stored in the electronic device; and in response to a third input, correspondingly storing third storage information and a third storage image. Wherein the second input may be an input to a third stored information of the at least one second stored information, and the third input may be an input to a third stored image of the at least one stored image.
Optionally, in an embodiment of the present invention, in a case that the third input includes a first sub input and a second sub input, and the third storage information includes an image of the second item, the processing module 403 may be specifically configured to, in response to the first sub input, enlarge and display the third storage image, and display the image of the second item on the third storage image; and responding to the second sub-input, synthesizing the third storage image and the image of the second object to obtain a fourth storage image, and correspondingly storing the third storage information and the fourth storage image.
Optionally, in this embodiment of the present invention, the processing module 403 may be further configured to delete the image of the second object from the second stored image after the third stored image and the image of the second object are combined.
Optionally, in the embodiment of the present invention, the obtaining module 401 may further be configured to obtain a third collected image, where the third collected image may be a collected live-action image; the processing module 403 may be further configured to, when the third captured image acquired by the acquiring module 401 matches a fifth stored image stored in the electronic device, determine a target object in the third captured image, and store information of the target object and the third captured image correspondingly, where the target object is an object in the third captured image except for an object in the fifth stored image; or, in the case that the third captured image does not match any of the stored images stored in the electronic device, displaying the third captured image, and in response to a fourth input of the first object in the third captured image by the user, correspondingly storing information of the first object and the third captured image.
Optionally, in an embodiment of the present invention, the acquiring module 402 may be further configured to acquire a first storage image before acquiring the first acquisition information; the processing module 403 may be further configured to identify the image of the first item in response to a fifth input of the user to the image of the first item in the first storage image acquired by the acquiring module 402, so as to obtain first storage information; and correspondingly storing the first storage information and the first storage image.
The electronic device provided by the embodiment of the present invention can implement each process implemented by the electronic device in the above method embodiments, and is not described herein again to avoid repetition.
An embodiment of the present invention provides an electronic device, where after the electronic device acquires first acquisition information (information of a first article input by a user) and acquires a first acquisition image (an acquired live-action image), if the first acquisition information matches first storage information (information of the first article) stored in the electronic device and the first acquisition image matches a first storage image (an image of a storage area of the first article) stored in the electronic device, the electronic device may display target prompt information for prompting the user that the first article is located in the live-action area corresponding to the first acquisition image. The first storage information and the first storage image are correspondingly stored in the electronic equipment. According to the scheme, the electronic equipment can correspondingly store the information of a certain article and the image of the storage area of the article, so that the electronic equipment can search the storage information and the storage image matched with the information and the live-action image in the electronic equipment according to the information of the article to be searched input by a user and the acquired live-action image of the scene where the electronic equipment is located, and after the electronic equipment searches the storage information and the storage image matched with the information and the live-action image, the article to be searched can be shown to be stored in the live-action area corresponding to the live-action image, so that the electronic equipment can display prompt information to prompt the user that the article to be searched is stored in the live-action area corresponding to the live-action image, and the user can find the article in the live-action area. Therefore, the user can trigger the electronic equipment to display the specific storage position of the object to the user by inputting the information of the object to be searched, and the specific storage position of the object does not need to be recorded and searched through the notebook application program, so that the process of searching the object can be simplified, the efficiency of searching the object is improved, and the human-computer interaction performance is improved.
Fig. 11 is a hardware schematic diagram of an electronic device implementing various embodiments of the invention. As shown in fig. 11, the electronic device 100 includes, but is not limited to: a radio frequency unit 101, a network module 102, an audio output unit 103, an input unit 104, a sensor 105, a display unit 106, a user input unit 107, an interface unit 108, a memory 109, a processor 110, a power supply 111, a camera 112, and the like. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 11 does not constitute a limitation of electronic devices, which may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 110 may be configured to control the input unit 104 to obtain the first acquisition information; control the camera 112 to collect the first collected image; and control the display unit 106 to display the target prompt information under the condition that the first acquisition information matches the first storage information stored in the electronic device and the first collected image matches the first storage image stored in the electronic device. The first acquisition information is information of a first article input by a user, the first collected image is an acquired live-action image, and the target prompt information is used for prompting the user that the first article is located in a live-action area corresponding to the first collected image; the first storage information and the first storage image are correspondingly stored in the electronic equipment; the first storage information is information of the first article, and the first storage image is an image of a storage area of the first article.
It can be understood that, in the embodiment of the present invention, the obtaining module 401 in the schematic structural diagram of the electronic device (for example, fig. 10) may be implemented by the input unit 104, the acquisition module 402 in the schematic structural diagram of the electronic device (for example, fig. 10) may be implemented by the camera 112, and the processing module 403 in the schematic structural diagram of the electronic device (for example, fig. 10) may be implemented by the display unit 106.
In addition, in the embodiment of the present invention, in the case that the processing module 403 is used for storing the storage information and the storage image correspondingly (for example, storing the third storage information and the fourth storage image correspondingly), the processing module 403 may be implemented by the memory 109.
The embodiment of the invention provides an electronic device. After the electronic device acquires first acquisition information (information of a first article input by a user) and acquires a first acquisition image (an acquired live-action image), if the first acquisition information matches first storage information (information of the first article) stored in the electronic device, and the first acquisition image matches a first storage image (an image of a storage area of the first article) stored in the electronic device, the electronic device may display target prompt information for prompting the user that the first article is located in the live-action area corresponding to the first acquisition image. The first storage information and the first storage image are correspondingly stored in the electronic equipment. According to the scheme, the electronic equipment can correspondingly store the information of a certain article and the image of the storage area of the article, so that the electronic equipment can search the storage information and the storage image matched with the information and the live-action image in the electronic equipment according to the information of the article to be searched input by a user and the acquired live-action image of the scene where the electronic equipment is located, and after the electronic equipment searches the storage information and the storage image matched with the information and the live-action image, the article to be searched can be shown to be stored in the live-action area corresponding to the live-action image, so that the electronic equipment can display prompt information to prompt the user that the article to be searched is stored in the live-action area corresponding to the live-action image, and the user can find the article in the live-action area. Therefore, the user can trigger the electronic equipment to display the specific storage position of the object to the user by inputting the information of the object to be searched, and the specific storage position of the object does not need to be recorded and searched through a notebook application program, so that the process of searching the object can be simplified, the efficiency of searching the object is improved, and the human-computer interaction performance is improved.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 101 may be used for receiving and sending signals during a message transmission or call process, and specifically, after receiving downlink data from a base station, the downlink data is processed by the processor 110; in addition, the uplink data is transmitted to the base station. Typically, radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 can also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 102, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the network module 102 or stored in the memory 109 into an audio signal and output as sound. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the electronic apparatus 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 includes a speaker, a buzzer, a receiver, and the like.
The input unit 104 is used to receive an audio or video signal. The input unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042. The graphics processor 1041 processes image data of still pictures or video obtained by an image capturing device (e.g., the camera 112) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphics processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the network module 102. The microphone 1042 may receive sound and may be capable of processing such sound into audio data. In the case of a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 101 and output.
The electronic device 100 also includes at least one sensor 105, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or the backlight when the electronic device 100 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensors 105 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 106 is used to display information input by a user or information provided to the user. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an organic light-emitting diode (OLED), or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 107 includes a touch panel 1071 and other input devices 1072. Touch panel 1071, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 1071 (e.g., operations by a user on or near touch panel 1071 using a finger, stylus, or any suitable object or attachment). The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and receives and executes commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. Specifically, other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 1071 may be overlaid on the display panel 1061, and when the touch panel 1071 detects a touch operation on or near it, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although in fig. 11 the touch panel 1071 and the display panel 1061 are two independent components to implement the input and output functions of the electronic device, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the electronic device, which is not limited here.
The interface unit 108 is an interface for connecting an external device to the electronic apparatus 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 100 or may be used to transmit data between the electronic apparatus 100 and the external device.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the electronic device. Processor 110 may include one or more processing units; alternatively, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The electronic device 100 may further include a power supply 111 (e.g., a battery) for supplying power to each component, and optionally, the power supply 111 may be logically connected to the processor 110 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the electronic device 100 includes some functional modules that are not shown, and are not described in detail herein.
Optionally, an embodiment of the present invention further provides an electronic device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements the processes of the foregoing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
Optionally, an embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the foregoing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may include a read-only memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling an electronic device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (15)

1. An article searching method applied to electronic equipment is characterized by comprising the following steps:
acquiring first acquisition information, wherein the first acquisition information is information of a first article input by a user;
acquiring a first acquired image, wherein the first acquired image is an acquired live-action image;
displaying target prompt information under the condition that the first acquisition information is matched with first storage information stored in the electronic equipment and the first acquisition image is matched with a first storage image stored in the electronic equipment, wherein the target prompt information is used for prompting a user that the first article is located in a real-scene area corresponding to the first acquisition image;
the first storage information and the first storage image are correspondingly stored in the electronic equipment, the first storage information is information of the first article, and the first storage image is an image of a storage area of the first article.
2. The method of claim 1, wherein after displaying the target reminder information, the method further comprises:
in response to a first input, displaying a target identifier in the first captured image, the target identifier indicating a storage location of the first item in the real world area.
3. The method according to claim 1 or 2, characterized in that the method further comprises:
acquiring a second acquired image, wherein the second acquired image is an acquired live-action image;
under the condition that the second collected image is matched with a second storage image stored in the electronic equipment, at least one piece of second storage information which is stored corresponding to the second storage image is acquired, wherein the at least one piece of second storage information is information of an article stored in a real scene area corresponding to the second collected image;
displaying the at least one second stored information.
4. The method of claim 3, wherein after displaying the at least one second stored information, the method further comprises:
displaying at least one stored image stored in the electronic device in response to a second input, the second input being an input to a third stored information of the at least one second stored information;
and responding to a third input, wherein the third storage information and a third storage image are correspondingly stored, and the third input is input of the third storage image in the at least one storage image.
5. The method of claim 4, wherein the third input comprises a first sub-input and a second sub-input, and the third stored information comprises an image of a second item;
the corresponding storage of the third storage information with a third storage image in response to a third input includes:
displaying an image of the second item on the third stored image in response to the first sub-input;
and responding to the second sub-input, synthesizing the third storage image and the image of the second object to obtain a fourth storage image, and correspondingly storing the third storage information and the fourth storage image.
6. The method of claim 5, wherein after compositing the third stored image and the image of the second item, the method further comprises:
deleting the image of the second item from the second stored image.
7. The method of claim 1, further comprising:
acquiring a third acquired image, wherein the third acquired image is an acquired live-action image;
under the condition that the third acquired image is matched with a fifth stored image stored in the electronic equipment, determining a target object in the third acquired image, and correspondingly storing the information of the target object and the third acquired image, wherein the target object is an object in the third acquired image except for the object in the fifth stored image;
or,
and displaying the third acquired image under the condition that the third acquired image is not matched with any one of the stored images stored in the electronic equipment, and responding to a fourth input of a user to a first object in the third acquired image, and correspondingly storing the information of the first object and the third acquired image.
8. An electronic device is characterized by comprising an acquisition module, an acquisition module and a processing module;
the acquisition module is used for acquiring first acquisition information, wherein the first acquisition information is information of a first article input by a user;
the acquisition module is used for acquiring a first acquired image, and the first acquired image is an acquired live-action image;
the processing module is configured to display target prompt information under the condition that the first acquisition information acquired by the acquisition module matches first storage information stored in the electronic device and the first acquisition image acquired by the acquisition module matches a first storage image stored in the electronic device, where the target prompt information is used to prompt a user that the first article is located in a live-action area corresponding to the first acquisition image;
the first storage information and the first storage image are correspondingly stored in the electronic equipment, the first storage information is information of the first article, and the first storage image is an image of a storage area of the first article.
9. The electronic device of claim 8,
the processing module is further configured to display a target identifier in the first captured image in response to a first input after displaying the target prompt message, the target identifier being used to indicate a storage location of the first item in the real-world area.
10. The electronic device of claim 8 or 9,
the acquisition module is further configured to acquire a second acquired image, wherein the second acquired image is an acquired live-action image; the obtaining module is further configured to obtain, under the condition that the second acquired image matches a second stored image stored in the electronic device, at least one piece of second stored information correspondingly stored with the second stored image, wherein the at least one piece of second stored information is information of items stored in a live-action area corresponding to the second acquired image;
the processing module is further configured to display the at least one piece of second stored information obtained by the obtaining module.
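Claim 10 is essentially the reverse query of claim 8: the camera view identifies a storage area and the device lists every item recorded for it. A one-function illustration, reusing the hypothetical StoredRecord, List and images_match names from the claim-8 sketch above:

    # Sketch of claim 10: list the stored information of every item whose
    # storage-area photo matches the currently acquired live-action image.
    def items_in_view(acquired: bytes, records: List[StoredRecord]) -> List[str]:
        return [record.info for record in records if images_match(acquired, record.area_image)]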
11. The electronic device of claim 10,
the processing module is further configured to, after the at least one piece of second stored information is displayed, display at least one stored image stored in the electronic device in response to a second input, and correspondingly store third stored information and a third stored image in response to a third input;
wherein the second input is an input to the third stored information in the at least one piece of second stored information, and the third input is an input to the third stored image in the at least one stored image.
12. The electronic device of claim 11, wherein the third input comprises a first sub-input and a second sub-input, and wherein the third stored information comprises an image of a second item;
the processing module is specifically configured to display the image of the second item on the third stored image in response to the first sub-input, and, in response to the second sub-input, composite the third stored image and the image of the second item to obtain a fourth stored image and correspondingly store the third stored information and the fourth stored image.
13. The electronic device of claim 12,
the processing module is further configured to delete the image of the second item from the second stored image after the third stored image and the image of the second item are composited.
14. The electronic device of claim 8,
the acquisition module is further configured to acquire a third acquired image, wherein the third acquired image is an acquired live-action image;
the processing module is further configured to, under the condition that the third acquired image acquired by the acquisition module matches a fifth stored image stored in the electronic device, determine a target object in the third acquired image and correspondingly store information of the target object and the third acquired image, wherein the target object is an object in the third acquired image other than the objects in the fifth stored image; or, under the condition that the third acquired image does not match any stored image stored in the electronic device, display the third acquired image, and in response to a fourth input of a user to a first object in the third acquired image, correspondingly store information of the first object and the third acquired image.
15. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the item searching method according to any one of claims 1 to 7.
CN201911357340.9A 2019-12-25 2019-12-25 Article searching method and electronic equipment Pending CN111046211A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911357340.9A CN111046211A (en) 2019-12-25 2019-12-25 Article searching method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911357340.9A CN111046211A (en) 2019-12-25 2019-12-25 Article searching method and electronic equipment

Publications (1)

Publication Number Publication Date
CN111046211A true CN111046211A (en) 2020-04-21

Family

ID=70239665

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911357340.9A Pending CN111046211A (en) 2019-12-25 2019-12-25 Article searching method and electronic equipment

Country Status (1)

Country Link
CN (1) CN111046211A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107084736A (en) * 2017-04-27 2017-08-22 Vivo Mobile Communication Co Ltd A kind of air navigation aid and mobile terminal
CN109961074A (en) * 2017-12-22 2019-07-02 Shenzhen Ubtech Technology Co Ltd A kind of method, robot and computer readable storage medium for searching article
CN108592939A (en) * 2018-07-11 2018-09-28 Vivo Mobile Communication Co Ltd A kind of air navigation aid and terminal

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111625166A (en) * 2020-05-21 2020-09-04 Vivo Mobile Communication Co Ltd Picture display method and device
CN111625166B (en) * 2020-05-21 2021-11-30 Vivo Mobile Communication Co Ltd Picture display method and device
CN112052784A (en) * 2020-09-02 2020-12-08 Tencent Technology (Shenzhen) Co Ltd Article searching method, device, equipment and computer readable storage medium
CN112052784B (en) * 2020-09-02 2024-04-19 Tencent Technology (Shenzhen) Co Ltd Method, device, equipment and computer readable storage medium for searching articles
CN112562284A (en) * 2020-12-04 2021-03-26 Rajax Network Technology (Shanghai) Co Ltd Cabinet lattice door opening prompting method and device of intelligent cabinet and electronic equipment
CN113269828A (en) * 2021-04-25 2021-08-17 Qingdao Haier Air Conditioner General Corp Ltd Article searching method and device, air conditioning equipment and storage medium

Similar Documents

Publication Title
CN111107222B (en) Interface sharing method and electronic equipment
CN110891144B (en) Image display method and electronic equipment
CN110913132B (en) Object tracking method and electronic equipment
CN109543099B (en) Content recommendation method and terminal equipment
CN110752981B (en) Information control method and electronic equipment
CN110908557B (en) Information display method and terminal equipment
CN107707762A (en) A kind of method for operating application program and mobile terminal
CN110069188B (en) Identification display method and terminal equipment
CN111093034B (en) Article searching method and electronic equipment
CN110703972B (en) File control method and electronic equipment
CN110866038A (en) Information recommendation method and terminal equipment
CN111046211A (en) Article searching method and electronic equipment
CN111124223A (en) Application interface switching method and electronic equipment
CN111142724A (en) Display control method and electronic equipment
CN108874906B (en) Information recommendation method and terminal
CN111124231B (en) Picture generation method and electronic equipment
CN110753155A (en) Proximity detection method and terminal equipment
CN110944113B (en) Object display method and electronic equipment
CN110209324B (en) Display method and terminal equipment
CN111143596A (en) Article searching method and electronic equipment
CN111698550A (en) Information display method and device, electronic equipment and medium
CN109669710B (en) Note processing method and terminal
CN108762641B (en) Text editing method and terminal equipment
CN111190515A (en) Shortcut panel operation method, device and readable storage medium
CN109067975B (en) Contact person information management method and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination