CN104317815A - Wearable device capable of finding objects and object finding method - Google Patents


Info

Publication number
CN104317815A
CN104317815A (application CN201410503029.1A)
Authority
CN
China
Prior art keywords
image
object
reference image
wearable device
trigger signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410503029.1A
Other languages
Chinese (zh)
Inventor
徐晓燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inventec Pudong Technology Corp
Inventec Corp
Original Assignee
Inventec Pudong Technology Corp
Inventec Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inventec Pudong Technology Corp, Inventec Corp filed Critical Inventec Pudong Technology Corp
Priority to CN201410503029.1A priority Critical patent/CN104317815A/en
Publication of CN104317815A publication Critical patent/CN104317815A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/50Information retrieval; Database structures therefor; File system structures therefor of still image data

Abstract

The invention relates to a wearable device capable of finding objects and an object-finding method. The method comprises the following steps: the wearable device receives a voice instruction, and a trigger parsing module parses the name of the object to be found from the voice instruction to generate an object-finding trigger signal; a processing module searches for the object image corresponding to the object name according to the trigger signal; the processing module compares the object image with at least one reference image in an image database and judges whether the reference image contains the object image; finally, when a reference image contains the object image, a display module is triggered to display that reference image, wherein the reference image contains a background image.

Description

Wearable device for finding objects and object-finding method
Technical field
The present invention relates to a wearable device for finding objects and an object-finding method, and more particularly to a wearable device and method that find objects by parsing a voice instruction.
Background technology
With the rapid progress of technology, electronic devices such as computers, mobile phones and tablets pervade daily life, and the development of wireless networks has made these devices even more convenient to use. Wearable devices have likewise been developed along with wireless networking. For example, smart bracelets that detect distance traveled, smart shoes that calculate movement distance and calories burned, and Bluetooth earphones that can detect blood-oxygen level have made life considerably more convenient.
However, as lifespans lengthen and people age, memory declines with aging; people often forget where they placed keys, mobile phones, medicine and the like, and must spend considerable time searching. Existing wearable devices cannot provide a function that helps people remember where objects were placed, so the prior art still has room for improvement.
Summary of the invention
In view of the problem that existing wearable devices cannot help a user learn the placement location of a forgotten object, the present invention mainly provides a wearable device for finding objects and an object-finding method. The wearable device receives a voice instruction, parses out the name of the object to be found, compares the corresponding object image with reference images, and informs the user when a reference image is judged to contain the image of the sought object.
Based on the above purpose, the technical means of the present invention provides an object-finding method comprising the following steps: (a) a wearable device receives a voice instruction, and parses the name of an object to be found from the voice instruction to generate an object-finding trigger signal; (b) according to the object-finding trigger signal, the object image corresponding to the object name is searched for; (c) the object image is compared with at least one reference image in an image database, and it is judged whether the at least one reference image in the image database contains the object image; and (d) when the at least one reference image in the image database contains the object image, the at least one reference image is displayed, wherein the at least one reference image contains a background image.
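Steps (a) through (d) can be sketched in Python as follows. All function and field names (e.g. `parse_object_name`, `find_object`, the `"objects"` key) are illustrative assumptions, and image matching is reduced to a placeholder membership test; the patent does not prescribe an implementation.

```python
# Illustrative sketch of claimed steps (a)-(d); names and the simplistic
# matching test are assumptions, not the patent's actual implementation.

def parse_object_name(voice_instruction: str) -> str:
    """Step (a): parse the object name out of a voice instruction."""
    # Hypothetical grammar: "please find the <object>"
    return voice_instruction.rsplit(" ", 1)[-1]

def find_object(voice_instruction, object_images, reference_images):
    name = parse_object_name(voice_instruction)      # (a) trigger signal
    object_image = object_images.get(name)           # (b) look up object image
    if object_image is None:
        return None
    hits = [ref for ref in reference_images          # (c) compare with references
            if object_image in ref["objects"]]
    return hits or None                              # (d) display hits, else notify

object_images = {"pencil": "pencil.png"}
reference_images = [
    {"location": "living room", "objects": ["pencil.png", "tv.png"]},
    {"location": "bedroom", "objects": ["key.png"]},
]
result = find_object("please find the pencil", object_images, reference_images)
```

The returned reference record carries the background context (here a location label), which is what lets the user recognize where the object lies.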
In a preferred embodiment of the above object-finding method, before step (a) the wearable device further receives a first audio trigger signal, parses the object name from the first audio trigger signal, captures the object image of the object according to the first audio trigger signal, establishes an object correspondence list from the object name and the object image, and stores the object correspondence list in the image database. In step (b), the object image corresponding to the object name is then looked up, according to the object-finding trigger signal, in the object correspondence list stored in the image database. In addition, before step (a) the wearable device may receive a second audio trigger signal, parse a location name from the second audio trigger signal, capture at least one location image of the place where the object is located according to the location name, compare the at least one location image with the at least one reference image, and analyze the similarity between the at least one location image and the background image of the at least one reference image; when the similarity between the at least one location image and the background image is greater than a set value, a location correspondence table is established from the at least one location image, the at least one reference image and the location name, and the location correspondence table is stored in the image database.
In a preferred embodiment, when the at least one reference image is displayed in step (d), the location name corresponding to the at least one reference image is further looked up according to the location correspondence table and played back as speech. In addition, the wearable device may be provided with an image capture cycle, and the wearable device captures the at least one reference image according to the image capture cycle for storage in the image database. Furthermore, when the wearable device senses an unknown-object image of an unknown object in the at least one reference image, a sensing signal is sent to a reminder module to trigger the reminder module to issue a prompt.
Based on the above purpose, the technical means of the present invention also provides a wearable device for finding objects, comprising a trigger parsing module, an image database, a processing module and a display module. The trigger parsing module receives a voice instruction, and parses the name of an object to be found from the voice instruction so as to generate and send an object-finding trigger signal. The image database is electrically connected to the trigger parsing module and stores at least one reference image, the at least one reference image containing a background image. The processing module is electrically connected to the trigger parsing module and the image database; it receives the object-finding trigger signal to search for the object image corresponding to the object name, compares the object image with the at least one reference image in the image database, judges whether the at least one reference image in the image database contains the object image, and sends out the at least one reference image when the judgment result is affirmative. The display module is electrically connected to the processing module, receives the at least one reference image, and displays it.
In a preferred embodiment of the above wearable device, the trigger parsing module further receives a first audio trigger signal and parses the object name from it; the wearable device captures the object image of the object according to the first audio trigger signal, establishes an object correspondence list from the object name and the object image, and stores the object correspondence list in the image database; the processing module then looks up the object image corresponding to the object name in the stored object correspondence list according to the object-finding trigger signal. In addition, the processing module is provided with a set value; the trigger parsing module further receives a second audio trigger signal, parses a location name from it, and at least one location image of the place where the object is located is captured according to the location name; the at least one location image is compared with the at least one reference image, and the similarity between the at least one location image and the background image of the at least one reference image is analyzed; when the similarity is greater than the set value, a location correspondence table is established from the at least one location image, the at least one reference image and the location name, and stored in the image database.
In a preferred embodiment, the wearable device further comprises a playback module electrically connected to the processing module. When the processing module judges that the at least one reference image in the image database contains the object image, it uses the location correspondence table stored in the image database to look up the location name corresponding to the at least one reference image and sends the location name to the playback module, so that the location name is played back as speech while the at least one reference image is displayed. In addition, an image capture cycle is provided; when the image capture cycle is reached, the wearable device captures the at least one reference image and stores it in the image database. When the processing module senses an unknown-object image of an unknown object in the at least one reference image, it sends a sensing signal to a reminder module to trigger the reminder module to issue a prompt.
With the wearable device and the object-finding method of the present invention, the user need only issue a voice instruction for the wearable device to parse out the name of the object to be found; when a reference image is judged to contain the image of the sought object, the reference image is further displayed to inform the user. This helps the user find a forgotten object when its location cannot be remembered, and thus brings convenience to the user's daily life.
Specific embodiments of the present invention are further described through the following embodiments and drawings.
Brief description of the drawings
Fig. 1 is a schematic diagram of the wearable device for finding objects of the preferred embodiment of the present invention;
Fig. 2 is a schematic block diagram of the wearable device for finding objects of the preferred embodiment;
Fig. 3 is a schematic diagram of the reference images of the preferred embodiment;
Fig. 4 is a schematic diagram of the object correspondence list of the preferred embodiment;
Fig. 5 is a schematic diagram of the location image of the preferred embodiment;
Fig. 6 is a schematic diagram of the location correspondence table of the preferred embodiment; and
Fig. 7 is a schematic flowchart of the object-finding method of the preferred embodiment.
Description of reference numerals:
1 wearable device for finding objects
11 trigger parsing module
12 image capture module
13 processing module
14 image database
141, 141a reference image
142 object correspondence list
1421a, 1421b, 1421c sub-correspondence relationship
143 location correspondence table
1431a, 1431b sub-location correspondence relationship
15 display module
16 reminder module
17 playback module
18 time recording module
100 location image
S1 voice instruction
S2 object-finding trigger signal
S3 first audio trigger signal
S4 second audio trigger signal
S5 third audio trigger signal
S6 sensing signal
Embodiment
Since the possible embodiments of the wearable device for finding objects and the object-finding method provided by the present invention are too numerous to enumerate, they are not all repeated here; only one preferred embodiment of each is described by way of illustration.
Please refer to Fig. 1 to Fig. 6. Fig. 1 is a schematic diagram of the wearable device for finding objects of the preferred embodiment, Fig. 2 is a schematic block diagram of the wearable device, Fig. 3 is a schematic diagram of the reference images, Fig. 4 is a schematic diagram of the object correspondence list, Fig. 5 is a schematic diagram of the location image, and Fig. 6 is a schematic diagram of the location correspondence table.
As shown in the figures, the wearable device 1 for finding objects (hereinafter referred to as the wearable device) provided by the preferred embodiment is a pair of glasses, but in other embodiments it may be a wearable device such as a watch, bracelet or necklace. The wearable device 1 comprises a trigger parsing module 11, an image capture module 12, a processing module 13, an image database 14, a display module 15, a reminder module 16, a playback module 17 and a time recording module 18.
The trigger parsing module 11 may be realized by a general-purpose processor with parsing capability; the image capture module 12 is, for example, a CCD or CMOS camera lens; the processing module 13 is electrically connected to the trigger parsing module 11 and the image capture module 12, and may for example be a microcontroller (Micro Control Unit, MCU).
The image database 14 is electrically connected to the trigger parsing module 11 and the processing module 13, and may be an erasable programmable read-only memory (EPROM) or a non-volatile memory (NVRAM) with flash memory, but is not limited thereto. The image database 14 stores at least one reference image 141, 141a (only one is indicated in Fig. 2; see also Fig. 3), an object correspondence list 142 and a location correspondence table 143. The reference images 141, 141a each contain a background image, which is defined as the image with the object images excluded, for example an image containing only the outline of a room or a living room. A reference image is an image containing an object together with the background around the object, which helps the user learn the location of the object, for example an image of a living room with a television on the wall, or of a bed on which a key is placed.
The object correspondence list 142 records correspondences between object names and object images. For example, as shown in Fig. 4, the object correspondence list 142 contains three sub-correspondence relationships 1421a, 1421b, 1421c (in other embodiments a sub-correspondence relationship may also be referred to as an object correspondence list): sub-correspondence 1421a is the correspondence between "pencil" and the object image corresponding to the pencil (i.e. the pencil image), sub-correspondence 1421b is the correspondence between "suitcase" and the suitcase image (an object image as defined in the preferred embodiment), and sub-correspondence 1421c is the correspondence between "key" and the key image (likewise an object image as defined in the preferred embodiment).
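The object correspondence list of Fig. 4 is essentially a name-to-image mapping. A minimal sketch, assuming a plain dictionary representation (the patent does not specify a storage format):

```python
# Minimal model of the object correspondence list 142 (Fig. 4): each
# sub-correspondence maps an object name to its stored object image.
# The string values stand in for actual image data; names are illustrative.
object_correspondence_list = {
    "pencil":   "pencil_image",    # sub-correspondence 1421a
    "suitcase": "suitcase_image",  # sub-correspondence 1421b
    "key":      "key_image",       # sub-correspondence 1421c
}

def lookup_object_image(name: str):
    """Step (b): return the stored object image for a parsed object name."""
    return object_correspondence_list.get(name)
```

A lookup that misses (an object never registered via the first audio trigger signal) returns nothing, which is the case where the device must notify the user instead.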
The location correspondence table 143 records correspondences between the above reference images, location images and location names. For example, as shown in Fig. 6, it contains sub-location correspondences 1431a and 1431b: sub-location correspondence 1431a is the correspondence between "room" and reference image 141, and sub-location correspondence 1431b is the correspondence between "living room" and reference image 141a. How the reference images 141, 141a, the object correspondence list 142 and the location correspondence table 143 are established is described in detail below.
The display module 15 is electrically connected to the processing module 13 and may for example be a transparent display or a display of another type; a transparent display is preferred in the preferred embodiment, serving as the lenses of the glasses. The reminder module 16 is electrically connected to the processing module 13, the playback module 17 is electrically connected to the processing module 13, and the time recording module 18 is electrically connected to the image capture module 12 and the processing module 13. The reminder module 16, playback module 17 and time recording module 18 may each be realized by a general-purpose processor. In general, the trigger parsing module 11, processing module 13, image database 14, reminder module 16, playback module 17 and time recording module 18 may be integrated into a single chip, but are not limited thereto.
To help those skilled in the art better understand the present invention, the operation of each component of the wearable device 1 is described below together with the object-finding method. Please refer to Fig. 1 to Fig. 7; Fig. 7 is a schematic flowchart of the object-finding method of the preferred embodiment. As shown in the figure, the steps of the object-finding method of the preferred embodiment are as follows:
Step S101: a wearable device receives a voice instruction and parses the name of an object to be found from the voice instruction to generate an object-finding trigger signal;
Step S102: according to the object-finding trigger signal, search for the object image corresponding to the object name;
Step S103: judge whether the at least one reference image in the image database contains the object image;
Step S104: display the at least one reference image; and
Step S105: send out an announcement signal to notify the user.
When the user wants to find an object, step S101 is first executed: the wearable device receives a voice instruction and parses the name of the object to be found from the voice instruction to generate an object-finding trigger signal. Specifically, in this step the trigger parsing module 11 of the wearable device receives the voice instruction S1 produced by the user through speech or other means (for example "please find the pencil"), and the trigger parsing module 11 parses the name of the object to be found from the voice instruction S1 to generate the object-finding trigger signal S2; this parsing method belongs to the prior art and is not repeated here.
After step S101, step S102 is executed: according to the object-finding trigger signal, search for the object image corresponding to the object name. In this step, the object-finding trigger signal S2 is sent to the processing module 13, and the processing module 13 searches the image database for the object image corresponding to the object name (the pencil image in this example).
After step S102, step S103 is executed: judge whether the at least one reference image in the image database contains the object image. In this step, the processing module 13 compares the object image with the at least one reference image 141, 141a in the image database 14, and judges whether the reference images 141, 141a in the image database 14 contain the object image.
When the judgment result is affirmative, step S104 is executed to display the at least one reference image. Specifically, because the processing module 13 finds through comparison that reference image 141a contains the object image (i.e. an image of the pencil, at the last cell in the figure), the processing module 13 sends the reference image 141a to the display module 15 (possibly by way of a transmitted signal), so that the display module 15 displays the reference image 141a; after seeing the reference image 141a, the user can determine the position of the pencil (for example, learn that it is in the living room).
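The patent does not specify the comparison algorithm used in step S103. One common approach for "does this reference image contain the object image?" is template matching; a toy sketch over grayscale pixel grids, in pure Python with all details assumed, is:

```python
# Toy template matching: does `patch` (the object image) occur anywhere in
# `image` (a reference image)? Exact-match sliding window; a real device
# would use a tolerant similarity score (e.g. normalized correlation) instead.
def contains(image, patch):
    ih, iw = len(image), len(image[0])
    ph, pw = len(patch), len(patch[0])
    for y in range(ih - ph + 1):
        for x in range(iw - pw + 1):
            if all(image[y + dy][x + dx] == patch[dy][dx]
                   for dy in range(ph) for dx in range(pw)):
                return True
    return False

# A 3x4 "reference image" with a 2x2 "pencil" patch embedded in it.
reference = [
    [0, 0, 0, 0],
    [0, 7, 8, 0],
    [0, 9, 6, 0],
]
pencil = [[7, 8],
          [9, 6]]
```

In practice a library routine such as OpenCV's template matching would replace the nested loops, with a score threshold deciding the yes/no of step S103.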
When the judgment result of step S103 is negative, step S105 is executed to send out an announcement signal to notify the user; the announcement signal may be at least one of vibration, speech and image, letting the user learn that the image database 14 contains no image of the object.
In other embodiments, a step S100a is executed before step S101. Specifically, the purpose of step S100a is to establish the list of object images for the user. The trigger parsing module 11 of the wearable device 1 receives a first audio trigger signal S3 produced by the user through speech or other means. For example, when the user says "pencil" (and may at the same time point the image capture module 12 of the wearable device 1 toward the pencil), the trigger parsing module 11 parses the name of the object (pencil) from the first audio trigger signal S3, while the image capture module 12 captures the object image of the object according to the first audio trigger signal (as illustrated in Fig. 4); the processing module 13 then establishes the object correspondence list 142 (i.e. the sub-correspondences 1421a, 1421b, 1421c shown in Fig. 4) from the object name and the object image, and stores the object correspondence list in the image database 14.
After step S100a, a step S100b may be executed. Specifically, the purpose of step S100b is to establish the location correspondence table 143 so that the user can learn the location where an object is placed and its location name. Similarly, the trigger parsing module 11 of the wearable device 1 receives a second audio trigger signal S4 produced by the user through speech or other means. For example, when the user says "room" (and may at the same time point the image capture module 12 of the wearable device 1 toward the room), the trigger parsing module 11 parses the received audio trigger signal "room" and identifies it as a location name, and the image capture module 12 captures at least one location image 100 of the place where the object is located according to the location name (for example, the image of a bed in the room). The processing module 13 then compares the location image 100 with the at least one reference image 141, and analyzes the similarity between the location image 100 and the background image of the reference image 141. When the similarity between the location image 100 and the background image is greater than a set value defaulted by the processing module (comparing the location image of Fig. 5 with the reference image 141 of Fig. 3, only the objects differ while the backgrounds are close, so the similarity is judged to be greater than the set value; the set value may be defined as a number of dissimilar objects or another image-analysis parameter), the processing module establishes a sub-location correspondence 1431a from the location image 100, the reference image 141 and the location name (room) to form the location correspondence table 143, and stores the location correspondence table 143 in the image database 14. The sub-location correspondence 1431b in the location correspondence table 143 is established in the same way as 1431a and is not repeated here.
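Step S100b hinges on a background-similarity test against a set value. A sketch using the fraction of matching pixels as the similarity measure (the actual metric and threshold are not specified by the patent; `SET_VALUE` and all names are assumptions):

```python
# Fraction of equal pixels between two equally-sized grayscale grids, used
# as a stand-in for the unspecified similarity measure of step S100b.
def similarity(a, b):
    total = sum(len(row) for row in a)
    same = sum(1 for ra, rb in zip(a, b)
                 for pa, pb in zip(ra, rb) if pa == pb)
    return same / total

SET_VALUE = 0.8  # hypothetical default threshold of the processing module

def maybe_add_location(location_name, location_image, reference_image, table):
    """Add a sub-location correspondence only when backgrounds are close enough."""
    if similarity(location_image, reference_image) > SET_VALUE:
        table[location_name] = reference_image
        return True
    return False

# Same background (the 1s), only the center object differs -- as with
# Fig. 5 vs. Fig. 3, similarity exceeds the set value.
reference_141 = [[1, 1, 1], [1, 5, 1], [1, 1, 1]]
location_100  = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]
table = {}
```

A stricter metric (e.g. counting dissimilar objects after segmentation, as the text suggests) would slot in behind the same threshold comparison.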
It is worth mentioning that a step S100c may be executed before step S100a. The purpose of step S100c is to establish the reference images 141, 141a for the user. Similarly, the trigger parsing module 11 of the wearable device 1 receives a third audio trigger signal S5 produced by the user through speech or other means. For example, when the user is in the room and says "take a picture" (and may at the same time point the image capture module 12 of the wearable device 1 toward the room), the trigger parsing module 11 parses the third audio trigger signal S5 and triggers the image capture module 12 to capture an image, forming the reference images 141, 141a; such a reference image may contain multiple object images and a background image.
It should be noted that the order of steps S100a, S100b and S100c may be S100c → S100a → S100b, S100a → S100c → S100b, or S100a → S100b → S100c, depending on the design in practice.
When steps S100a, S100b and S100c have been executed, in step S102 the processing module 13 further looks up the object image corresponding to the object name in the object correspondence list 142 stored in the image database 14, according to the object-finding trigger signal S2. In step S104, the processing module 13 uses the location correspondence table 143 stored in the image database 14 to look up the location name corresponding to the reference image 141a, and sends the location name to the playback module 17 (possibly by way of a transmitted signal), so that while the reference image 141a is displayed, the playback module 17 further plays back the location name as speech (i.e. speech playback of "living room").
In addition, in the preferred embodiment the processing module 13 of the wearable device 1 may be provided with an image capture cycle; according to the image capture cycle, the processing module 13 triggers the image capture module 12 to capture the at least one reference image 141, 141a for storage in the image database 14. That is, in step S100c above, the capture of reference images 141, 141a may be triggered by the user, or may be triggered automatically at regular intervals so as to automatically update the reference images in the image database 14, depending on the design in practice.
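Periodic capture according to the image capture cycle can be sketched with a simple elapsed-time check; real firmware would use a hardware timer, and all names here are illustrative.

```python
# Capture a new reference image whenever at least `cycle_seconds` have
# elapsed since the previous capture; earlier ticks are skipped.
class PeriodicCapturer:
    def __init__(self, cycle_seconds):
        self.cycle = cycle_seconds
        self.last_capture = None
        self.database = []   # stands in for the image database 14

    def tick(self, now, capture_fn):
        if self.last_capture is None or now - self.last_capture >= self.cycle:
            self.database.append(capture_fn())
            self.last_capture = now
            return True
        return False

cap = PeriodicCapturer(cycle_seconds=60)
cap.tick(0, lambda: "ref_0")     # first tick always captures
cap.tick(30, lambda: "ref_30")   # too soon, skipped
cap.tick(60, lambda: "ref_60")   # cycle reached, captured
```

The same tick loop can coexist with user-triggered capture (step S100c): a voice command simply calls `capture_fn` directly and resets `last_capture`.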
In addition, when the user allows automatic object sensing on the reference images, and the processing module 13 of the wearable device 1 senses an unknown-object image of an unknown object in the reference images 141, 141a (for example the unknown object is a mobile phone and the image of the mobile phone is sensed), it sends a sensing signal S6 to the reminder module 16 to trigger the reminder module 16 to issue a prompt; the prompt may be at least one of vibration, speech and image, letting the user learn that this object image is not yet present among the reference images 141, 141a, so that the user may trigger the image capture module 12 to capture the unknown-object image of the unknown object and store it in the image database 14 for the processing module 13 (i.e. the user triggers execution of step S100a). In other embodiments, besides prompting the user to perform the capture, the capture of the unknown-object image may also be set to be triggered automatically, depending on the practical application.
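The unknown-object reminder can be sketched as a set-difference check between the object images detected in a reference image and those already registered in the object correspondence list; the detection step itself and the reminder callback are assumed to exist.

```python
# Emit a sensing signal for every detected object image that is not yet in
# the object correspondence list; `remind` stands in for reminder module 16.
def check_unknown_objects(detected, correspondence_list, remind):
    known = set(correspondence_list.values())
    unknown = [img for img in detected if img not in known]
    for img in unknown:
        remind(img)   # sensing signal S6 -> reminder module 16
    return unknown

prompts = []
correspondence_list = {"pencil": "pencil_image", "key": "key_image"}
unknown = check_unknown_objects(
    ["pencil_image", "phone_image"], correspondence_list, prompts.append)
```

Each prompted object can then be registered through step S100a, after which it no longer appears in the difference on subsequent scans.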
In addition, in the preferred embodiment, when the image capture module 12 captures the reference images 141, 141a, an object image or the location image 100, the time recording module 18 is concurrently triggered to record the capture time of the image (for example, the capture times of the reference image 141 of Fig. 3 and the location image 100 differ, one being 6 o'clock and the other 3 o'clock). Thus, the reference images 141, 141a stored in the image database 14, the sub-correspondences 1421a, 1421b, 1421c in the object correspondence list 142, and the sub-location correspondences 1431a, 1431b in the location correspondence table 143 may each correspond to an image capture time. Accordingly, in step S104, if there are many reference images, the device may be set to display the reference image whose capture time is closest to the current time, or to display only reference images captured within several days of the current time.
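Selecting the reference image whose capture time is closest to now, or restricting display to the last few days, reduces to a min/filter over the recorded timestamps. A sketch with timestamps in seconds (record layout assumed):

```python
# Pick reference images by capture time as recorded by time recording module 18.
def closest_reference(references, now):
    """Reference image whose capture time is closest to `now`."""
    return min(references, key=lambda r: abs(now - r["captured_at"]))

def recent_references(references, now, max_age_seconds):
    """Only reference images captured within the given window before `now`."""
    return [r for r in references if now - r["captured_at"] <= max_age_seconds]

refs = [
    {"name": "141",  "captured_at": 3 * 3600},   # captured at 3 o'clock
    {"name": "141a", "captured_at": 6 * 3600},   # captured at 6 o'clock
]
now = 7 * 3600
```

With a two-hour window only the 6 o'clock image qualifies, matching the "closest to the current time" display rule of step S104.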
In summary, with the wearable device and the object-finding method of the present invention, the user need only issue a voice instruction for the wearable device to parse out the name of the object to be found; when a reference image is judged to contain the image of the sought object, the reference image is further displayed to inform the user. This truly helps the user find an object whose location has been forgotten, and brings convenience to the user's daily life.
The detailed description of the preferred embodiments above is intended to describe the features and spirit of the present invention more clearly, and the scope of the present invention is not limited by the preferred embodiments disclosed above. On the contrary, the intention is to cover various changes and equivalent arrangements within the scope of the claims applied for by the present invention.

Claims (12)

1. A method for finding an object, comprising the following steps:
(a) a wearable device receiving a voice instruction, and parsing an object name of an object to be found in the voice instruction to generate an object-finding trigger signal;
(b) according to the object-finding trigger signal, searching for an object image corresponding to the object name;
(c) comparing the object image with at least one reference image in an image database, and judging whether the at least one reference image in the image database contains the object image; and
(d) when the at least one reference image in the image database contains the object image, displaying the at least one reference image, wherein the at least one reference image contains a background image.
2. The method for finding an object as claimed in claim 1, characterized in that, before step (a), the method further comprises: the wearable device receiving a first audio trigger signal, parsing the object name of the object in the first audio trigger signal, capturing the object image of the object according to the first audio trigger signal, establishing an object correspondence list from the object name and the object image, and storing the object correspondence list in the image database.
3. The method for finding an object as claimed in claim 2, characterized in that, in step (b), the object image corresponding to the object name is searched for, according to the object-finding trigger signal, in the object correspondence list stored in the image database.
4. The method for finding an object as claimed in claim 1, characterized in that, before step (a), the method further comprises: the wearable device receiving a second audio trigger signal, parsing a location name in the second audio trigger signal, capturing at least one position image of the location where the object is placed, comparing the at least one position image with the at least one reference image, and analyzing the similarity between the at least one position image and the background image of the at least one reference image; and, when the similarity between the at least one position image and the background image is greater than a setting value, associating the at least one reference image with the location name according to the at least one position image to establish a position correspondence table, and storing the position correspondence table in the image database.
5. The method for finding an object as claimed in claim 4, characterized in that, in step (d), when displaying the at least one reference image, the location name corresponding to the at least one reference image is further looked up according to the position correspondence table and played back by voice.
6. The method for finding an object as claimed in claim 1, characterized in that the method further comprises: setting an image capture cycle in the wearable device, the wearable device capturing the at least one reference image according to the image capture cycle and storing it in the image database.
7. A wearable device for finding an object, comprising:
a trigger parsing module, for receiving a voice instruction and parsing an object name of an object to be found in the voice instruction to generate an object-finding trigger signal;
an image database, electrically connected to the trigger parsing module and storing at least one reference image, the at least one reference image containing a background image;
a processing module, electrically connected to the trigger parsing module and the image database, for searching for an object image corresponding to the object name according to the object-finding trigger signal, comparing the object image with the at least one reference image in the image database, judging whether the at least one reference image in the image database contains the object image, and sending out the at least one reference image when the judgment result is positive; and
a display module, electrically connected to the processing module, for receiving and displaying the at least one reference image.
8. The wearable device for finding an object as claimed in claim 7, characterized in that the trigger parsing module is further for receiving a first audio trigger signal and parsing the object name of the object in the first audio trigger signal, and the wearable device captures the object image of the object according to the first audio trigger signal, establishes an object correspondence list from the object name and the object image, and stores the object correspondence list in the image database.
9. The wearable device for finding an object as claimed in claim 8, characterized in that the processing module searches for the object image corresponding to the object name, according to the object-finding trigger signal, in the object correspondence list stored in the image database.
10. The wearable device for finding an object as claimed in claim 7, characterized in that the processing module is provided with a setting value, and the trigger parsing module is further for receiving a second audio trigger signal and parsing a location name in the second audio trigger signal; the wearable device captures at least one position image of the location where the object is placed, compares the at least one position image with the at least one reference image, and analyzes the similarity between the at least one position image and the background image of the at least one reference image; and, when the similarity between the at least one position image and the background image is greater than the setting value, the wearable device associates the at least one reference image with the location name according to the at least one position image to establish a position correspondence table, and stores the position correspondence table in the image database.
11. The wearable device for finding an object as claimed in claim 10, characterized in that the device further comprises a playback module electrically connected to the processing module; when the processing module judges that the at least one reference image in the image database contains the object image, the processing module looks up the location name corresponding to the at least one reference image using the position correspondence table stored in the image database and sends the location name to the playback module, so that when the at least one reference image is displayed, the location name is further played back by voice.
12. The wearable device for finding an object as claimed in claim 7, characterized in that the device is further provided with an image capture cycle; when the image capture cycle is reached, the wearable device captures the at least one reference image and stores the at least one reference image in the image database.
CN201410503029.1A 2014-09-26 2014-09-26 Wearable device capable of finding objects and object finding method Pending CN104317815A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410503029.1A CN104317815A (en) 2014-09-26 2014-09-26 Wearable device capable of finding objects and object finding method


Publications (1)

Publication Number Publication Date
CN104317815A true CN104317815A (en) 2015-01-28

Family

ID=52373047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410503029.1A Pending CN104317815A (en) 2014-09-26 2014-09-26 Wearable device capable of finding objects and object finding method

Country Status (1)

Country Link
CN (1) CN104317815A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093333A (en) * 2011-11-04 2013-05-08 英业达股份有限公司 Life reminding method

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016184104A1 (en) * 2015-05-18 2016-11-24 小米科技有限责任公司 Method and apparatus for identifying object
JP2017526089A (en) * 2015-05-18 2017-09-07 小米科技有限責任公司Xiaomi Inc. Object identification method, apparatus, program, and recording medium
CN105652826A (en) * 2015-07-31 2016-06-08 宇龙计算机通信科技(深圳)有限公司 Intelligent household control method, controller, mobile terminal and system thereof
CN107133573A (en) * 2017-04-12 2017-09-05 宇龙计算机通信科技(深圳)有限公司 A kind of method and apparatus for finding article
CN107977625A (en) * 2017-11-30 2018-05-01 速感科技(北京)有限公司 A kind of intelligent movable equipment looked for something and intelligent method of looking for something
CN108009583A (en) * 2017-11-30 2018-05-08 速感科技(北京)有限公司 A kind of intelligent movable equipment looked for something and intelligent method of looking for something
CN109029449A (en) * 2018-06-29 2018-12-18 英华达(上海)科技有限公司 It looks for something method, device for searching article and system of looking for something
CN109029449B (en) * 2018-06-29 2020-09-29 英华达(上海)科技有限公司 Object searching method, object searching device and object searching system

Similar Documents

Publication Publication Date Title
CN104317815A (en) Wearable device capable of finding objects and object finding method
US10967520B1 (en) Multimodal object identification
CN110471858B (en) Application program testing method, device and storage medium
KR101367964B1 (en) Method for recognizing user-context by using mutimodal sensors
CN110162770A (en) A kind of word extended method, device, equipment and medium
US20120050530A1 (en) Use camera to augment input for portable electronic device
WO2015042270A1 (en) Using sensor inputs from a computing device to determine search query
KR20140028540A (en) Display device and speech search method thereof
US20150161249A1 (en) Finding personal meaning in unstructured user data
CN104123937A (en) Method, device and system for reminding setting
US10169702B2 (en) Method for searching relevant images via active learning, electronic device using the same
US20150161236A1 (en) Recording context for conducting searches
RU2012141988A (en) AUTOMATIC RECOGNITION AND RECORDING
CN111968635B (en) Speech recognition method, device and storage medium
CN109756770A (en) Video display process realizes word or the re-reading method and electronic equipment of sentence
US11030994B2 (en) Selective activation of smaller resource footprint automatic speech recognition engines by predicting a domain topic based on a time since a previous communication
WO2017167088A1 (en) A user relationship based multimedia recommendation method and apparatus
CN111611490A (en) Resource searching method, device, equipment and storage medium
CN104239445A (en) Method and device for representing search results
CN104715753A (en) Data processing method and electronic device
CN110334271A (en) A kind of search result optimization method, system, electronic equipment and storage medium
KR20180121273A (en) Method for outputting content corresponding to object and electronic device thereof
JP2022503255A (en) Voice information processing methods, devices, programs and recording media
CN113596601A (en) Video picture positioning method, related device, equipment and storage medium
CN113505256B (en) Feature extraction network training method, image processing method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150128