CN112686166B - Lost article detection and prompt method based on limited source data


Info

Publication number
CN112686166B
Authority
CN
China
Prior art keywords
user
article
data
bracelet
prompt
Prior art date
Legal status
Active
Application number
CN202011629138.XA
Other languages
Chinese (zh)
Other versions
CN112686166A (en)
Inventor
敖邦乾
梁定勇
敖帮桃
Current Assignee
Zunyi Normal University
Original Assignee
Zunyi Normal University
Priority date
Filing date
Publication date
Application filed by Zunyi Normal University
Priority to CN202011629138.XA
Publication of CN112686166A
Application granted
Publication of CN112686166B
Legal status: Active
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention relates to the technical field of neural networks, and in particular to a lost article detection and prompting method based on limited source data, which comprises the following steps: a fixed-point monitoring step: detecting through a camera whether a preset target appears in a monitoring area and, if so, reminding the user; a collection step: collecting the user's gesture actions through a bracelet worn by the user; a comparison step: comparing each gesture action with the gesture actions in a database to obtain the corresponding operation action; a judging step: judging whether the operation action is one of placing an article and, if so, recording the user's position and the time at which the gesture action was performed as prompt point data; a viewing step: if the user has lost an article, the prompt point data can be viewed through the bracelet. The scheme detects the different characteristics of living creatures and of articles, and thereby helps the user find what has been lost.

Description

Lost article detection and prompt method based on limited source data
Technical Field
The invention relates to the technical field of neural networks, in particular to a method for detecting and prompting a lost object based on limited source data.
Background
In daily life people often lose things, and what is lost may be an article or a live pet. An article frequently cannot be found simply because the user has forgotten where it was placed. Some electronic products can be found in specific ways, for example a mobile phone can be located by calling it or through positioning, but searching for a non-electronic article relies entirely on the user recalling where it was put. The elderly and children in particular easily forget where articles were placed and find it hard to recall. As for pets, a pet may be unable to find its way home because of its limited intelligence, yet it is likely to appear in the places where it frequently plays.
In view of this, a lost article detection and prompting method based on limited source data is provided to help the user find lost articles.
Disclosure of Invention
The invention aims to provide a lost article detection and prompt method based on limited source data, which can help a user to find a lost article.
The basic scheme provided by the invention is as follows: the method for detecting and prompting the lost object based on the limited source data comprises the following steps:
a fixed-point monitoring step: installing a camera at a fixed point, detecting whether a preset target appears in a monitoring area through the camera, and if so, sending prompt information and target position information to a user;
a collection step: collecting gesture actions of the user through a bracelet worn by the user;
a comparison step: comparing the gesture action with gesture actions in a database to obtain an operation action corresponding to the gesture action;
a judging step: judging whether the type of the operation action is the operation action for placing the article, and if the type of the operation action is the operation action for placing the article, recording the position and time of the user when the gesture action is implemented as prompt point data;
a viewing step: if the user has lost an article, the prompt point data can be viewed through the bracelet.
Compared with the prior art, the scheme has the following advantages: 1. The fixed-point monitoring step is aimed mainly at living creatures such as pets. The pet is set as the target and cameras are installed where the pet is often walked or where it is likely to appear. If the pet is unfortunately lost and passes a place where a camera is arranged, the camera detects that the preset target has appeared in the monitoring area and sends prompt information and target position information to the user, so the user can find the lost pet at the corresponding position in time.
2. The bracelet worn by the user collects the user's gesture actions; each collected gesture action is compared with the gesture actions in a database to obtain the corresponding operation action; if the operation action is one of placing an article, the user's position and the time at which the gesture was performed are recorded as prompt point data. The articles the user places every day are thus recorded, and if the user loses an article, looking up the prompt point data reminds the user where it was placed, narrowing the search range and allowing the lost article to be found more quickly. This matters especially for non-electronic products, which have no positioning function: without this scheme a lost non-electronic article can only be recovered by the user remembering where it was placed or searching everywhere as if looking for a needle in the sea, whereas with this scheme the user can recall the placing position from the bracelet's records and avoid an aimless search.
3. Different recording detection methods are adopted for the living creatures and the articles, so that different characteristics of the living creatures and the articles can be better detected, and the user can be helped to find out the lost living creatures and articles.
Further, the fixed-point monitoring step includes the following steps:
the camera adopts a network camera, a detection frame is arranged in an area monitored by the network camera, and a shooting area is planned according to the wide angle and image pixels of the network camera;
if the set target is detected in the image, the network camera tracks the target and, according to the network model, returns the pixel coordinates of the four corners of the minimum bounding box that frames the target; the target's central coordinate pixel is then calculated from these four corner values;
when the central coordinate pixel belongs to the range of the detection frame, prompt information and target position information are sent to a user.
Beneficial effects: the fixed-point monitoring step detects a lost pet well; the user can install network cameras at the places where the pet is often walked or where it is likely to appear, and several network cameras can be used at the same time.
Further, the network model is designed by adopting a VGG-16 model as the basic network model, analyzing and retaining the neural nodes whose weights with respect to the preset target exceed a preset threshold, removing the other nodes, and initializing the parameters of the network model with the trained basic network model.
Beneficial effects: the network model adopts a VGG-16 model as the basic network model; the neural nodes whose weights with respect to the preset target exceed a preset threshold are analyzed and retained, the other nodes are removed, and the parameters of the target network are initialized with the trained model. This accelerates and optimizes the learning of the network model: there is no need to train a newly designed network from scratch, a good initial value of the network model is obtained, and, especially when not enough labelled data are available, the loss of accuracy caused by data scarcity is greatly mitigated.
Further, the following step is included between the judging step and the viewing step:
an input step: a user inputs a name of an article to be placed, and the name of the article is stored in corresponding prompt point data;
Beneficial effects: after an article is lost, when the prompt point data are viewed through the bracelet the user can see clearly which article corresponds to each position and time in the prompt point data, which helps the user quickly find the lost article being sought.
Further, the viewing step further comprises: when the user views the prompt point data through the bracelet, a display mode for the prompt point information can be selected, the display modes including a time display mode and an article display mode; in the time display mode the records are ordered by the interval between the recorded time of the gesture action and the current time, from long to short or from short to long; in the article display mode the records are ordered by the initial letter of the article name in each prompt point record.
Beneficial effects: when viewing the prompt point data through the bracelet, the user can choose between the two display modes according to need and thus find the required prompt point data more quickly. If an article has been lost accidentally and its name was not entered when it was placed, the time display mode can be selected to browse the prompt point data and help the user remember where the lost article was put. If the user knows what the lost article is and entered its name when placing it, the article display mode can be selected to find the position of the lost article quickly.
Further, the viewing step further comprises: when a user checks the prompt point data through the bracelet, the user inputs the name of an article to search the corresponding prompt point data.
Beneficial effects: when searching the prompt point data, the user can enter the article name directly, which is more convenient and faster.
Further, the method for detecting and prompting the lost object based on the limited source data further comprises the following steps:
an article recording step: a user sets prompt time through a bracelet and inputs the name of an article to be carried;
a prompting step: if the current time reaches the prompting time, the bracelet shows the names of the articles the user needs to carry and asks the user to confirm whether each has been taken; if any article is not confirmed by the user, its prompt point data are searched for and displayed.
Beneficial effects: the user is prompted to take the articles that need to be carried, which spares the user unnecessary trouble. If the current time reaches the prompting time, the bracelet shows the names of the articles the user needs to carry and asks the user to confirm each one; for any article the user has not confirmed, its prompt point data are searched for and displayed. This helps a user who is about to go out and reduces the time spent looking up the prompt point data of the article that was not taken.
Further, the method for detecting and prompting the lost object based on the limited source data further comprises the following steps:
a motion checking step: detecting whether the motion state data of the user exceeds a preset threshold value, if so, judging that the user is moving, and recording the motion track of the user, wherein the motion state data comprises bracelet acceleration data, bracelet vibration data and user heart rate;
a motion trail query step: the user can view the motion trail through the bracelet.
Beneficial effects: while exercising, the user may shake off articles carried on the body without noticing. The scheme therefore detects whether the user's motion state data exceed the preset thresholds, judges that the user is exercising if they do, and records the user's motion track; if the user later finds an article missing and suspects it was lost during exercise, the recorded motion track can be retraced to search for it.
Further, the method for detecting and prompting the lost object based on the limited source data further comprises the following steps:
and data deletion step: and detecting whether the prompt point data of the name of the same article in the prompt point data exceeds three, if so, keeping three data with the time closest to the current time in the prompt point data, and deleting the rest data.
Beneficial effects: a user often places the same article repeatedly, and every placement would otherwise be recorded, which requires considerable storage and makes the prompt point data harder to search. Only the three records whose times are closest to the current time are therefore kept and the rest are deleted, freeing memory while keeping the stored records up to date.
Drawings
FIG. 1 is a flowchart of the lost article detection and prompting method based on limited source data according to an embodiment;
FIG. 2 is a schematic diagram of the VGG-16 model in an embodiment of the lost article detection and prompting method based on limited source data;
FIG. 3 is a schematic diagram of freezing and fine-tuning the VGG-16 model in an embodiment of the lost article detection and prompting method based on limited source data;
FIG. 4 compares the model accuracy and loss function before and after transfer learning in the lost article detection and prompting method based on limited source data;
FIG. 5 is a schematic diagram of real-time detection of a lost pet based on limited source data.
Detailed Description
An embodiment substantially as shown in figure 1: the method for detecting and prompting the lost object based on the limited source data comprises the following steps:
a fixed-point monitoring step: the method comprises the steps that a camera is installed at a fixed point, whether a preset target appears in a monitoring area or not is detected through the camera, and if the preset target appears, prompt information and target position information are sent to a user;
the camera adopts a network camera, a detection frame is arranged in an area monitored by the network camera, and a shooting area is planned according to the wide angle and image pixels of the network camera; if the characteristic target is detected in the image, the network camera carries out on the target, and simultaneously returns four pixel values of four corners of a frame with the minimum frame selection target according to the network model, and the central coordinate pixel of the target is calculated according to the four pixel values of the four corners of the frame; when the central coordinate pixel belongs to the range of the detection frame, prompt information and target position information are sent to a user.
The network model is designed by adopting a VGG-16 model as the basic network model, analyzing and retaining the neural nodes whose weights with respect to the preset target exceed a preset threshold, removing the other nodes, and initializing the parameters of the network model with the trained basic network model.
In this embodiment, a network model is designed with a pet cat as a predetermined target, and the specific contents are as follows:
step 1, selecting a VGG-16 model as a basic network model, wherein the first few layers only extract some simple picture basic features such as points, lines, diagonals and the like, and basically have no influence on the subsequent classification, so that the layers from 'conv-64' to 'conv-256' and the subsequent 'maxpool' are frozen, weight updating of related nodes in the layers during training is avoided by setting layer.
Step 2, a picture set of relevant cats is collected or downloaded, and the VGG-16 model is compiled and trained; the training parameters are shown in the following table:
(Table of training parameters; the table image from the original patent is not reproduced in this text record.)
In the TensorFlow framework, the trained weight table 'WEIGHTS_PATH_NO_TOP' is looked up, with attention paid to the nodes with particularly large or particularly small weight values;
step 3, unsealing the convolution layer frozen by Step 1, finely adjusting the VGG-16 model, reserving nodes with larger weight values related to the cat, and removing some nodes with obvious incoherence or smaller weight values, as shown in figure 3, further slimming the VGG-16 model;
step 4, using the unique data provided by the user, namely, photos or videos that many owners have pets, as a data source, a new data set is constructed, and the new model is compiled and trained in the new model by using the data set, so that the output of the new model is only a special class provided by the user, but not a general class. The precision and loss function of the test before and after the model is fine-tuned, as shown in fig. 4, as can be seen from fig. 3, the initially used data set can make the accuracy of the model higher and higher, and the loss function becomes smaller and smaller slowly, after the fine tuning of the scheme is used, because the new data set is used and part of nodes are deleted, the accuracy of the test on a single target is obviously reduced at the beginning, and the loss function is larger, but along with the continuation of the training, the accuracy of the test on the single target is greatly improved, the accuracy of the final detection can reach 98.3%, and meanwhile, the loss function can be controlled below 5%, so that the identification requirement can be completely met.
A collection step: the user's gesture actions are collected through a bracelet worn by the user. An inertial sensor is provided in the bracelet; the user wears the bracelet on the hand, the bracelet moves with the user's hand movements, and the inertial sensor in the bracelet collects the information produced when the bracelet moves.
A comparison step: the gesture action is compared with the gesture actions in a database to obtain the corresponding operation action. The database is a cloud database; the collected information is compared with the information contained in each gesture action stored in the cloud database, and the operation action corresponding to the user's gesture action is thereby determined.
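The patent does not specify how the gesture action is matched against the database. As an illustrative sketch only, the following assumes each gesture is reduced to a fixed-length feature vector computed from the inertial-sensor readings and matched to the nearest stored template; the template values, feature layout and threshold are invented for illustration.

```python
# Minimal sketch of the comparison step: nearest-template matching of an
# inertial-sensor feature vector against gesture templates (assumed format).
import math

GESTURE_TEMPLATES = {
    # operation action -> representative feature vector (placeholder values)
    "place_article":   [0.12, 0.80, 0.35, 0.05],
    "pick_up_article": [0.60, 0.20, 0.75, 0.40],
    "wave_hand":       [0.90, 0.10, 0.10, 0.85],
}
MATCH_THRESHOLD = 0.5  # assumed maximum distance for a valid match

def classify_gesture(features):
    """Return the operation action whose template is closest to the collected
    features, or None if no template is close enough."""
    best_action, best_dist = None, float("inf")
    for action, template in GESTURE_TEMPLATES.items():
        dist = math.dist(features, template)
        if dist < best_dist:
            best_action, best_dist = action, dist
    return best_action if best_dist <= MATCH_THRESHOLD else None

# Example: a collected feature vector resembling the "place_article" template.
print(classify_gesture([0.15, 0.78, 0.33, 0.07]))  # -> "place_article"
```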
A judging step: and judging whether the type of the operation action is the operation action for placing the article, and if the type of the operation action is the operation action for placing the article, recording the position and time of the user when the gesture action is implemented as prompt point data.
An input step: a user inputs the name of an article through a display screen of the bracelet, and the name of the article is stored in corresponding prompt point data; i.e. the name of the item, the location and time of the user when the gesture action was performed are stored as a set of data.
A viewing step: if the user has lost an article, the prompt point data can be viewed through the bracelet. When the user views the prompt point data through the bracelet, a display mode for the prompt point information can be selected, the display modes including a time display mode and an article display mode. The time display mode orders the records by the interval between the recorded time of the gesture action and the current time, and the user can choose to display the intervals from long to short or from short to long. The article display mode orders the records by the first letter of the article name in each prompt point record. When viewing the prompt point data through the bracelet, the user can also enter an article name to search for the corresponding prompt point data.
A data deletion step: whether more than three prompt point records exist for the same article name is detected; if so, the three records whose times are closest to the current time are kept and the rest are deleted.
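The following is an illustrative sketch, not the patent's implementation, of how the prompt point data handled by the input, viewing and data deletion steps could be stored and queried; the record fields and the in-memory list are assumptions.

```python
# Minimal sketch of a prompt point store: record, prune to three per article,
# view by time or by article name, and search by article name.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class PromptPoint:
    item_name: str        # entered by the user on the bracelet display
    location: str         # position when the placing gesture was detected
    timestamp: datetime   # time when the placing gesture was detected

prompt_points: List[PromptPoint] = []

def record_prompt_point(item_name: str, location: str, when: datetime) -> None:
    """Judging + input steps: store a new record, then keep at most the three
    most recent records per item name (the data deletion step)."""
    prompt_points.append(PromptPoint(item_name, location, when))
    same_item = sorted((p for p in prompt_points if p.item_name == item_name),
                       key=lambda p: p.timestamp, reverse=True)
    for stale in same_item[3:]:
        prompt_points.remove(stale)

def view(mode: str = "time", newest_first: bool = True) -> List[PromptPoint]:
    """Viewing step: 'time' orders by how recent the record is,
    'article' orders alphabetically by item name."""
    if mode == "time":
        return sorted(prompt_points, key=lambda p: p.timestamp, reverse=newest_first)
    return sorted(prompt_points, key=lambda p: p.item_name.lower())

def search(item_name: str) -> List[PromptPoint]:
    """Viewing step: look up the prompt point data for a named article."""
    return [p for p in prompt_points if p.item_name == item_name]
```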
The specific use is as follows. For living creatures, a pet is taken as an example:
firstly, the network cameras are installed in places where pets are frequently taken for a walk or some places where pets are likely to appear, and a plurality of network cameras can be used at the same time;
then, designing a detection frame (Detect box) in the area monitored by the network camera, as shown in fig. 5, covering as many ground areas as possible according to the wide angle and picture pixels of the network camera to increase the detection range;
then, when the network detects the set target, the target is tracked and the network model returns the four pixel coordinates of the corners of the bounding box: (x_min, y_min), (x_min, y_max), (x_max, y_min), (x_max, y_max). From these the central coordinate pixel (x, y) of the target is calculated as:

x = (x_min + x_max) / 2
y = (y_min + y_max) / 2
and finally, when (x, y) appears within the range of the detection frame, the detection warning function is triggered and prompt information together with the target position information is sent over the network to the user's mobile phone through an API provided by Twilio, so that the user can promptly find the lost pet at the corresponding position.
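As an illustration, the sketch below computes the central coordinate pixel from the returned bounding box, checks whether it falls within the detection frame, and pushes the alert with the Twilio client library. The detection frame coordinates, account credentials and phone numbers are placeholders; the patent states only that a Twilio API is used to send the prompt to the user's phone.

```python
# Minimal sketch of the detection-alert logic described above.
from twilio.rest import Client

DETECT_BOX = (100, 200, 1180, 700)  # (x1, y1, x2, y2) of the detect box, assumed
TWILIO_SID, TWILIO_TOKEN = "ACxxxxxxxx", "your_auth_token"        # placeholders
FROM_NUMBER, USER_NUMBER = "+10000000000", "+10000000001"         # placeholders

def bbox_center(x_min, y_min, x_max, y_max):
    """Central coordinate pixel of the minimum bounding box returned by the model."""
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

def inside_detect_box(x, y):
    x1, y1, x2, y2 = DETECT_BOX
    return x1 <= x <= x2 and y1 <= y <= y2

def alert_if_detected(x_min, y_min, x_max, y_max, camera_location):
    """Send prompt information and target position to the user's phone when the
    target's center falls inside the detection frame."""
    x, y = bbox_center(x_min, y_min, x_max, y_max)
    if inside_detect_box(x, y):
        client = Client(TWILIO_SID, TWILIO_TOKEN)
        client.messages.create(
            to=USER_NUMBER, from_=FROM_NUMBER,
            body=f"Target detected near {camera_location} at pixel ({x:.0f}, {y:.0f}).")
```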
Aiming at the loss of the object class:
firstly, a user wears a bracelet, and in daily life, gesture actions of the user are collected through the bracelet; then comparing the gesture action with gesture actions in a database to obtain an operation action corresponding to the gesture action, judging whether the type of the operation action is the operation action for placing an article, and if the type of the operation action is the operation action for placing the article, recording the position and time of a user when the gesture action is implemented as cue point data; then, the bracelet prompts the user that prompt point data exists, the user inputs the name of the placed object through a display screen of the bracelet, and the name of the object is stored in the corresponding prompt point data; finally, if the user forgets the position where the article is placed, the user can check the prompting point data through the bracelet and find the article according to the prompting point data.
When viewing the prompt point data, if the user entered the article name when the article was placed, the prompt point data of that article can be found by entering its name on the bracelet. The bracelet displays the prompt point data in the selected display mode: the user can choose the time display mode or the article display mode according to the actual situation, and in the time display mode can further choose to sort the time intervals from long to short or from short to long.
The lost article detection and prompting method based on limited source data records the articles the user places every day; if the user loses an article, looking up the prompt point data reminds the user where it was placed, narrowing the search range so the lost article is found more quickly. This matters especially for non-electronic products, which have no positioning function: without this method a lost non-electronic article can only be recovered by the user remembering where it was placed or searching everywhere as if looking for a needle in the sea, whereas with this method the user can recall the placing position from the bracelet's records and avoid an aimless search.
Example two
On the basis of the first embodiment, the present embodiment further includes: an article recording step: a user sets prompt time through a bracelet and inputs the name of an article to be carried;
a prompting step: if the current time reaches the prompting time, the bracelet shows the names of the articles the user needs to carry and asks the user to confirm whether each has been taken; if any article is not confirmed by the user, its prompt point data are searched for and displayed.
A specific use is as follows, taking going out to the airport as an example: the user sets a prompting time on the bracelet in advance, before the departure time, and also enters the names of the articles to be carried, such as a passport, an identity card and a suitcase. When the prompting time arrives, the bracelet shows the names of the articles the user needs to carry and asks the user to confirm each one. If some article is not confirmed, for example the user confirms the identity card and the suitcase but not the passport, the bracelet searches the cloud database for the prompt point data whose article name is "passport" and displays them, helping the user find the article quickly while reminding the user to take it.
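As an illustration of the article recording and prompting steps, the sketch below checks the reminder time, compares the items to carry with the items the user has confirmed, and returns the prompt point data of every unconfirmed item. The time handling and the data layout (a dict from item name to recorded location/time pairs) are assumptions, not the patent's implementation.

```python
# Minimal sketch of the carry-item reminder check in Example two.
from datetime import datetime

def reminder_check(items_to_carry, confirmed_items, prompt_point_data,
                   reminder_time=None, now=None):
    """Return the prompt point records of every item the user did not confirm."""
    now = now or datetime.now()
    if reminder_time is not None and now < reminder_time:
        return {}                               # prompting time not reached yet
    missing = [name for name in items_to_carry if name not in confirmed_items]
    return {name: prompt_point_data.get(name, []) for name in missing}

# Example: the passport was not confirmed, so its recorded placements are shown.
data = {"passport": [("study desk drawer", datetime(2020, 12, 30, 21, 5))]}
hints = reminder_check(["passport", "ID card", "suitcase"],
                       {"ID card", "suitcase"}, data)
print(hints)   # {'passport': [('study desk drawer', datetime(2020, 12, 30, 21, 5))]}
```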
EXAMPLE III
On the basis of the first embodiment, the present embodiment further includes a motion checking step: whether the user's motion state data exceed preset thresholds is detected; if so, the user is judged to be exercising and the user's motion track is recorded, the motion state data comprising bracelet acceleration data, bracelet vibration data and the user's heart rate. The inertial sensor in the bracelet measures the acceleration and vibration data, the heart rate is monitored photoelectrically, and an existing algorithm judges from these three quantities whether the user is in a motion state; if so, the user's motion track is recorded through the bracelet's positioning system.
A motion track query step: the user can view the motion trail through the bracelet.
The specific use is as follows: because the body shakes violently during exercise, an article carried on the body can easily be shaken off without the user noticing. The bracelet therefore records the user's motion track, and if the user finds an article missing after exercising, the article can be searched for along the recorded track.
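As an illustration of the motion checking and motion track query steps, the sketch below treats the user as exercising when all three motion state readings exceed assumed thresholds and logs positioning fixes while that holds; the threshold values and the rule combining the three readings are not given in the patent and are assumptions.

```python
# Minimal sketch of the motion checking and motion track query steps.
from datetime import datetime

ACCEL_THRESHOLD = 2.0       # m/s^2, assumed
VIBRATION_THRESHOLD = 1.5   # arbitrary units, assumed
HEART_RATE_THRESHOLD = 110  # beats per minute, assumed

motion_track = []           # list of (timestamp, latitude, longitude)

def is_exercising(accel, vibration, heart_rate):
    """Assumed rule: the user is exercising when all three readings exceed
    their preset thresholds."""
    return (accel > ACCEL_THRESHOLD and vibration > VIBRATION_THRESHOLD
            and heart_rate > HEART_RATE_THRESHOLD)

def update(accel, vibration, heart_rate, latitude, longitude):
    """Motion checking step: append a positioning fix while the user is exercising."""
    if is_exercising(accel, vibration, heart_rate):
        motion_track.append((datetime.now(), latitude, longitude))

def query_motion_track():
    """Motion track query step: the track the user views on the bracelet."""
    return list(motion_track)
```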
The foregoing is merely an embodiment of the present invention; common general knowledge such as well-known specific structures and characteristics is not described here in detail. It should be noted that, for those skilled in the art, several variations and improvements can be made without departing from the structure of the invention, and these should also be regarded as falling within the protection scope of the invention without affecting the effect of its implementation or the practicability of the patent. The scope of protection claimed by this application shall be determined by the content of the claims, and the detailed description in the specification may be used to interpret the content of the claims.

Claims (7)

1. A method for detecting and prompting lost articles based on limited source data is characterized in that: the method comprises the following steps:
a fixed-point monitoring step: the method comprises the steps that a camera is installed at a fixed point, whether a preset target appears in a monitoring area or not is detected through the camera, and if the preset target appears, prompt information and target position information are sent to a user;
a collection step: collecting gesture actions of a user through a bracelet worn by the user;
a comparison step: comparing the gesture action with gesture actions in a database to obtain an operation action corresponding to the gesture action;
a judging step: judging whether the type of the operation action is an operation action for placing an article, and if the type of the operation action is the operation action for placing the article, recording the position and time of a user when the gesture action is implemented as prompt point data;
an input step: the user inputs the name of the article to be placed, and the name of the article is stored in the corresponding prompt point data;
a viewing step: if the user has lost an article, the prompt point data can be viewed through the bracelet;
an article recording step: a user sets prompt time through a bracelet and inputs the name of an article to be carried;
a prompting step: if the current time is the prompting time, the bracelet prompts the name of the article to be carried by the user, and enables the user to confirm whether the article to be carried is carried, and if the article which is not confirmed by the user exists, the prompting point data of the article is searched and displayed.
2. The finite-source-data-based missing object detection and prompting method according to claim 1, characterized in that: the fixed-point monitoring step comprises the following steps:
the camera adopts a network camera, a detection frame is arranged in an area monitored by the network camera, and a shooting area is planned according to the wide angle and image pixels of the network camera;
if a set target is detected in the image, the network camera tracks the target and, according to the network model, returns the pixel coordinates of the four corners of the minimum bounding box that frames the target, and the central coordinate pixel of the target is calculated from these four corner values;
when the central coordinate pixel belongs to the range of the detection frame, prompt information and target position information are sent to a user.
3. The finite-source-data-based missing object detection and prompting method according to claim 2, characterized in that: the network model is designed by adopting a VGG-16 model as the basic network model, analyzing and retaining the neural nodes whose weights with respect to the preset target exceed a preset threshold, removing the other nodes, and initializing the parameters of the network model with the trained basic network model.
4. The finite-source-data-based missing object detection and prompting method according to claim 1, characterized in that: the viewing step further comprises: when the user views the prompt point data through the bracelet, a display mode for the prompt point information can be selected, the display modes comprising a time display mode and an article display mode; the time display mode orders the records by the interval between the recorded time of the gesture action and the current time, from long to short or from short to long; the article display mode orders the records by the initial letter of the article name in each prompt point record.
5. The finite-source-data-based missing object detection and prompting method according to claim 1, characterized in that: the viewing step further comprises: when the user checks the prompt point data through the bracelet, the user can input the name of an article to search the corresponding prompt point data.
6. The finite-source-data-based missing object detection and prompting method according to claim 1, characterized in that: the method also comprises the following steps:
a motion checking step: detecting whether the motion state data of the user exceeds a preset threshold value, if so, judging that the user is moving, and recording the motion track of the user, wherein the motion state data comprises bracelet acceleration data, bracelet vibration data and the heart rate of the user;
a motion track query step: the user can view the motion trail through the bracelet.
7. The finite-source-data-based missing object detection and prompting method according to claim 1, characterized in that: the method also comprises the following steps:
and data deletion step: and detecting whether the prompt point data of the name of the same article in the prompt point data exceeds three, if so, keeping three data with the time closest to the current time in the prompt point data, and deleting the rest data.
CN202011629138.XA 2020-12-31 2020-12-31 Lost article detection and prompt method based on limited source data Active CN112686166B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011629138.XA CN112686166B (en) 2020-12-31 2020-12-31 Lost article detection and prompt method based on limited source data

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011629138.XA CN112686166B (en) 2020-12-31 2020-12-31 Lost article detection and prompt method based on limited source data

Publications (2)

Publication Number Publication Date
CN112686166A CN112686166A (en) 2021-04-20
CN112686166B (en) 2023-04-18

Family

ID=75455864

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011629138.XA Active CN112686166B (en) 2020-12-31 2020-12-31 Lost article detection and prompt method based on limited source data

Country Status (1)

Country Link
CN (1) CN112686166B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106131315A (en) * 2016-06-24 2016-11-16 努比亚技术有限公司 A kind of alarm set and method
CN106581951A (en) * 2016-12-30 2017-04-26 杭州联络互动信息科技股份有限公司 Method and device for recording motion parameters by smartwatch
CN208030482U (en) * 2018-02-08 2018-11-02 武汉瑞拉博利科技有限公司 A kind of multifunctional intellectual bracelet
CN109240490A (en) * 2018-08-14 2019-01-18 歌尔科技有限公司 Intelligent wearable device and the interaction control method based on it, system
CN209946644U (en) * 2018-12-18 2020-01-14 柳州市潭中人民医院 Medical timer

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104571506A (en) * 2014-12-25 2015-04-29 西安电子科技大学 Smart watch based on action recognition and action recognition method
CN105759975B (en) * 2016-03-11 2019-04-19 深圳还是威健康科技有限公司 A kind of Intelligent bracelet loses reminding method and device
CN106603720A (en) * 2017-01-10 2017-04-26 惠州Tcl移动通信有限公司 Schedule setting and synchronizing method and schedule setting and synchronizing system based on smart watch
CN108875588B (en) * 2018-05-25 2022-04-15 武汉大学 Cross-camera pedestrian detection tracking method based on deep learning
CN109034094A (en) * 2018-08-10 2018-12-18 佛山市泽胜科技有限公司 A kind of articles seeking method and apparatus
CN112073573A (en) * 2020-07-14 2020-12-11 奇酷互联网络科技(深圳)有限公司 Reminding method and device applied to intelligent wearable device


Also Published As

Publication number Publication date
CN112686166A (en) 2021-04-20

Similar Documents

Publication Publication Date Title
US7340079B2 (en) Image recognition apparatus, image recognition processing method, and image recognition program
CN111291589B (en) Information association analysis method and device, storage medium and electronic device
US10752213B2 (en) Detecting an event and automatically obtaining video data
CN109643158A (en) It is analyzed using multi-modal signal and carries out command process
KR20170080454A (en) System and method for providing an on-chip context aware contact list
CN111222373B (en) Personnel behavior analysis method and device and electronic equipment
JPWO2014132841A1 (en) Person search method and home staying person search device
KR101652261B1 (en) Method for detecting object using camera
CN110390031A (en) Information processing method and device, vision facilities and storage medium
CN105404849B (en) Using associative memory sorted pictures to obtain a measure of pose
CN106052294A (en) Refrigerator and method for judging change of objects in object storage area of refrigerator
CA3029643A1 (en) System and method for automatically detecting and classifying an animal in an image
US20210319226A1 (en) Face clustering in video streams
CN112911204A (en) Monitoring method, monitoring device, storage medium and electronic equipment
JPWO2019211932A1 (en) Information processing equipment, information processing methods, programs, and autonomous behavior robot control systems
CN101651824A (en) Mobile object monitoring device
Şaykol et al. Scenario-based query processing for video-surveillance archives
CN109255360A (en) A kind of objective classification method, apparatus and system
CN110543583A (en) information processing method and apparatus, image device, and storage medium
EP4167193A1 (en) Vehicle data collection system and method of using
US10592687B2 (en) Method and system of enforcing privacy policies for mobile sensory devices
CN112686166B (en) Lost article detection and prompt method based on limited source data
CN103324950B (en) Human body reappearance detecting method and system based on online study
CN111428626B (en) Method and device for identifying moving object and storage medium
CN109960965A (en) Methods, devices and systems based on unmanned plane identification animal behavior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant