CN107800998B - Positioning early warning reminding method, computer equipment and storage medium - Google Patents

Positioning early warning reminding method, computer equipment and storage medium

Info

Publication number
CN107800998B
CN107800998B (grant of application CN201710886619.0A)
Authority
CN
China
Prior art keywords
terminal
video file
wearable
camera
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710886619.0A
Other languages
Chinese (zh)
Other versions
CN107800998A (en)
Inventor
Shi Meng (石猛)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Niudingfeng Technology Co ltd
Original Assignee
Shenzhen Niudingfeng Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Niudingfeng Technology Co ltd filed Critical Shenzhen Niudingfeng Technology Co ltd
Priority to CN201710886619.0A priority Critical patent/CN107800998B/en
Publication of CN107800998A publication Critical patent/CN107800998A/en
Application granted granted Critical
Publication of CN107800998B publication Critical patent/CN107800998B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed circuit television systems, i.e. systems in which the signal is not broadcast
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network-specific arrangements or communication protocols supporting networked applications
    • H04L 67/06: Network-specific arrangements or communication protocols supporting networked applications adapted for file transfer, e.g. file transfer protocol [FTP]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network-specific arrangements or communication protocols supporting networked applications
    • H04L 67/18: Network-specific arrangements or communication protocols supporting networked applications in which the network application is adapted for the location of the user terminal
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, TV cameras, camcorders, webcams, camera modules for embedding in other devices, e.g. mobile phones, computers or vehicles
    • H04N 5/225: Television cameras; Cameras comprising an electronic image sensor, e.g. digital cameras, video cameras, camcorders, webcams, camera modules specially adapted for being embedded in other devices, e.g. mobile phones, computers or vehicles
    • H04N 5/232: Devices for controlling television cameras, e.g. remote control; Control of cameras comprising an electronic image sensor
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • H04W 4/021: Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences

Abstract

The invention relates to a positioning early warning reminding method, a computer device and a storage medium. The method comprises the following steps: acquiring the geographic coordinates of a wearable device and the geographic coordinates of a terminal; calculating the distance between the wearable device and the terminal according to the two sets of geographic coordinates, and generating first prompt information when the distance reaches a first threshold value; sending the first prompt information to the wearable device, so that the wearable device starts a camera according to the prompt information and films the current environment through the camera to generate a corresponding video file; receiving the video file uploaded by the wearable device; and sending the first prompt information to the terminal, receiving a query request sent by the terminal, acquiring the corresponding video file according to the query request, and returning the video file to the terminal. With this method, the whereabouts of a child or an elderly person can be tracked promptly and effectively.

Description

Positioning early warning reminding method, computer equipment and storage medium
Technical Field
The invention relates to the technical field of computers, in particular to a positioning early warning reminding method, computer equipment and a storage medium.
Background
In recent years, frequent news reports of child trafficking and of elderly people going missing have drawn attention to the safety of children and the elderly. With the development of technology, wearing a smart watch, smart bracelet, or similar wearable device has become an important way to keep children and the elderly from getting lost. However, devices such as smart watches and smart bracelets only provide the movement path of the child or elderly person; if that person goes missing, the guardian cannot learn the circumstances of the disappearance in time. How to track the whereabouts of children or the elderly promptly and effectively through a wearable device is therefore a technical problem that needs to be solved.
Disclosure of Invention
Therefore, in view of the above technical problems, it is necessary to provide a positioning early warning reminding method, a computer device, and a storage medium that can track the whereabouts of a child or an elderly person promptly and effectively.
A positioning early warning reminding method comprises the following steps:
acquiring geographic coordinates of wearable equipment and geographic coordinates of a terminal;
calculating the distance between the wearable device and the terminal according to the geographic coordinates of the wearable device and the geographic coordinates of the terminal, and generating first prompt information when the distance reaches a first threshold value;
sending the first prompt information to the wearable device, so that the wearable device starts a camera according to the first prompt information, and shooting the current environment through the camera to generate a corresponding video file;
receiving a video file uploaded by the wearable device;
and sending the first prompt information to the terminal, receiving a query request sent by the terminal, acquiring a corresponding video file according to the query request, and returning the video file to the terminal.
In one embodiment, the camera comprises a first camera and a second camera; the video file is a video file corresponding to the front environment and obtained by the wearable device through shooting by using a first camera; the method further comprises the following steps:
receiving a picture uploaded by the wearable device, wherein the picture corresponds to the rear environment and is obtained by the wearable device shooting a human face in the rear environment through a second camera;
and when receiving a query request sent by the terminal, returning the video file corresponding to the front environment and the picture corresponding to the rear environment to the terminal.
In one embodiment, the video files include a first video file and a second video file, and the method further includes:
acquiring geographic coordinates uploaded by the wearable device and the terminal in real time;
calculating the distance between the wearable equipment and the terminal in real time according to the geographic coordinates;
when the distance between the wearable device and the terminal reaches a second threshold value, generating second prompt information, and sending the second prompt information to the wearable device, so that the wearable device uploads a first video file;
when the distance between the wearable device and the terminal reaches a third threshold value, generating third prompt information, and sending the third prompt information to the wearable device, so that the wearable device uploads a second video file.
In one embodiment, the method further comprises:
if no corresponding video file is found for the query request sent by the terminal, returning a query-failure prompt message to the terminal;
when the video file is received, generating a push message corresponding to the video file;
and sending the push message to the terminal.
A positioning early warning reminding method comprises the following steps:
acquiring the geographic coordinates of the current position, and uploading the geographic coordinates to a server, so that the server calculates the distance between the wearable device and the terminal according to the geographic coordinates;
receiving first prompt information sent by the server when the distance between the wearable device and the terminal reaches a first threshold value;
starting a camera according to the first prompt information, and shooting the current environment through the camera to generate a corresponding video file;
and uploading the video file to the server, so that the server returns the video file to the terminal when receiving a query request sent by the terminal.
In one embodiment, the camera comprises a first camera and a second camera; the step of shooting the current environment through the camera to generate the corresponding video file comprises the following steps: shooting the front environment through the first camera to generate a video file corresponding to the front environment;
after the step of receiving the prompt message sent by the server, the method further comprises:
starting a second camera according to the prompt information, capturing a face image in the rear environment where the second camera is located, shooting the captured face image, and generating a picture corresponding to the rear environment;
and uploading the video file corresponding to the front environment and the picture corresponding to the rear environment to the server.
In one embodiment, the video files include a first video file and a second video file, and the method further includes:
acquiring the geographic coordinates of the current position, and uploading the geographic coordinates to the server, so that the server calculates the distance between the wearable device and the terminal in real time according to the geographic coordinates;
when the distance between the wearable device and the terminal reaches a first value, generating the first video file, and uploading the first video file to the server;
continuing to shoot the environment through the camera, generating the second video file when the distance between the wearable device and the terminal reaches a second value, and uploading the second video file to the server.
In one embodiment, the method further comprises:
acquiring the activity time in the current environment;
and if the activity time exceeds a preset time threshold before the distance between the wearable device and the terminal reaches the first value, or before it reaches the second value, generating a third video file according to a preset frequency and uploading the third video file to the server.
A computer device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, causes the processor to carry out the steps of the above method.
A computer-readable storage medium, in which a computer program is stored which, when executed by a processor, causes the processor to carry out the steps of the above-mentioned method.
According to the positioning early warning reminding method, the server calculates the distance between the wearable device and the terminal by acquiring the geographic coordinates of both. When the distance reaches the first threshold value, the server generates first prompt information and sends it to the wearable device and the terminal, so that the guardian user corresponding to the terminal learns in time that the wearable device user has exceeded the preset safe distance and can follow that user's geographic position in real time. The wearable device starts its camera according to the prompt information, films the current environment, generates a corresponding video file, and uploads it to the server. The terminal sends a query request to the server; the server acquires the corresponding video file according to the request and returns it to the terminal, so the guardian user can learn the current activity range and specific environmental information of the wearable device user. In this way, the whereabouts of a child or an elderly person can be tracked promptly and effectively.
Drawings
FIG. 1 is a diagram of a hardware environment for a location based alert method in one embodiment;
FIG. 2 is a flow diagram of a method for location based alert alerting in one embodiment;
FIG. 3 is a diagram illustrating an internal architecture of a server according to an embodiment;
FIG. 4 is a flow diagram of a method for location based alert alerting in one embodiment;
FIG. 5 is a schematic diagram of an internal structure of a wearable device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The positioning early warning reminding method provided by the embodiment of the invention can be applied to the application scene shown in fig. 1. The wearable device 102 includes a hat, a wearable headset, a hairpin, and other smart wearable devices. The server 104 may be an independent server or a cluster server. The terminal 106 includes smart terminals such as a smart phone, a tablet computer, and a personal digital assistant. Wearable device 102 is connected to server 104 via a network, and terminal 106 is connected to server 104 via a network. Server 104 obtains the geographic coordinates of wearable device 102 and the geographic coordinates of terminal 106 in real time, and calculates the distance between wearable device 102 and terminal 106 in real time. When the distance reaches the first threshold value, the server 104 generates first prompt information and sends the first prompt information to the wearable device 102 and the terminal 106, and a guardian user corresponding to the terminal can timely know that the wearable device user exceeds a preset safe distance and know the geographical position of the wearable device user in real time. The wearable device 102 starts a camera according to the prompt message, shoots the current environment through the camera, generates a corresponding video file and uploads the video file to the server 104, and the server 104 receives the video file uploaded by the wearable device 102. The server 104 sends the first prompt message to the terminal 106, and the terminal 106 can display the geographical position and the movement track of the wearable device on a screen. The terminal 106 sends a query request to the server 104, and the server 104 obtains a corresponding video file according to the query request sent by the terminal 106 and returns the video file to the terminal 106. 
The video file is displayed on the screen of the terminal 106, and a guardian user corresponding to the terminal can know the current activity range and the specific environment information of the wearable device, so that the track of children or old people can be effectively tracked in time.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another.
In an embodiment, as shown in fig. 2, a positioning early warning reminding method is provided, which is described by taking an example that the method is applied to a server, and specifically includes the following steps:
step 202, obtaining geographic coordinates of the wearable device and geographic coordinates of the terminal.
Step 204, calculating the distance between the wearable device and the terminal according to the geographic coordinates of the wearable device and the geographic coordinates of the terminal, and generating first prompt information when the distance reaches a first threshold value.
In this embodiment, the server may be an independent server or a cluster server. The wearable device includes smart wearable devices such as a hat, a wearable headset, and a hairpin. The terminal includes smart terminals such as a smart phone, a tablet computer, and a personal digital assistant. The server acquires the geographic coordinates of the wearable device and of the terminal in real time; these coordinates include the longitude and latitude information and the geographic position information of the wearable device and the terminal. The server calculates the distance between the wearable device and the terminal from the two sets of coordinates. The first threshold value is a preset first safe-distance value, and the server monitors in real time whether the distance has reached it. When the distance reaches the first threshold value, the server generates the first prompt information, which is the early warning prompt issued the first time the preset first distance threshold is exceeded, so the server knows immediately, and generates an early warning prompt, as soon as the distance between the wearable device and the terminal exceeds the safe distance.
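The distance check in steps 202 and 204 can be sketched as follows. The patent does not name a distance formula, so the haversine great-circle formula is assumed here; the 50-meter first threshold and the shape of the prompt message are likewise illustrative, not from the source.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_threshold(device_coord, terminal_coord, first_threshold_m=50):
    """Return first prompt information once the distance reaches the threshold.

    first_threshold_m and the returned dict are assumed, illustrative values.
    """
    d = haversine_m(*device_coord, *terminal_coord)
    if d >= first_threshold_m:
        return {"type": "first_prompt", "distance_m": round(d, 1)}
    return None
```

The server would run `check_threshold` on every pair of freshly uploaded coordinates and, on a non-`None` result, send the prompt to both the wearable device and the terminal.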
And step 206, sending the first prompt information to the wearable device, so that the wearable device starts a camera according to the first prompt information, and shooting the current environment through the camera to generate a corresponding video file.
And step 208, receiving the video file uploaded by the wearable device.
The server sends the first prompt information to the wearable device, and the wearable device starts the camera according to the first prompt information after receiving the first prompt information. Wherein the camera may be a miniature camera, e.g. a pinhole camera. Shooting the current environment through a camera on the wearable device to generate a video file with a preset format. The video file carries a time identifier and a geographic position identifier, can also name current geographic position information and time, and carries distance information between the wearable device and the terminal currently. The wearable device uploads the generated video file to the server, and the server receives and stores the video file uploaded by the wearable device.
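The naming convention described above (current geographic position, time, and device-terminal distance carried by the file) might look like the sketch below; the exact field order and file extension are assumptions, since the text only says the file "carries" these identifiers.

```python
from datetime import datetime, timezone

def video_filename(lat, lon, distance_m, ts=None):
    """Build a clip name from position, device-terminal distance, and UTC time.

    The layout lat_lon_distance_timestamp.mp4 is illustrative, not from the patent.
    """
    ts = ts or datetime.now(timezone.utc)
    return f"{lat:.5f}_{lon:.5f}_{int(distance_m)}m_{ts:%Y%m%dT%H%M%SZ}.mp4"
```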
And step 210, sending the first prompt information to the terminal, receiving a query request sent by the terminal, acquiring a corresponding video file according to the query request, and returning the video file to the terminal.
The server simultaneously sends the first prompt information to the terminal. After receiving it, the terminal loads an electronic map, converts the geographic position coordinates into electronic-map coordinates, and displays the current geographic position and movement path of the wearable device on the map. The terminal sends a query request to the server in response to a query operation by the user; after receiving the query request, the server acquires the corresponding video file and returns it to the terminal. The terminal can then play the video on its screen, and the guardian corresponding to the terminal can immediately learn the current activity range, geographic position, and specific environmental conditions of the wearable device user.
In this embodiment, the server calculates the distance between the wearable device and the terminal by acquiring the geographic coordinates of the wearable device and the geographic coordinates of the terminal, and when the distance reaches a first threshold value, generates first prompt information and sends the first prompt information to the wearable device and the terminal, so that a guardian user corresponding to the terminal can timely know that the user of the wearable device exceeds a preset safe distance and can know the geographic position of the user of the wearable device in real time. The wearable device starts a camera according to the prompt message, shoots the current environment, generates a corresponding video file and uploads the video file to the server. The terminal sends a query request to the server, the server acquires a corresponding video file according to the query request and returns the video file to the terminal, and a guardian user corresponding to the terminal can timely know the current activity range and the specific environmental information of the wearable device, so that the track of children or old people can be timely and effectively tracked.
In one embodiment, the camera comprises a first camera and a second camera; the video file is a video file corresponding to the front environment and obtained by the wearable device through shooting by utilizing the first camera. The method further comprises the following steps: receiving a picture uploaded by the wearable device, wherein the picture is a picture corresponding to the rear environment and obtained by shooting a face in the rear environment by the wearable device through a second camera. And when an inquiry request sent by the terminal is received, returning the video file corresponding to the front environment and the picture corresponding to the rear environment to the terminal.
In this embodiment, the first camera and the second camera may be two separate cameras, or may be an integrated camera. The front environment includes environments directly in front of, left front of, and right front of the wearable device, and the rear environment includes environments directly behind, left rear of, and right rear of the wearable device. The wearable device utilizes the first camera to shoot the environment of the right front side, the left front side and the right front side of the wearable device, and generates a video file with preset duration according to preset frequency. The video file can be named according to the current geographic position information and time, and carries the current distance information between the wearable device and the terminal. The wearable device utilizes the second camera to shoot the human faces in the environment right behind, left behind and right behind the wearable device. Specifically, the wearable device acquires an image of a shot picture by using the second camera, extracts image features of the shot picture, identifies whether the current shot picture has a human face, and shoots and stores the current picture as a picture when the focal length of the identified human face reaches a preset focal length. The picture can be named according to the current geographic position information and time, and carries the current distance information between the wearable device and the terminal. And the wearable equipment compresses the generated video file and picture and uploads the compressed video file and picture to a server.
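The capture condition in this paragraph ("when the focal length of the identified human face reaches a preset focal length") can be approximated by checking how much of the frame a detected face fills. The fraction-of-frame proxy below is an assumption, and the face detector itself (for example a cascade classifier) is deliberately left out.

```python
def should_capture(face_box, frame_size, min_frac=0.05):
    """Decide whether to save a still of a detected face.

    face_box is (x, y, w, h) from any face detector; frame_size is (W, H).
    Capturing once the face fills min_frac of the frame stands in for the
    patent's 'preset focal length' condition; min_frac is an assumed value.
    """
    _, _, w, h = face_box
    fw, fh = frame_size
    return (w * h) / (fw * fh) >= min_frac
```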
After receiving the query request sent by the terminal, the server acquires the video file corresponding to the front environment of the wearable device and the picture corresponding to the rear environment according to the query request, and returns the video file corresponding to the front environment of the wearable device and the picture corresponding to the rear environment to the terminal. The video file and the picture are displayed on the screen of the terminal, so that the guardian corresponding to the terminal can know the current specific position of the user of the wearable device at the first time, and can timely know the information of the relevant suspect through the video file and the picture in case of loss.
In one embodiment, when the server does not receive the video file uploaded by the wearable device within a preset time, the upload of the video file and picture is deemed to have failed. The server may send a retransmission prompt message to the wearable device, and the wearable device retransmits the corresponding video file and picture according to that message, ensuring that more detailed video and picture information is provided to the terminal.
In one embodiment, the video files include a first video file and a second video file. The method further comprises: acquiring the geographic coordinates uploaded by the wearable device and the terminal in real time; calculating the distance between the wearable device and the terminal in real time according to the geographic coordinates; when the distance reaches a second threshold value, generating second prompt information and sending it to the wearable device, so that the wearable device uploads the first video file; and when the distance reaches a third threshold value, generating third prompt information and sending it to the wearable device, so that the wearable device uploads the second video file.
Specifically, the server acquires the geographic coordinates uploaded by the wearable device and the terminal in real time, and calculates the distance between them in real time according to those coordinates. A second threshold for the distance between the wearable device and the terminal is preset; for example, the second threshold may be 100 meters. When the distance reaches the second threshold, the server generates second prompt information and sends it to the wearable device. On receiving the second prompt information, the wearable device generates the first video file and uploads it to the server. The server simultaneously sends the second prompt information to the terminal, and after receiving it the guardian user corresponding to the terminal can view the corresponding first video file.
Further, in the process, the first camera of the wearable device continues to shoot the environment in front of the wearable device in real time, and a corresponding video file is generated. And a second camera of the wearable device continues to shoot the face image in the environment behind the wearable device in real time to generate a corresponding picture. The server acquires geographic coordinates uploaded by the wearable device and the terminal in real time, calculates the distance between the wearable device and the terminal in real time according to the geographic coordinates, and monitors the distance between the wearable device and the terminal in real time. And when the distance reaches a third threshold value, the server generates third prompt information and sends the third prompt information to the wearable device.
Furthermore, a plurality of thresholds for the distance between the wearable device and the terminal can be preset, and a corresponding video file is uploaded each time a threshold is reached, so that a video file is generated for each distance node between the wearable device and the terminal. For example, with preset thresholds of 100 meters and 200 meters, a 100-meter video file and a 200-meter video file are generated, from which the guardian corresponding to the terminal can learn the activity range and environment of the wearable device user.
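The escalating distance nodes described above can be modeled as an ordered list of thresholds, each firing its prompt exactly once. The 100- and 200-meter values follow the example in the text; the 50-meter first threshold and the label strings are assumptions.

```python
# (threshold in meters, prompt label). 100 m and 200 m follow the text's
# example; the 50 m first threshold is an assumed value.
THRESHOLDS_M = [(50, "first_prompt"), (100, "second_prompt"), (200, "third_prompt")]

def new_prompts(distance_m, already_sent):
    """Return prompt labels for every threshold newly crossed at this distance.

    already_sent is a mutable set tracking prompts that have already fired,
    so each threshold triggers at most once as the distance grows.
    """
    fired = []
    for limit, label in THRESHOLDS_M:
        if distance_m >= limit and label not in already_sent:
            already_sent.add(label)
            fired.append(label)
    return fired
```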
Further, if the distance between the wearable device and the terminal has exceeded the preset first threshold value, has not yet reached the second threshold value, but has remained so beyond a preset time length, the wearable device user may be lingering somewhere, playing, or being held up by a suspicious person. The first camera of the wearable device then continues to film the environment in front of it and generates video files according to a preset frequency, while the second camera continues to capture face images in the environment behind it and generates corresponding pictures. The generated video files and pictures are uploaded to the server at the preset frequency, so that if an accident occurs, the guardian at the terminal can be provided with relevant clues and with detailed video and picture information about suspicious persons.
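The lingering case above amounts to periodic uploads once the activity time passes a threshold. The sketch below counts how many periodic clips are due; the 300-second threshold and 60-second period are assumed values, as the patent only speaks of a "preset time length" and "preset frequency".

```python
def clips_due(activity_s, time_threshold_s=300, period_s=60):
    """Number of periodic clips owed after lingering past the time threshold.

    One clip is generated when activity time reaches time_threshold_s and
    another every period_s thereafter. Both parameters are assumptions.
    """
    if activity_s < time_threshold_s:
        return 0
    return (activity_s - time_threshold_s) // period_s + 1
```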
In one embodiment, the method further comprises: if no corresponding video file is found for the query request sent by the terminal, returning a query-failure prompt message to the terminal; when a video file is received, generating a push message corresponding to the video file; and sending the push message to the terminal.
Specifically, when the query request sent by the terminal does not query the corresponding video file, the server returns prompt information of failure query to the terminal. Further, when the server receives the video file uploaded by the wearable device, a corresponding push message is generated according to the video file. The push message comprises geographic position information and time information of the wearable device and distance information between the wearable device and the terminal, and the server sends the push message to the terminal. Therefore, the terminal can acquire the video file uploaded by the wearable device in real time and know the environment of the user of the wearable device more effectively.
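The server-side behaviors in this paragraph (store each upload, push a message per received file, answer or fail query requests) can be sketched as a single in-memory class. The record fields and message shapes are assumptions; a real deployment would use the database described in the next section.

```python
class AlertServer:
    """In-memory sketch: store uploads, generate push messages, answer queries."""

    def __init__(self):
        self.videos = {}  # device_id -> list of stored video records

    def store_video(self, device_id, record):
        """Store an uploaded file and build the push message for the terminal."""
        self.videos.setdefault(device_id, []).append(record)
        return {"push": "new_video", "device_id": device_id,
                "position": record["position"], "time": record["time"],
                "distance_m": record["distance_m"]}

    def query(self, device_id):
        """Return stored files, or a query-failure prompt if none exist."""
        files = self.videos.get(device_id)
        if not files:
            return {"ok": False, "message": "query failed: no video file found"}
        return {"ok": True, "files": files}
```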
In one embodiment, as shown in fig. 3, a schematic diagram of an internal structure of a server is provided, and the server includes a processor, a non-volatile storage medium, an internal memory, and a network interface, which are connected by a system bus. The server comprises a nonvolatile storage medium, an operating system, a database and a computer program, wherein the database is used for storing identification information of the wearable device, identification information of the terminal, corresponding relation between the wearable device and the terminal, video files and pictures uploaded by the wearable device and the like. The computer program, when executed, causes a processor to implement a method for locating an alert prompt. The processor of the server is configured to provide computing and control capabilities and is configured to perform a method of locating early warning alerts. The internal memory provides an environment for running the computer program in the nonvolatile storage medium. The network interface of the server is used for communicating with an external terminal through the internet, such as sending prompt information to the wearable device and the terminal. The server may be an independent server or a server cluster composed of a plurality of servers.
In an embodiment, as shown in fig. 4, a positioning early warning reminding method is provided, which is described by taking an example that the method is applied to a wearable device, and specifically includes the following steps:
Step 402, acquiring the geographic coordinates of the current position, and uploading the geographic coordinates to a server, so that the server calculates the distance between the wearable device and the terminal according to the geographic coordinates.
In this embodiment, the server may be an independent server or a server cluster. The wearable device may be a smart wearable device such as a hat, a wearable headset, or a hairpin. The terminal may be a smart terminal such as a smartphone, a tablet computer, or a personal digital assistant. The wearable device acquires the geographic coordinates of its current position in real time, the geographic coordinates comprising current longitude and latitude information and geographic position information, and uploads the acquired coordinates to the server in real time. The server also obtains the geographic coordinates of the terminal's position in real time, and calculates the distance between the wearable device and the terminal according to the geographic coordinates of the wearable device and the geographic coordinates of the terminal.
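The distance computation from the two pairs of longitude/latitude coordinates can be sketched with the haversine great-circle formula. The patent does not prescribe a particular formula; this Python sketch and its coordinate values are illustrative assumptions:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two latitude/longitude points,
    # using a mean Earth radius of 6,371,000 m.
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Example: two points 0.001 degrees of latitude apart, roughly 111 m
d = haversine_m(22.5430, 114.0579, 22.5440, 114.0579)
```

On the server side, this function would be evaluated each time fresh coordinates arrive from the wearable device and the terminal.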
Step 404, receiving first prompt information sent by the server when the distance between the wearable device and the terminal reaches a first threshold value.
The server acquires the geographic coordinates of the wearable device and the geographic coordinates of the terminal in real time, and calculates the distance between the wearable device and the terminal in real time according to these coordinates. The first threshold is a preset first distance threshold. The server monitors in real time whether the distance between the wearable device and the terminal reaches the preset first threshold, and generates first prompt information when it does. The wearable device receives the first prompt information sent by the server; the first prompt information is an early warning that the preset first distance threshold has been exceeded for the first time, so that the moment the distance exceeds the safe distance is known immediately.
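The threshold monitoring on the server side can be sketched as a simple check performed on each freshly computed distance. The threshold value of 100 m and the message fields below are assumed for illustration only:

```python
FIRST_THRESHOLD_M = 100.0  # preset first distance threshold (assumed example value)

def check_first_threshold(distance_m, threshold_m=FIRST_THRESHOLD_M):
    # Generate the first prompt information once the wearable-device-to-terminal
    # distance reaches the preset first threshold; otherwise issue no prompt.
    if distance_m >= threshold_m:
        return {"type": "first_prompt", "distance_m": distance_m}
    return None
```

The returned prompt would be sent to both the wearable device and the terminal, as the embodiment describes.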
Step 406, starting the camera according to the first prompt information, and shooting the current environment through the camera to generate a corresponding video file.
After receiving the first prompt information, the wearable device starts the camera according to the first prompt information. The camera may be a miniature camera, for example a pinhole camera. The camera on the wearable device shoots the current environment to generate a video file in a preset format. The video file carries a time identifier and a geographic position identifier; it may also be named according to the current geographic position information and time, and carries the distance information between the wearable device and the terminal.
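The naming scheme described above (geographic position, time, and device-to-terminal distance) may be sketched as follows. The exact file name layout is an assumption; the patent only states which information the name carries:

```python
from datetime import datetime

def video_filename(geo_name, shot_at, distance_m, ext="mp4"):
    # Name the video file by geographic position and shooting time, and embed
    # the current wearable-device-to-terminal distance, per the description.
    stamp = shot_at.strftime("%Y%m%d-%H%M%S")
    return f"{geo_name}_{stamp}_{int(distance_m)}m.{ext}"

# e.g. video_filename("Shenzhen-Nanshan", datetime(2017, 9, 27, 10, 30, 0), 120.4)
# yields "Shenzhen-Nanshan_20170927-103000_120m.mp4"
```

Such self-describing names let the server index uploads by place, time, and distance without parsing the video payload.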
Step 408, uploading the video file to the server, so that the server returns the video file to the terminal when receiving a query request sent by the terminal.
The wearable device uploads the generated video file to the server, and the server receives the video file uploaded by the wearable device. After the terminal receives the first prompt information, an electronic map on the terminal is loaded, the geographic position coordinates are converted into electronic map coordinates, and the current geographic position and moving path of the wearable device are displayed on the electronic map. Upon obtaining a query operation from the user, the terminal sends a query request to the server. After receiving the query request, the server obtains the corresponding video file according to the query request and returns it to the terminal, where the video can be viewed on the screen. The guardian corresponding to the terminal can thus immediately learn the current activity range, geographic position, and specific environmental conditions of the wearable device user.
In this embodiment, the wearable device acquires its current geographic coordinates in real time and uploads them to the server in real time, so that the server calculates the distance between the wearable device and the terminal according to the uploaded coordinates. When the distance between the wearable device and the terminal reaches the first threshold, the server generates first prompt information and sends it to both the wearable device and the terminal, so that the guardian user corresponding to the terminal learns in time that the wearable device user has exceeded the preset safe distance and can follow the geographic position of the wearable device user in real time. The wearable device starts the camera after receiving the prompt information, shoots the current environment, generates a corresponding video file, and uploads it to the server. After receiving a query request from the terminal, the server obtains the corresponding video file and returns it to the terminal, so that the guardian user corresponding to the terminal learns the current activity range and specific environment of the wearable device in time, and the track of a child or elderly person can be tracked promptly and effectively.
In one embodiment, the camera comprises a first camera and a second camera. The step of shooting the current environment through the camera and generating a corresponding video file comprises: shooting the front environment through the first camera to generate a video file corresponding to the front environment. After the step of receiving the first prompt information sent by the server, the method further comprises: starting the second camera according to the prompt information, capturing a face image in the rear environment, shooting the captured face image to generate a picture corresponding to the rear environment, and uploading the video file corresponding to the front environment and the picture corresponding to the rear environment to the server.
In this embodiment, the first camera and the second camera may be two separate cameras or one integrated camera. The front environment includes the environments directly in front of, to the left front of, and to the right front of the wearable device; the rear environment includes the environments directly behind, to the left rear of, and to the right rear of the wearable device. The wearable device uses the first camera to shoot the environments directly in front of, to the left front of, and to the right front of the wearable device, and generates video files of a preset duration at a preset frequency. Each video file may also be named according to the current geographic position information and time, and carries the current distance information between the wearable device and the terminal.
After receiving the first prompt information sent by the server, the wearable device uses the second camera to shoot faces in the environments directly behind, to the left rear of, and to the right rear of the wearable device. Specifically, the wearable device acquires an image from the second camera, extracts image features from it, and identifies whether a face is present in the current frame; when the focal length of the recognized face reaches a preset focal length, the current frame is shot and stored as a picture. The picture may be named according to the current geographic position information and time, and carries the current distance information between the wearable device and the terminal. The wearable device compresses the generated video files and pictures and uploads them to the server. From the geographic position of the wearable device, the video information of the surrounding environment, and the captured face pictures, the guardian corresponding to the terminal can immediately learn the specific situation of the wearable device user at the current position, and in case the user goes missing, information about relevant suspects can be obtained in time through the video files and pictures.
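The capture decision described above can be sketched as a predicate over the recognition result and focal length. Face recognition itself is left abstract here; the parameter names are illustrative and the focal-length comparison follows the description:

```python
def should_capture_face(face_recognized, face_focal_mm, preset_focal_mm):
    # Save the current frame as a picture only when a face has been recognized
    # in the rear environment and its focal length has reached the preset value.
    return bool(face_recognized) and face_focal_mm >= preset_focal_mm
```

A real device would feed this predicate from whatever face-detection pipeline its second camera supports; only frames passing the check are stored and uploaded.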
In one embodiment, when the upload of the video files and pictures fails, the server may send a retransmission prompt message to the wearable device. The wearable device retransmits the corresponding video files and pictures according to the retransmission prompt message, thereby ensuring that more detailed video information and picture information are provided to the terminal.
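The retransmission behavior may be sketched as a bounded retry loop on the device side. The retry limit is an assumed policy; the patent only describes the retransmission prompt, not a count:

```python
def upload_with_retransmission(send, payload, max_retransmissions=3):
    # Attempt the upload; on failure, honor up to `max_retransmissions`
    # retransmission prompts from the server before giving up.
    for _ in range(1 + max_retransmissions):
        if send(payload):
            return True  # server acknowledged the upload
    return False  # still failing after all retransmissions
```

`send` stands in for the actual network call; it returns whether the server acknowledged the upload.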
In one embodiment, the video files include a first video file and a second video file, and the method further comprises: acquiring the geographic coordinates of the current environment, and uploading them to the server so that the server calculates the distance between the wearable device and the terminal in real time according to the geographic coordinates; when the distance between the wearable device and the terminal reaches a first value, generating a first video file and uploading it to the server; and continuing to shoot the environment through the camera, generating a second video file when the distance between the wearable device and the terminal reaches a second value, and uploading the second video file to the server.
Specifically, the first video file is the video file generated when the wearable device receives the first prompt information and the distance between the wearable device and the terminal reaches the first value. The second video file is the video file generated when, after the wearable device receives second prompt information, the distance between the wearable device and the terminal reaches the second value. The wearable device acquires the geographic coordinates of the current environment in real time and uploads them to the server in real time, so that the server calculates the distance between the wearable device and the terminal in real time according to the geographic coordinates. The wearable device also obtains the distance between itself and the terminal in real time, and uploads the generated first video file to the server when the distance reaches the first value.
Further, during this process, the first camera of the wearable device continues to shoot the environment in front of the wearable device in real time and generates corresponding video files. The second camera of the wearable device continues to shoot face images in the environment behind the wearable device in real time and generates corresponding pictures. The wearable device continues to acquire its current geographic coordinates in real time and upload them to the server, and the server continues to calculate the distance between the wearable device and the terminal. When the distance reaches the second value, the wearable device uploads the generated second video file and pictures to the server.
Furthermore, a plurality of thresholds for the distance between the wearable device and the terminal can be preset, and the corresponding video file is uploaded when each threshold is reached; that is, a corresponding video file is generated for each distance node between the wearable device and the terminal. For example, the preset thresholds may be 100 meters and 200 meters, and corresponding 100-meter and 200-meter video files are generated according to these thresholds, so that the guardian corresponding to the terminal can learn the activity range and environment of the wearable device user from the video files.
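The multi-node logic above may be sketched as a function that, given the current distance, reports which preset nodes are newly due for a video upload. The 100 m / 200 m values come from the example; the function and argument names are assumptions:

```python
def due_distance_nodes(distance_m, preset_nodes, already_uploaded):
    # Return the preset distance nodes (e.g. 100 m, 200 m) that the current
    # distance has reached and whose video files have not yet been uploaded.
    return [node for node in sorted(preset_nodes)
            if distance_m >= node and node not in already_uploaded]
```

After each upload, the handled node would be added to `already_uploaded` so the same node does not trigger twice.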
Further, if the distance between the wearable device and the terminal exceeds the preset first value but has not reached the second value for longer than a preset duration, this indicates that the wearable device user may be lingering somewhere, playing, or being detained by a suspicious person. The first camera of the wearable device then continues to shoot the environment in front of the wearable device and generates corresponding video files at a preset frequency. The second camera of the wearable device continues to shoot face images in the environment behind the wearable device in real time and generates corresponding pictures. The generated video files and pictures are uploaded to the server at the preset frequency. In case of an accident, this ensures that the guardian of the terminal can be provided with relevant clues as well as detailed video information and picture information about suspicious persons.
In one embodiment, the method further comprises: acquiring the activity duration in the current environment; and before the distance between the wearable device and the terminal reaches the first value, or before it reaches the second value, if the activity duration exceeds a preset threshold, generating a third video file at a preset frequency and uploading the third video file to the server.
The activity duration of the wearable device in the current environment is obtained. When the distance between the wearable device and the terminal has not reached the first value or the second value but the activity duration in the current environment exceeds the preset threshold, a third video file is generated at the preset frequency, which ensures that the wearable device keeps generating video files at that frequency. The wearable device uploads the generated third video file to the server. Each video file generated by the wearable device may be a small video file of a preset duration, so that the transmission time is not prolonged by an oversized file, which further improves the video file transmission rate.
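The periodic third-video condition can be sketched as a predicate combining the dwell-time check with the preset recording period. All names and the time units (seconds) are illustrative assumptions:

```python
def should_generate_third_video(activity_s, activity_threshold_s,
                                last_upload_s, period_s, now_s):
    # Once the activity duration in the current environment exceeds the preset
    # threshold, generate a third video file each time the preset period elapses.
    return activity_s > activity_threshold_s and (now_s - last_upload_s) >= period_s
```

A device loop would evaluate this each tick, recording and uploading a short clip whenever the predicate holds and then resetting `last_upload_s`.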
In one embodiment, as shown in fig. 5, a schematic diagram of the internal structure of a wearable device is provided. The wearable device comprises a processor, a non-volatile storage medium, an internal memory, a network interface, and a camera device, which are connected by a system bus. The non-volatile storage medium of the wearable device stores an operating system and a computer program; the computer program is used to implement the positioning early warning reminding method. The processor of the wearable device is configured to provide computing and control capabilities and to perform the positioning early warning reminding method. The internal memory provides an environment for running the computer program in the non-volatile storage medium. The network interface is used for accessing the internet to communicate with the application server, for example to acquire geographic coordinates and upload video files and pictures. The camera device of the wearable device may be a miniature camera, such as a pinhole camera. The wearable device may be a smart wearable device such as a hat, a wearable headset, or a hairpin.
In one embodiment, a computer device is provided, which may be a server. The computer device comprises a processor and a memory, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of: acquiring the geographic coordinates of the wearable device and the geographic coordinates of the terminal; calculating the distance between the wearable device and the terminal according to these coordinates, and generating first prompt information when the distance reaches a first threshold value; sending the first prompt information to the wearable device, so that the wearable device starts a camera according to the first prompt information and shoots the current environment through the camera to generate a corresponding video file; receiving the video file uploaded by the wearable device; and sending the first prompt information to the terminal, receiving a query request sent by the terminal, acquiring the corresponding video file according to the query request, and returning the video file to the terminal.
In one embodiment, the camera comprises a first camera and a second camera; the video file is a video file corresponding to the front environment, obtained by the wearable device through shooting with the first camera; and the processor, when executing the computer program, further performs the steps of: receiving a picture uploaded by the wearable device, the picture corresponding to the rear environment and obtained by the wearable device shooting a face in the rear environment with the second camera; and when a query request sent by the terminal is received, returning the video file corresponding to the front environment and the picture corresponding to the rear environment to the terminal.
In one embodiment, the video files include a first video file and a second video file, and the processor, when executing the computer program, further performs the steps of: acquiring the geographic coordinates uploaded in real time by the wearable device and the terminal; calculating the distance between the wearable device and the terminal in real time according to the geographic coordinates; when the distance between the wearable device and the terminal reaches a second threshold value, generating second prompt information and sending it to the wearable device, so that the wearable device uploads the first video file; and when the distance between the wearable device and the terminal reaches a third threshold value, generating third prompt information and sending it to the wearable device, so that the wearable device uploads the second video file.
In one embodiment, the processor, when executing the computer program, further performs the steps of: if no video file corresponding to the query request sent by the terminal is found, returning a query-failure prompt message to the terminal; when a video file is received, generating a push message corresponding to the video file; and sending the push message to the terminal.
In one embodiment, a computer device is provided, which may be a wearable device. The computer device comprises a processor and a memory, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of: acquiring the geographic coordinates of the current position, and uploading the geographic coordinates to a server so that the server calculates the distance between the wearable device and the terminal according to the geographic coordinates; when the distance between the wearable device and the terminal reaches a first threshold value, receiving first prompt information sent by the server; starting a camera according to the first prompt information, shooting the current environment through the camera, and generating a corresponding video file; and uploading the video file to the server so that the server returns the video file to the terminal upon receiving a query request sent by the terminal.
In one embodiment, the camera comprises a first camera and a second camera; the processor, when executing the computer program, further performs the steps of: shooting a front environment through a first camera to generate a video file corresponding to the front environment; after the step of receiving the first prompt message sent by the server, the method further comprises the following steps: starting a second camera according to the prompt information, capturing a face image in the rear environment where the second camera is located, shooting the captured face image, and generating a picture corresponding to the rear environment; and uploading the video file corresponding to the front environment and the picture corresponding to the rear environment to a server.
In one embodiment, the video files include a first video file and a second video file, and the processor, when executing the computer program, further performs the steps of: acquiring the geographic coordinates of the current environment, and uploading them to the server so that the server calculates the distance between the wearable device and the terminal in real time according to the geographic coordinates; when the distance between the wearable device and the terminal reaches a first value, generating a first video file and uploading it to the server; and continuing to shoot the environment through the camera, generating a second video file when the distance between the wearable device and the terminal reaches a second value, and uploading the second video file to the server.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the activity duration in the current environment; and before the distance between the wearable device and the terminal reaches the first value, or before it reaches the second value, if the activity duration exceeds a preset threshold, generating a third video file at a preset frequency and uploading the third video file to the server.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of: acquiring the geographic coordinates of the wearable device and the geographic coordinates of the terminal; calculating the distance between the wearable device and the terminal according to these coordinates, and generating first prompt information when the distance reaches a first threshold value; sending the first prompt information to the wearable device, so that the wearable device starts a camera according to the first prompt information and shoots the current environment through the camera to generate a corresponding video file; receiving the video file uploaded by the wearable device; and sending the first prompt information to the terminal, receiving a query request sent by the terminal, acquiring the corresponding video file according to the query request, and returning the video file to the terminal.
In one embodiment, the camera comprises a first camera and a second camera; the video file is a video file corresponding to the front environment, obtained by the wearable device through shooting with the first camera; and the computer program, when executed by the processor, further performs the steps of: receiving a picture uploaded by the wearable device, the picture corresponding to the rear environment and obtained by the wearable device shooting a face in the rear environment with the second camera; and when a query request sent by the terminal is received, returning the video file corresponding to the front environment and the picture corresponding to the rear environment to the terminal.
In one embodiment, the video files comprise a first video file and a second video file, and the computer program, when executed by the processor, further performs the steps of: acquiring the geographic coordinates uploaded in real time by the wearable device and the terminal; calculating the distance between the wearable device and the terminal in real time according to the geographic coordinates; when the distance between the wearable device and the terminal reaches a second threshold value, generating second prompt information and sending it to the wearable device, so that the wearable device uploads the first video file; and when the distance between the wearable device and the terminal reaches a third threshold value, generating third prompt information and sending it to the wearable device, so that the wearable device uploads the second video file.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: if no video file corresponding to the query request sent by the terminal is found, returning a query-failure prompt message to the terminal; when a video file is received, generating a push message corresponding to the video file; and sending the push message to the terminal.
In one embodiment, a computer-readable storage medium is provided, storing a computer program that, when executed by a processor, causes the processor to perform the steps of: acquiring the geographic coordinates of the current position, and uploading the geographic coordinates to a server so that the server calculates the distance between the wearable device and the terminal according to the geographic coordinates; when the distance between the wearable device and the terminal reaches a first threshold value, receiving first prompt information sent by the server; starting a camera according to the first prompt information, shooting the current environment through the camera, and generating a corresponding video file; and uploading the video file to the server so that the server returns the video file to the terminal upon receiving a query request sent by the terminal.
In one embodiment, the camera comprises a first camera and a second camera; the computer program when executed by the processor further performs the steps of: shooting a front environment through a first camera to generate a video file corresponding to the front environment; after the step of receiving the first prompt message sent by the server, the method further comprises the following steps: starting a second camera according to the prompt information, capturing a face image in the rear environment where the second camera is located, shooting the captured face image, and generating a picture corresponding to the rear environment; and uploading the video file corresponding to the front environment and the picture corresponding to the rear environment to a server.
In one embodiment, the video files comprise a first video file and a second video file, and the computer program, when executed by the processor, further performs the steps of: acquiring the geographic coordinates of the current environment, and uploading them to the server so that the server calculates the distance between the wearable device and the terminal in real time according to the geographic coordinates; when the distance between the wearable device and the terminal reaches a first value, generating a first video file and uploading it to the server; and continuing to shoot the environment through the camera, generating a second video file when the distance between the wearable device and the terminal reaches a second value, and uploading the second video file to the server.
In one embodiment, the computer program, when executed by the processor, further performs the steps of: acquiring the activity duration in the current environment; and before the distance between the wearable device and the terminal reaches the first value, or before it reaches the second value, if the activity duration exceeds a preset threshold, generating a third video file at a preset frequency and uploading the third video file to the server.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium; when the computer program is executed, the processes of the embodiments of the methods described above can be included. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or a random access memory (RAM).
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments express only several implementations of the present invention, and although their description is specific and detailed, they should not be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the inventive concept, and these fall within the scope of the present invention. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A positioning early warning reminding method comprises the following steps:
acquiring geographic coordinates of wearable equipment and geographic coordinates of a terminal;
calculating the distance between the wearable device and the terminal according to the geographic coordinates of the wearable device and the geographic coordinates of the terminal, and generating first prompt information when the distance reaches a first threshold value;
sending the first prompt information to the wearable device, so that the wearable device starts a camera according to the first prompt information, shoots the current environment through the camera, generates a picture including face image features, and continuously generates video files at a preset frequency according to the distance and the activity duration; wherein the camera comprises a first camera and a second camera, the video files are shot by the wearable device with the first camera at the preset frequency, and the picture is a picture including face image features shot by the wearable device with the second camera;
receiving a picture comprising facial image features uploaded by the wearable device;
receiving a first video file uploaded by the wearable device according to the first threshold; when the distance reaches a second threshold value, receiving a second video file uploaded by the wearable device according to the second threshold value;
and sending the first prompt information to the terminal, receiving a query request sent by the terminal, acquiring a corresponding video file according to the query request, and returning the video file to the terminal.
2. The method of claim 1, wherein the camera comprises a first camera and a second camera; the video file is a video file corresponding to the front environment and obtained by the wearable device through shooting by using a first camera; the method further comprises the following steps:
receiving a picture uploaded by the wearable device, wherein the picture is a picture corresponding to a rear environment and obtained by shooting a human face in the rear environment by the wearable device through a second camera;
and when receiving a query request sent by the terminal, returning the video file corresponding to the front environment and the picture corresponding to the rear environment to the terminal.
3. The method of claim 1, further comprising:
when the distance between the wearable device and the terminal reaches the first threshold and does not reach the second threshold, acquiring the activity duration of the wearable device in the environment after the distance reaches the first threshold;
and when the activity duration exceeds a preset threshold, acquiring pictures or video files uploaded by the wearable equipment according to a preset frequency.
4. The method of claim 3, further comprising:
if no video file corresponding to the query request sent by the terminal is found, returning a query-failure prompt message to the terminal;
when the video file is received, generating a push message corresponding to the video file;
and sending the push message to the terminal.
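The storage, push-message, and query-failure behaviour of claims 3 and 4 can be illustrated with a minimal sketch. All names and the storage schema here are invented for illustration; the patent does not specify a data model.

```python
# Hypothetical sketch of claims 3-4: store uploaded videos, push a message to
# the terminal on receipt, and answer terminal queries (schema is invented).
video_store = {}  # device_id -> list of stored video file names

def store_video(device_id, filename, notify):
    """On receiving a video file, store it and push a message (claim 4)."""
    video_store.setdefault(device_id, []).append(filename)
    notify(f"new video available: {filename}")  # push message to the bound terminal

def handle_query(device_id):
    """Answer a terminal's query: return videos, or a query-failure prompt (claim 4)."""
    files = video_store.get(device_id)
    if not files:
        return {"status": "failure", "message": "no matching video file found"}
    return {"status": "ok", "files": files}
```

A terminal that queries before any upload receives the failure prompt; once a video arrives, the push message and the subsequent query both succeed.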
5. A positioning early-warning reminding method, comprising the following steps:
acquiring the geographic coordinates of the current position, and uploading the geographic coordinates to a server so that the server calculates the distance to a terminal according to the geographic coordinates;
when the distance to the terminal reaches a first threshold, receiving first prompt information sent by the server;
starting a camera according to the first prompt information, shooting the environment through the camera to generate a picture including facial image features, and continuously generating video files at a preset frequency according to the distance and the activity duration; the camera comprises a first camera and a second camera, the video files are shot by the wearable device with the first camera at the preset frequency, and the picture including facial image features is shot by the wearable device with the second camera;
uploading the picture including facial image features to the server;
uploading a first video file generated according to the first threshold to the server; continuously shooting the environment through the camera, and when the distance reaches a second threshold, generating a second video file and uploading it to the server, so that the server returns the video files to the terminal upon receiving a query request sent by the terminal.
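The wearable-side sequence of claim 5 can be sketched as a small state machine. This is an assumed illustration: the class, method names, and file names are invented, and the real device would drive actual camera hardware rather than upload fixed names.

```python
# Hypothetical wearable-side sketch of claim 5: on the first prompt the device
# starts both cameras, uploads a rear-camera face picture and a front-camera
# video, and uploads a further video when the second threshold is reached.
class Wearable:
    def __init__(self, upload):
        self.upload = upload      # callable that transmits data to the server
        self.started = False      # cameras not running until the first prompt

    def on_first_prompt(self):
        """First prompt received: start cameras, upload picture and first video."""
        self.started = True
        self.upload(("picture", "rear_face.jpg"))    # second (rear) camera
        self.upload(("video", "front_clip_1.mp4"))   # first (front) camera

    def on_second_threshold(self):
        """Second threshold reached: upload a second front-camera video file."""
        if self.started:
            self.upload(("video", "front_clip_2.mp4"))
```

The ordering mirrors the claim: the face picture and first video follow the first prompt, and the second video is only produced if recording has already started.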
6. The method of claim 5, wherein the camera comprises a first camera and a second camera; shooting the front environment through the first camera to generate a video file corresponding to the front environment;
after the step of receiving the first prompt message sent by the server, the method further includes:
starting the second camera according to the first prompt information, capturing a face image in the rear environment where the second camera is located, shooting the captured face image, and generating a picture corresponding to the rear environment;
and uploading the video file corresponding to the front environment and the picture corresponding to the rear environment to the server.
7. The method of claim 5, wherein the video file comprises a first video file and a second video file, the method further comprising:
acquiring geographic coordinates in the environment, and uploading them to the server so that the server calculates the distance to the terminal in real time according to the geographic coordinates;
when the distance to the terminal reaches the first threshold, generating a first video file and uploading the first video file to the server;
and continuously shooting the environment through the camera, generating a second video file when the distance to the terminal reaches the second threshold, and uploading the second video file to the server.
8. The method of claim 7, further comprising:
acquiring the activity duration in the environment;
and if the activity duration exceeds a preset threshold before the distance to the terminal reaches the first threshold or the second threshold, generating third video files at a preset frequency and uploading them to the server.
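The dwell-time rule of claim 8 can be illustrated numerically. The duration threshold and upload period below are invented example values, and the function merely computes when the periodic "third" video files would be generated.

```python
# Illustrative sketch of claim 8: if the wearable lingers past a preset
# activity duration before crossing the next distance threshold, it uploads
# periodic third video files. Both constants are assumed example values.
DWELL_LIMIT_S = 600   # assumed preset activity-duration threshold (seconds)
PERIOD_S = 60         # assumed preset upload frequency (seconds)

def third_video_times(dwell_seconds):
    """Return the elapsed times (seconds) at which third video files are uploaded."""
    if dwell_seconds <= DWELL_LIMIT_S:
        return []  # activity duration has not exceeded the preset threshold
    # one upload per period once the dwell limit has been exceeded
    return list(range(DWELL_LIMIT_S + PERIOD_S, dwell_seconds + 1, PERIOD_S))
```

So a device that stays put for 13 minutes with these values would produce uploads at 11, 12, and 13 minutes, while a device that crosses a threshold within 10 minutes produces none.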
9. A computer device comprising a processor and a memory, the memory storing a computer program that, when executed by the processor, causes the processor to perform the steps of the method of any one of claims 1 to 4 or 5 to 8.
10. A computer-readable storage medium, storing a computer program which, when executed by a processor, causes the processor to perform the steps of the method of any one of claims 1 to 4 or 5 to 8.
CN201710886619.0A 2017-09-27 2017-09-27 Positioning early warning reminding method, computer equipment and storage medium Active CN107800998B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710886619.0A CN107800998B (en) 2017-09-27 2017-09-27 Positioning early warning reminding method, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN107800998A CN107800998A (en) 2018-03-13
CN107800998B true CN107800998B (en) 2020-07-21

Family

ID=61532180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710886619.0A Active CN107800998B (en) 2017-09-27 2017-09-27 Positioning early warning reminding method, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN107800998B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108632391A (en) * 2018-05-21 2018-10-09 北京小米移动软件有限公司 information sharing method and device
CN111028475A (en) * 2019-12-31 2020-04-17 深圳市海瑞泰克电子有限公司 Intelligent help-seeking ring

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002042084A (en) * 2000-07-28 2002-02-08 Sony Corp Attachment to portable device, portable device, and method for detecting separation state
CN105096521A (en) * 2015-04-30 2015-11-25 广东欧珀移动通信有限公司 Safety pre-warning method and related device
CN105516480A (en) * 2015-11-30 2016-04-20 芜湖美智空调设备有限公司 Method and system for preventing target person from being lost
CN105680892A (en) * 2016-03-31 2016-06-15 北京小米移动软件有限公司 Correlation article reminding method and device
CN105869348A (en) * 2016-04-15 2016-08-17 北京小米移动软件有限公司 Alarming method, alarming device and monitoring equipment

Similar Documents

Publication Publication Date Title
US20200211162A1 (en) Automated Obscurity For Digital Imaging
CN107592498B (en) Cell management method based on intelligent camera and related equipment
US9788065B2 (en) Methods and devices for providing a video
CN107800998B (en) Positioning early warning reminding method, computer equipment and storage medium
JP2017505497A (en) Information push delivery method and apparatus
GB2495699A (en) Sending activity information and location information from at least one mobile device to identify points of interest
CN104050774A (en) Worn type electronic watch ring piece with video function
US10282619B2 (en) Processing apparatus, storage medium, and control method
US10182770B2 (en) Smart devices that capture images and sensed signals
Sogi et al. SMARISA: a raspberry pi based smart ring for women safety using IoT
KR101584983B1 (en) system, method, computer program and server for finding missing object based on beacon
Prashanth et al. Research and development of a mobile based women safety application with real-time database and data-stream network
US20140176329A1 (en) System for emergency rescue
KR20150045465A (en) Method for provisioning a person with information associated with an event
WO2017156793A1 (en) Geographic location-based video processing method
Sen et al. ProTecht–Implementation of an IoT based 3–Way Women Safety Device
JP2015233204A (en) Image recording device and image recording method
US20200074839A1 (en) Situational awareness platform, methods, and devices
KR20190127101A (en) Security service system and method based on cloud
KR101457529B1 (en) Video Live Broadcasting System Based on Spatial Information Using Smart Phone and Operating Method thereof
Rashmi et al. Video surveillance system and facility to access Pc from remote areas using smart phone
US20170162032A1 (en) Personal security
CN106656725B (en) Intelligent terminal, server and information updating system
KR101651931B1 (en) Life Log Device Comprising Record-Preventing Function and Method thereof
KR101483447B1 (en) Information processing system and method for processing information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant