CN115346333A - Information prompting method and device, AR glasses, cloud server and storage medium - Google Patents

Information prompting method and device, AR glasses, cloud server and storage medium

Info

Publication number
CN115346333A
Authority
CN
China
Prior art keywords
information
target object
human body
real-time environment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210820867.6A
Other languages
Chinese (zh)
Inventor
曾亮
涂贤玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing SoundAI Technology Co Ltd
Original Assignee
Beijing SoundAI Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing SoundAI Technology Co Ltd filed Critical Beijing SoundAI Technology Co Ltd
Priority to CN202210820867.6A priority Critical patent/CN115346333A/en
Publication of CN115346333A publication Critical patent/CN115346333A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/02 Alarms for ensuring the safety of persons
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/12 Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention provides an information prompting method and device, AR glasses, a cloud server and a storage medium. The method comprises the following steps: the AR glasses collect real-time environment images and upload them to a cloud server; the glasses then receive and display AR information sent by the cloud server. The AR information is used for representing a target object; the target object is determined based on a plurality of continuous real-time environment images and meets the following preset condition: the time interval between a first moment and the current moment exceeds a preset threshold, wherein the first moment is the moment at which the target object last appeared in the continuous real-time environment images. Through the AR information, the target object can be quickly determined, and there is no need to continuously count people by head counts, roll calls and the like to discover whether anyone is lost, so the efficiency of finding lost persons can be improved.

Description

Information prompting method and device, AR glasses, cloud server and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an information prompting method and device, AR glasses, a cloud server and a storage medium.
Background
In daily life, losing things is common, and incidents of children and elderly people going missing also happen from time to time; preventing such losses has long been a troubling problem. For example, when a school organizes activities outside the school, teachers must pay constant attention to prevent students from getting lost.
At present, in order to discover whether a person is lost, the number of people must be counted continuously by head counts, roll calls and the like. This approach greatly reduces the efficiency of finding lost persons.
Disclosure of Invention
The invention provides an information prompting method and device, AR glasses, a cloud server and a storage medium, which address the defect in the prior art that, in order to discover whether a person is lost, the number of people must be counted continuously by head counts, roll calls and the like, which greatly reduces the efficiency of finding lost persons. By quickly determining the target object through AR information, the efficiency of finding lost persons can be improved.
The invention provides an information prompting method, which is applied to AR glasses and comprises the following steps:
acquiring a real-time environment image, and uploading the real-time environment image to a cloud server;
receiving and displaying AR information sent by the cloud server; the AR information is used for representing a target object, the target object is determined based on a plurality of continuous real-time environment images, and the target object meets the following preset condition: the time interval between a first moment and the current moment exceeds a preset threshold, wherein the first moment is the moment at which the target object last appeared in the continuous real-time environment images.
According to an information prompting method provided by the present invention, the AR information includes: human body features and/or basic information of the target object. The human body features include at least one of the following: a face image, face features, gait features and body shape features; the basic information includes at least one of the following: name, contact information, name of an emergency contact, and contact information of the emergency contact.
According to an information prompting method provided by the present invention, the AR information further includes: current location information of the target object; the current position information of the target object is obtained based on the intelligent wearable device worn by the target object.
According to an information prompting method provided by the present invention, the AR information further includes: a target path; the target path is a path from current position information of the AR glasses to current position information of the target object.
According to the information prompting method provided by the invention, the method further comprises the following steps:
receiving and displaying the number of objects sent by the cloud server, wherein the number of objects is the number of objects appearing in the real-time environment image.
According to the information prompting method provided by the invention, the method further comprises the following steps:
acquiring a human body image of each object and acquiring input basic information of the object;
uploading the human body image and the basic information of the object to the cloud server, so that the cloud server extracts the human body features of the object based on the human body image of the object and stores the human body features and the basic information of the object.
The invention also provides an information prompting method, which is applied to the cloud server and comprises the following steps:
acquiring a real-time environment image acquired by AR glasses;
determining a time corresponding to the last occurrence of each object in a plurality of continuous real-time environment images based on the plurality of continuous real-time environment images;
determining the object as a target object under the condition that the interval between the moment and the current moment is greater than a preset threshold value;
and generating AR information used for representing the target object, and sending the AR information to the AR glasses.
According to an information prompting method provided by the invention, for each object, whether the object appears in the real-time environment image is judged through the following steps:
comparing the preset human body features of the object with the human body features corresponding to each person in the real-time environment image;
determining that the object appears in the real-time environment image when the human body features match;
and determining that the object does not appear in the real-time environment image when the human body features do not match.
According to the information prompting method provided by the invention, the human body features comprise at least one of the following: a face image, face features, gait features and body shape features.
According to an information prompting method provided by the present invention, the generating of the AR information for characterizing the target object includes:
searching basic information of the target object from pre-stored basic information of a plurality of objects;
and generating AR information based on the human body characteristics and/or the basic information of the target object.
According to an information prompting method provided by the invention, the basic information comprises at least one of the following: name, contact information, name of an emergency contact, and contact information of the emergency contact.
According to an information prompting method provided by the present invention, the generating AR information based on the human body features and/or the basic information of the target object includes:
acquiring current position information of the target object, recorded by intelligent wearable equipment worn by the target object;
and generating AR information based on the human body characteristics and/or basic information of the target object and the current position information of the target object.
According to an information prompting method provided by the present invention, the generating AR information based on the human body features and/or basic information of the target object and the current position information of the target object includes:
acquiring current position information of the AR glasses;
determining a path from the current position information of the AR glasses to the current position information of the target object based on the map network data;
generating AR information based on the human body features and/or basic information of the target object, the current position information of the target object, and the path.
According to an information prompting method provided by the invention, the method further comprises the following steps:
counting the number of objects appearing in the real-time environment image;
sending the number of objects appearing in the real-time environment image to the AR glasses.
According to the information prompting method provided by the invention, the method further comprises the following steps:
for each object, acquiring the human body image and basic information of the object uploaded by the AR glasses;
extracting the human body features of the object based on the human body image of the object;
and storing the human body features and the basic information of the object.
According to an information prompting method provided by the invention, the method further comprises the following steps:
acquiring a monitoring video of a current environment;
performing video processing on the monitoring video of the current environment to obtain first human body features of a plurality of objects;
and supplementing and/or updating the stored human body features of each object based on the first human body features of that object.
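The supplement/update step above can be sketched in Python as follows. The exponential-moving-average blend and the `alpha` smoothing factor are illustrative assumptions; the patent does not specify how stored features are merged with newly extracted ones.

```python
def update_features(stored, new_features, alpha=0.3):
    """Supplement and/or update stored human body features with features
    extracted from surveillance video: feature kinds not yet stored are
    added, existing ones are blended with an exponential moving average.
    `alpha` is an illustrative smoothing factor, not from the patent."""
    updated = dict(stored)
    for kind, vec in new_features.items():
        if kind not in updated:
            updated[kind] = list(vec)  # supplement a missing feature kind
        else:
            updated[kind] = [(1 - alpha) * old + alpha * new
                             for old, new in zip(updated[kind], vec)]  # update in place
    return updated

# A stored face embedding is refreshed; a gait embedding is newly supplemented.
stored = {"face": [0.5, 0.5]}
new = {"face": [1.0, 0.0], "gait": [0.2, 0.9]}
print(update_features(stored, new))
```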
The invention also provides an information prompting device, which comprises:
the acquisition and uploading module is used for acquiring a real-time environment image and uploading the real-time environment image to the cloud server;
the receiving and displaying module is used for receiving and displaying the AR information sent by the cloud server; the AR information is used for representing a target object, the target object is determined based on a plurality of continuous real-time environment images, and the target object meets the following preset condition: the time interval between a first moment and the current moment exceeds a preset threshold, wherein the first moment is the moment at which the target object last appeared in the continuous real-time environment images.
The invention also provides an information prompting device, which comprises:
the image acquisition module is used for acquiring a real-time environment image acquired by the AR glasses;
the time determining module is used for determining the time corresponding to the last occurrence of each object in the continuous real-time environment images on the basis of the continuous real-time environment images;
the object determination module is used for determining the object as a target object under the condition that the interval between the moment and the current moment is greater than a preset threshold value;
and the information generating and sending module is used for generating AR information used for representing the target object and sending the AR information to the AR glasses.
The invention also provides AR glasses, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements any of the information prompting methods on the AR glasses side described above.
The invention also provides a cloud server, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements any of the information prompting methods on the cloud server side described above.
The present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program that, when executed by a processor, implements the information presentation method described in any of the above AR glasses sides, or the information presentation method described in any of the cloud server sides.
The present invention also provides a computer program product including a computer program that, when executed by a processor, implements the information presentation method described in any of the above AR glasses sides, or the information presentation method described in any of the cloud server sides.
According to the information prompting method and device, the AR glasses, the cloud server and the storage medium provided by the invention, the AR glasses collect real-time environment images and upload them to the cloud server. Because the time interval between the moment at which the target object last appeared in a plurality of continuous real-time environment images and the current moment exceeds the preset threshold, the target object is considered to be a lost person. The AR glasses receive and display the AR information, sent by the cloud server, used for representing the target object (namely the lost person). Through the AR information, the target object can be quickly determined, and there is no need to continuously count people by head counts, roll calls and the like to discover whether anyone is lost, so the efficiency of finding lost persons can be improved.
Drawings
In order to more clearly illustrate the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a schematic flow chart of an information prompting method provided by the present invention;
FIG. 2 is a second schematic flow chart of an information prompting method provided by the present invention;
FIG. 3 is a schematic view of an interaction flow of an information prompting method provided by the present invention;
FIG. 4 is a schematic structural diagram of an information prompt device according to the present invention;
FIG. 5 is a second schematic structural diagram of an information prompting device provided by the present invention;
FIG. 6 is a schematic diagram of the structure of AR glasses provided by the present invention;
fig. 7 is a schematic structural diagram of a cloud server provided in the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The information prompting method of the present invention is described below with reference to fig. 1 to 3.
Referring to fig. 1, fig. 1 is a schematic flow chart of an information prompting method provided by the present invention. As shown in fig. 1, the information prompting method provided by the present invention is applied to Augmented Reality (AR) glasses, and may include the following steps:
step 101, acquiring a real-time environment image, and uploading the real-time environment image to a cloud server.
Step 102, receiving and displaying AR information sent by the cloud server; the AR information is used for representing a target object, the target object is determined based on a plurality of continuous real-time environment images, and the target object meets the following preset condition: the time interval between a first moment and the current moment exceeds a preset threshold, where the first moment is the moment at which the target object last appeared in the continuous real-time environment images.
In step 101, the real-time environment image is an image of the current real environment acquired by the AR glasses in real time, and the AR glasses continuously acquire the real-time environment image and upload the real-time environment image to the cloud server.
In practice, a supervisor wears the AR glasses and continuously looks around. While the supervisor looks around, the AR glasses constantly collect real-time environment images and upload them to the cloud server.
In step 102, the plurality of continuous real-time environment images may be the real-time environment images uploaded by the AR glasses and received by the cloud server before the current moment, whose acquisition times are continuous and which are arranged in acquisition order.
The first moment may be the moment at which the target object last appeared in the plurality of continuous real-time environment images before the current moment. The target object "appearing" in a real-time environment image means that the preset human body features of the target object match the human body features of some person in the image; the target object "not appearing" means that the preset human body features of the target object match none of the persons in the image.
When the time interval between the first moment and the current moment exceeds the preset threshold, the target object has not appeared within the monitoring field of view of the AR glasses for longer than the preset threshold, and the target object is considered to be a lost person.
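The lost-person condition described above can be sketched as a simple timestamp check. The threshold value and timestamps below are illustrative assumptions, not values from the patent.

```python
PRESET_THRESHOLD_S = 120.0  # illustrative threshold in seconds; the patent does not fix a value

def is_lost(last_seen_ts: float, now_ts: float,
            threshold_s: float = PRESET_THRESHOLD_S) -> bool:
    """Return True when the object has not appeared in the continuous
    real-time environment images for longer than the preset threshold."""
    return (now_ts - last_seen_ts) > threshold_s

# An object last seen 300 s ago exceeds a 120 s threshold, so it is flagged as lost.
print(is_lost(last_seen_ts=1000.0, now_ts=1300.0))  # True
```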
The AR information may be information in the form of AR animation, or information in the form of graphics and text, and the embodiment of the present invention is not particularly limited.
AR information is used to characterize target objects, such as: the AR information may be a face image and a name of the target object in the form of AR animation, and this embodiment is not particularly limited, and any information that can represent the target object may be the AR information.
In this step, since the time interval between the moment at which the target object last appeared in the continuous real-time environment images and the current moment exceeds the preset threshold, the target object is considered to be a lost person. A high-definition projector is arranged in the AR glasses; the AR glasses receive the AR information used for representing the target object and display it through the projector, so that the target object (the lost person) can be determined quickly.
In this embodiment, the AR glasses collect real-time environment images and upload them to the cloud server. Because the time interval between the moment at which the target object last appeared in a plurality of continuous real-time environment images and the current moment exceeds a preset threshold, the target object is considered to be a lost person. The AR information, sent by the cloud server and used for representing the target object (i.e. the lost person), is received and displayed through the high-definition projector. Through the AR information, the target object can be quickly determined without continuously counting people by head counts, roll calls and the like to discover whether anyone is lost, so the efficiency of finding lost persons can be improved.
Alternatively, the AR information may include any one of the following cases:
first, the AR information includes: physical characteristics and/or basic information of the target object. Wherein the human body characteristics include at least one of: the human face image, the human face characteristic, the gait characteristic and the body shape characteristic, and the basic information comprises at least one of the following items: name, contact address, name of the emergency contact, and contact address of the emergency contact.
In this embodiment, the AR glasses may receive and display AR information including the human body features and/or basic information of the target object; through these, the target object can be quickly determined.
Secondly, the AR information further includes: current position information of the target object; the current position information of the target object is obtained based on the intelligent wearable device worn by the target object. That is, the AR information includes not only the human body feature and/or the basic information of the target object but also the current location information of the target object.
In the embodiment, the intelligent wearable device worn by the target object is used for positioning the target object in real time. After the target object is quickly determined through the human body characteristics and/or the basic information of the target object in the AR information, the current position of the target object can be quickly determined through the current position information of the target object in the AR information, and the target object can be conveniently found in time.
Thirdly, the AR information further includes: a target path; the target path is a path from the current position information of the AR glasses to the current position information of the target object. That is, the AR information may include not only the human body feature and/or the basic information of the target object, the current position information of the target object, but also the target path.
In the embodiment, after the target object is quickly determined according to the human body characteristics and/or the basic information of the target object in the AR information and the current position of the target object is quickly determined according to the current position information of the target object in the AR information, the path for searching the target object can be quickly determined according to the target path in the AR information, so that the target object can be found in time conveniently.
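The target path from the current position of the AR glasses to the current position of the target object could, for example, be computed over the map network data with a standard shortest-path search. Dijkstra's algorithm is used here only as a plausible realization; the patent does not name an algorithm, and the waypoint graph below is entirely hypothetical.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra search over a map network given as
    {node: [(neighbor, distance), ...]}; returns (path, total distance)."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return path, cost
        if node in visited:
            continue
        visited.add(node)
        for nxt, dist in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + dist, nxt, path + [nxt]))
    return None, float("inf")

# Hypothetical waypoints between the AR glasses and the target object (distances in metres).
roads = {
    "glasses": [("gate", 50.0), ("plaza", 120.0)],
    "gate": [("plaza", 40.0), ("target", 200.0)],
    "plaza": [("target", 60.0)],
}
path, dist = shortest_path(roads, "glasses", "target")
print(path, dist)  # ['glasses', 'gate', 'plaza', 'target'] 150.0
```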
In some embodiments, the information prompting method further includes: receiving and displaying the number of objects sent by the cloud server; the number of objects is the number of objects appearing in the real-time environment image.
In the traditional scheme, the number of people must be counted by head counts, roll calls and the like, which is inefficient. In this embodiment, the AR glasses receive and display the number of objects appearing in the real-time environment image, and that number is counted by image recognition, so the efficiency of people counting can be greatly improved.
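At its simplest, the image-recognition head count might reduce to counting the distinct matched identities in one frame. The object IDs below are placeholders; a real system would obtain them from person detection and feature matching.

```python
def count_objects(detections):
    """Count distinct supervised objects recognized in one real-time
    environment image; `detections` is a list of matched object IDs,
    one per detected person (duplicates can arise from overlapping crops)."""
    return len(set(detections))

# Two students, one matched twice across overlapping crops, still count once each.
print(count_objects(["s01", "s02", "s01"]))  # 2
```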
In other embodiments, the information prompting method further includes: for each object, acquiring a human body image of the object and acquiring input basic information of the object; and uploading the human body image and the basic information of the object to a cloud server, so that the cloud server extracts the human body characteristics of the object based on the human body image of the object and stores the human body characteristics and the basic information of the object.
The human body image of each object may include a face image of the object, a gait video, and the like. After the AR glasses upload the human body image of each object to the cloud server, the cloud server can extract human body features of the object, such as face features, gait features and body shape features, based on the human body image, and store them.
Since the AR glasses can realize many functions, they can be regarded as a miniature mobile phone: the basic information of each object can be entered into the AR glasses, and after the AR glasses upload the basic information of each object to the cloud server, the cloud server stores it.
In this embodiment, the AR glasses may upload the human body image and the basic information of each object to be supervised to the cloud server in advance, and the cloud server may extract human body features such as a face feature, a gait feature, a body shape feature, and the like of the object based on the human body image of the object, so that the cloud server may store the human body features and the basic information of each object to be supervised in advance.
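A minimal sketch of this pre-enrollment flow follows, assuming a stand-in feature extractor; the real system would use face, gait and body-shape models, and the example record contents are hypothetical.

```python
class EnrollmentStore:
    """Sketch of the cloud server's pre-enrollment step: extract human body
    features from an uploaded body image and store them together with the
    object's basic information."""

    def __init__(self, extract_features):
        self.extract_features = extract_features  # stand-in for a real model
        self.records = {}

    def enroll(self, object_id, body_image, basic_info):
        features = self.extract_features(body_image)
        self.records[object_id] = {"features": features, "info": basic_info}
        return self.records[object_id]

# Hypothetical extractor: a tiny "embedding" derived from the image bytes.
store = EnrollmentStore(extract_features=lambda img: [len(img) % 7, len(img) % 11])
rec = store.enroll("s01", b"...face+gait frames...",
                   {"name": "Zhang San", "contact": "138-0000-0000"})  # placeholder data
print(rec["info"]["name"])  # Zhang San
```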
Referring to fig. 2, fig. 2 is a second schematic flow chart of the information prompting method provided by the present invention. As shown in fig. 2, the information prompting method provided by the present invention is applied to a cloud server, and may include the following steps:
step 201, acquiring a real-time environment image acquired by the AR glasses.
Step 202, determining a time corresponding to the last occurrence of each object in the plurality of continuous real-time environment images based on the plurality of continuous real-time environment images.
Step 203, determining that the object is the target object when the interval between that moment and the current moment is greater than a preset threshold.
Step 204, generating AR information for representing the target object, and sending the AR information to the AR glasses.
In step 201, the real-time environment image is an image of the current real environment acquired by the AR glasses in real time, and the AR glasses continuously acquire the real-time environment image and upload the real-time environment image to the cloud server. The cloud server acquires real-time environment images acquired by the AR glasses to obtain a plurality of continuous real-time environment images.
In step 202, the plurality of continuous real-time environment images may be the real-time environment images uploaded by the AR glasses and received by the cloud server before the current moment, whose acquisition times are continuous and which are arranged in acquisition order.
The moment corresponding to the last occurrence of each object in the plurality of real-time environment images may be the moment at which the object last appeared in the continuous real-time environment images before the current moment. Here, an object may be a person who needs supervision.
Optionally, the cloud server maintains a schedule for each object to be supervised, where the schedule records the time when the object appears in the consecutive real-time environment images each time, and based on the schedule, the cloud server may quickly determine the time corresponding to the last appearance of the object in the consecutive real-time environment images.
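The per-object schedule the description mentions could be maintained as a simple last-seen table, for example as below. The object IDs and timestamps are placeholders.

```python
class LastSeenTable:
    """Record the moment each supervised object last appeared in the
    continuous real-time environment images, and list objects whose
    absence exceeds the preset threshold."""

    def __init__(self):
        self.last_seen = {}

    def mark_seen(self, object_id, ts):
        # Called whenever the object's features match a person in a frame.
        self.last_seen[object_id] = ts

    def lost_objects(self, now, threshold_s):
        # Objects not seen for longer than the threshold are candidate lost persons.
        return [oid for oid, ts in self.last_seen.items() if now - ts > threshold_s]

table = LastSeenTable()
table.mark_seen("s01", 100.0)   # last seen 200 s before `now`
table.mark_seen("s02", 290.0)   # last seen 10 s before `now`
print(table.lost_objects(now=300.0, threshold_s=120.0))  # ['s01']
```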
In step 203, when the interval between the moment at which the object last appeared in the continuous real-time environment images and the current moment is greater than the preset threshold (that is, the object has not appeared within the monitoring field of view of the AR glasses for longer than the preset threshold), the cloud server determines that the object is a target object (a lost person).
In step 204, the AR information may be information in the form of AR animation, or information in the form of graphics and text, which is not limited in the embodiment of the present invention.
AR information is used to characterize a target object, for example: the AR information may be a face image and a name of the target object in the form of AR animation, and this embodiment is not particularly limited, and any information that can represent the target object may be the AR information.
In this step, the cloud server generates the AR information used to characterize the target object and sends it to the AR glasses. A high-definition projector is arranged inside the AR glasses, and the AR information characterizing the target object can be displayed through the high-definition projector, so that the target object (the lost person) can be determined quickly.
In this embodiment, the cloud server acquires the real-time environment images collected by the AR glasses. Because the interval between the moment corresponding to the last occurrence of the target object in the plurality of continuous real-time environment images and the current moment exceeds the preset threshold, the target object is considered a lost person. The cloud server generates AR information used to characterize the target object and sends it to the AR glasses, so that the AR glasses display the AR information through the high-definition projector and the target object (the lost person) can be determined quickly. There is no need to continuously count people by point counting, number reporting, and the like to find out whether anyone is lost, so the efficiency of finding lost persons can be improved.
Optionally, for each object, whether the object appears in the real-time environment image is determined as follows: comparing the preset human body features of the object against the human body features corresponding to each object in the real-time environment image; determining that the object appears in the real-time environment image if the human body features match; and determining that the object does not appear in the real-time environment image if the human body features do not match.
Specifically, the preset human body features of each object are the human body features, stored in the cloud server in advance, of each object requiring supervision. Optionally, the human body features include, but are not limited to, at least one of: a face image, face features, gait features, and body shape features.
The human body features corresponding to each object in the real-time environment image are the human body features of each object identified by the cloud server from the real-time environment image.
For each object, when the cloud server judges that the preset human body features of the object match the human body features of any object in the real-time environment image, it determines that the object appears in the real-time environment image.
For each object, when the cloud server judges that the preset human body features of the object match the human body features of none of the objects in the real-time environment image, it determines that the object does not appear in the real-time environment image.
In this embodiment, the cloud server performs feature comparison on preset human body features of each object and human body features corresponding to each object in the real-time environment image, so as to quickly determine whether the object appears in the real-time environment image.
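As an illustrative sketch of this comparison step, human body features may be represented as numeric vectors and matched by cosine similarity. The vector representation, the embedding model it implies, and the 0.8 match threshold are assumptions for illustration, not part of the original method:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def object_appears(preset_feature, frame_features, threshold=0.8):
    """True if any human body feature detected in the real-time environment
    image matches the object's pre-stored feature closely enough."""
    return any(cosine_similarity(preset_feature, f) >= threshold
               for f in frame_features)

# Usage: the first detected feature is close to the preset one, so the
# object is judged to appear in the image.
appears = object_appears([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]])
```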
Optionally, in step 204, generating the AR information used to characterize the target object includes: searching pre-stored basic information of a plurality of objects for the basic information of the target object; and generating the AR information based on the human body features and/or the basic information of the target object. Optionally, the basic information includes, but is not limited to, at least one of: a name, contact information, the name of an emergency contact, and the contact information of the emergency contact.
In this embodiment, the cloud server may generate the AR information including the human body feature and/or the basic information of the target object, and the AR glasses receive and display the AR information, that is, the target object may be quickly determined by the human body feature and/or the basic information of the target object in the AR information.
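A minimal sketch of assembling such an AR information payload from the pre-stored basic information is given below; the field names, the dictionary-based store, and the sample values are all hypothetical:

```python
# Hypothetical sketch: look up the target object's pre-stored basic
# information and combine it with human body features (here, a face
# image handle) into one AR information payload.
def generate_ar_info(target_id, basic_info_store, face_image=None):
    payload = {"target_id": target_id}
    if face_image is not None:
        payload["face_image"] = face_image
    info = basic_info_store.get(target_id)
    if info:
        payload.update(info)  # name, contact information, emergency contact
    return payload

# Usage with illustrative sample data.
store = {"obj-1": {"name": "Zhang San", "contact": "138-0000-0000"}}
payload = generate_ar_info("obj-1", store, face_image="face_obj1.png")
```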
Optionally, in the above embodiment, generating the AR information based on the human body features and/or the basic information of the target object includes: acquiring current position information of a target object, recorded by intelligent wearable equipment worn by the target object; AR information is generated based on the human body features and/or basic information of the target object and the current position information of the target object.
In this embodiment, the intelligent wearable device worn by the target object is used to locate the target object in real time. The cloud server may generate AR information including the human body features and/or basic information of the target object and the current position information of the target object, and the AR glasses receive and display the AR information. After the target object is quickly identified from the human body features and/or basic information in the displayed AR information, the current position of the target object can be quickly determined from the current position information in the AR information, which makes it convenient to find the target object in time.
Optionally, in the above embodiment, generating the AR information based on the human body features and/or the basic information of the target object and the current position information of the target object includes: acquiring current position information of the AR glasses; determining a path from the current position information of the AR glasses to the current position information of the target object based on the map network data; based on the human body features and/or basic information of the target object, current position information of the target object, and the path, AR information is generated.
Specifically, a route from the current position information of the AR glasses to the current position information of the target object is planned based on the map network data, with the current position information of the AR glasses as a start point and the current position information of the target object as an end point.
In this embodiment, the cloud server may generate AR information including the human body features and/or basic information of the target object, the current position information of the target object, and the path, and the AR glasses receive and display the AR information. After the target object is quickly identified from the human body features and/or basic information in the displayed AR information and its current position is quickly determined from the current position information in the AR information, the path to the target object can be quickly determined from the path in the AR information, which makes it convenient to find the target object in time.
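The path planning on map network data can be sketched with a standard shortest-path search. The graph encoding (adjacency lists of weighted edges) and the use of Dijkstra's algorithm are illustrative assumptions; the patent does not specify a particular routing algorithm:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over map network data given as
    {node: [(neighbor, edge_weight), ...]}. Returns the node list from
    start to goal, or None if the goal is unreachable."""
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(pq, (nd, nxt))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Usage: start at the AR glasses' position, end at the target object's
# position; the cheaper route via "A" (cost 3.0) is chosen over "B" (5.0).
graph = {"glasses": [("A", 1.0), ("B", 4.0)],
         "A": [("target", 2.0)],
         "B": [("target", 1.0)]}
route = shortest_path(graph, "glasses", "target")
```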
Optionally, the information prompting method further includes: counting the number of objects appearing in the real-time environment image; the number of objects appearing in the real-time environment image is sent to the AR glasses.
In the traditional scheme, the number of people needs to be counted by point counting, number reporting, and the like, which is inefficient. In this embodiment, the cloud server counts the number of objects appearing in the real-time environment image and sends that number to the AR glasses. Counting the objects appearing in the real-time environment image by image recognition can greatly improve the efficiency of head counting.
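Assuming the recognizer reports the object ID for each detection in a frame, the head count reduces to counting distinct IDs; the detection record layout here is hypothetical:

```python
# Illustrative sketch: count distinct supervised objects recognized in
# one real-time environment image. Duplicate detections of the same
# person (e.g. a face box and a body box) collapse to one count.
def count_objects_in_frame(detections):
    return len({d["object_id"] for d in detections})

# Usage: three detections but only two distinct objects.
n = count_objects_in_frame([{"object_id": "a"},
                            {"object_id": "b"},
                            {"object_id": "a"}])
```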
Optionally, the information prompting method further includes: acquiring the human body image and basic information of each object uploaded by the AR glasses; extracting the human body features of the object based on the human body image of the object; and storing the human body features and basic information of the object.
The human body image of each object may include a face image of the object, a gait video, and the like. The cloud server acquires the human body image of the object uploaded by the AR glasses and can extract and store human body features of the object, such as face features, gait features, and body shape features, based on the human body image. The cloud server also acquires and stores the basic information of each object.
In this embodiment, the cloud server obtains a human body image and basic information of each object to be supervised, which are uploaded by the AR glasses in advance, and can extract human body features such as a face feature, a gait feature, a body shape feature, and the like of the object based on the human body image of the object, and then pre-store the human body features and the basic information of each object to be supervised.
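This registration flow may be sketched as below. The feature extractor is passed in as a callable standing in for real face/gait/body-shape models, which this sketch does not implement; the class and method names are assumptions:

```python
# Hypothetical sketch of the cloud server's pre-storage step: extract
# features from each uploaded human body image and keep them alongside
# the object's basic information.
class FeatureStore:
    def __init__(self):
        self._features = {}
        self._basic_info = {}

    def register(self, object_id, human_images, basic_info, extractor):
        """Extract a feature from each human body image and store them
        together with the object's basic information."""
        self._features[object_id] = [extractor(img) for img in human_images]
        self._basic_info[object_id] = basic_info

    def features(self, object_id):
        return self._features.get(object_id, [])

    def basic_info(self, object_id):
        return self._basic_info.get(object_id)

# Usage with a toy extractor (len) in place of a real feature model.
store = FeatureStore()
store.register("obj-1", ["img1", "photo2"], {"name": "example"}, extractor=len)
```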
Optionally, the information prompting method further includes: acquiring a monitoring video of a current environment; performing video processing on a monitoring video of a current environment to obtain first human body characteristics of a plurality of objects; the stored physical characteristics of the subject are supplemented and/or updated based on the first physical characteristics of the subject.
The current environment may be an activity venue in which a plurality of objects currently in need of supervision are located.
The surveillance video of the current environment may be: a surveillance video shot by monitoring equipment installed in the activity venue where the plurality of objects currently requiring supervision are located.
The first human body features of each object may be: the human body features of the object extracted by the cloud server from the surveillance video of the current environment.
In this embodiment, the human body features pre-stored by the cloud server may have the following problems: 1) the stored gait features are not comprehensive; 2) because the face and body of an object requiring supervision change over time, the pre-stored face features, body shape features, and face image may no longer be usable. The cloud server performs video processing on the surveillance video of the current environment to obtain the first human body features of the plurality of objects. Based on the first human body features of each object, the pre-stored gait features of the object can be supplemented, and the face features, body shape features, and face image of the object can be updated, so that the human body features of each object stored in the cloud server are more comprehensive and accurate.
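A sketch of this supplement/update step is given below. The merge policy (gait features are appended, all other feature kinds are replaced) follows the description above, but the data layout and key names are illustrative assumptions:

```python
# Hypothetical sketch: merge first human body features extracted from
# the surveillance video into the stored features. Gait features are
# supplemented (appended); face/body-shape features and face images are
# updated (replaced). The original stored dict is left unmodified.
def merge_features(stored, first):
    merged = {k: (list(v) if isinstance(v, list) else v)
              for k, v in stored.items()}
    for kind, value in first.items():
        if kind == "gait":
            merged.setdefault("gait", []).append(value)  # supplement
        else:
            merged[kind] = value  # update
    return merged

# Usage: a new gait sample is added; the face feature is refreshed.
stored = {"gait": ["g1"], "face": "f_old"}
first = {"gait": "g2", "face": "f_new", "body_shape": "b1"}
merged = merge_features(stored, first)
```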
Referring to fig. 3, fig. 3 is an interactive flow diagram of the information prompting method provided by the present invention. As shown in fig. 3, the information prompting method provided by the present invention may include the following steps:
step 301, the AR glasses collect real-time environment images and upload the real-time environment images to a cloud server.
Step 302, the cloud server determines the time corresponding to the last occurrence of each object in the continuous real-time environment images based on the continuous real-time environment images; determining the object as a target object under the condition that the interval between the moment and the current moment is greater than a preset threshold value; and generating AR information for representing the target object, and sending the AR information to the AR glasses.
And 303, displaying the AR information sent by the cloud server by the AR glasses.
In step 301, the real-time environment image is an image of the current real environment acquired by the AR glasses in real time, and the AR glasses continuously acquire the real-time environment image and upload the real-time environment image to the cloud server.
During implementation, a supervisor can wear the AR glasses and continuously look around the surroundings. As the supervisor looks around, the AR glasses continuously collect real-time environment images and upload them to the cloud server.
In step 302, the plurality of continuous real-time environment images may be: a plurality of real-time environment images uploaded by the AR glasses and received by the cloud server before the current moment, whose acquisition times are continuous and which are arranged in order of acquisition time.
The moment corresponding to the last occurrence of each object in the plurality of continuous real-time environment images may be: the moment corresponding to the last time the object appears in the plurality of continuous real-time environment images before the current moment. The object may be a person requiring supervision.
Optionally, the cloud server maintains a schedule for each object to be supervised, where the schedule records the time when the object appears in the consecutive real-time environment images each time, and based on the schedule, the cloud server may quickly determine the time corresponding to the last appearance of the object in the consecutive real-time environment images.
When the interval between the moment corresponding to the last occurrence of the object in the plurality of continuous real-time environment images and the current moment is greater than a preset threshold, that is, when the object has not appeared within the monitoring field of view of the AR glasses for a period exceeding the preset threshold, the cloud server determines that the object is a target object (a lost person).
The AR information may be information in the form of AR animation, or information in the form of graphics and text, and the embodiment of the present invention is not particularly limited.
The AR information is used to characterize the target object. For example, the AR information may be a face image and the name of the target object presented in the form of an AR animation; this embodiment is not particularly limited, and any information that can represent the target object may serve as the AR information.
In this step, the cloud server determines the target object, generates AR information for characterizing the target object, and sends the AR information to the AR glasses.
In step 303, a high definition projector is disposed inside the AR glasses, and the AR glasses display AR information used for representing the target object through the high definition projector, so that the target object (lost person) can be determined quickly.
In this embodiment, the AR glasses collect real-time environment images and upload them to the cloud server. Based on the plurality of continuous real-time environment images, the cloud server determines the moment corresponding to the last occurrence of each object in those images; when the interval between that moment and the current moment is greater than the preset threshold, it determines that the object is a target object (a lost person), generates AR information used to characterize the target object, and sends the AR information to the AR glasses. The AR glasses receive the AR information and display it through the high-definition projector, so that the target object can be determined quickly. There is no need to continuously count people by point counting, number reporting, and the like to find out whether anyone is lost, so the efficiency of finding lost persons can be improved.
Optionally, in step 302, for each object, the cloud server determines whether the object appears in the real-time environment image as follows: comparing the preset human body features of the object against the human body features corresponding to each object in the real-time environment image; determining that the object appears in the real-time environment image if the human body features match; and determining that the object does not appear in the real-time environment image if the human body features do not match.
Specifically, the preset human body features of each object are the human body features, stored in the cloud server in advance, of each object requiring supervision. Optionally, the human body features include at least one of: a face image, face features, gait features, and body shape features.
The human body features corresponding to each object in the real-time environment image are the human body features of each object identified by the cloud server from the real-time environment image.
For each object, when the cloud server judges that the preset human body features of the object match the human body features of any object in the real-time environment image, it determines that the object appears in the real-time environment image.
For each object, when the cloud server judges that the preset human body features of the object match the human body features of none of the objects in the real-time environment image, it determines that the object does not appear in the real-time environment image.
In this embodiment, the cloud server performs feature comparison on preset human body features of each object and human body features corresponding to each object in the real-time environment image, so as to quickly determine whether the object appears in the real-time environment image.
Optionally, step 302 may include the following sub-steps: the cloud server searches pre-stored basic information of a plurality of objects for the basic information of the target object; and generates AR information based on the human body features and/or the basic information of the target object. Optionally, the basic information includes at least one of: a name, contact information, the name of an emergency contact, and the contact information of the emergency contact.
In this embodiment, the cloud server may generate the AR information including the human body feature and/or the basic information of the target object, and the AR glasses receive and display the AR information, that is, the target object may be quickly determined by the human body feature and/or the basic information of the target object in the AR information.
Optionally, step 302 may include the sub-steps of: the cloud server acquires current position information of a target object, recorded by intelligent wearable equipment worn by the target object; AR information is generated based on the human body features and/or basic information of the target object and current position information of the target object.
In this embodiment, the intelligent wearable device worn by the target object is used to locate the target object in real time. The cloud server may generate AR information including the human body features and/or basic information of the target object and the current position information of the target object, and the AR glasses receive and display the AR information. After the target object is quickly identified from the human body features and/or basic information in the displayed AR information, the current position of the target object can be quickly determined from the current position information in the AR information, which makes it convenient to find the target object in time.
Optionally, step 302 may include the following sub-steps: the method comprises the steps that a cloud server obtains current position information of AR glasses; determining a path from the current position information of the AR glasses to the current position information of the target object based on the map network data; based on the human body features and/or basic information of the target object, current position information of the target object, and the path, AR information is generated.
Specifically, a route from the current position information of the AR glasses to the current position information of the target object is generated based on the map network data, with the current position information of the AR glasses as a start point and the current position information of the target object as an end point.
In this embodiment, the cloud server may generate AR information including the human body features and/or basic information of the target object, the current position information of the target object, and the path, and the AR glasses receive and display the AR information. After the target object is quickly identified from the human body features and/or basic information in the displayed AR information and its current position is quickly determined from the current position information in the AR information, the path to the target object can be quickly determined from the path in the AR information, which makes it convenient to find the target object in time.
Optionally, the method further comprises: the cloud server counts the number of objects appearing in the real-time environment image; sending the number of objects appearing in the real-time environment image to the AR glasses; the AR glasses display the number of objects sent by the cloud server.
In the traditional scheme, the number of people needs to be counted by point counting, number reporting, and the like, which is inefficient. In this embodiment, the cloud server counts the number of objects appearing in the real-time environment image and sends that number to the AR glasses, and the AR glasses display the number sent by the cloud server. Counting the objects appearing in the real-time environment image by image recognition can greatly improve the efficiency of head counting.
Optionally, the method further includes: for each object, collecting a human body image of the object using the AR glasses and acquiring the entered basic information of the object; and uploading the human body image and basic information of the object to the cloud server. The cloud server acquires the human body image and basic information of the object uploaded by the AR glasses, extracts the human body features of the object based on the human body image of the object, and stores the human body features and basic information of the object.
The human body image of each object may include a face image of the object, a gait video, and the like. The cloud server acquires the human body image of the object uploaded by the AR glasses and can extract and store human body features of the object, such as face features, gait features, and body shape features, based on the human body image.
Since the AR glasses can implement many functions, they can be regarded as a miniature mobile phone: the basic information of each object can be entered into the AR glasses, and after the AR glasses upload the basic information of each object to the cloud server, the cloud server stores it.
In this embodiment, the AR glasses may upload the human body image and the basic information of each object to be supervised to the cloud server in advance, and the cloud server may extract human body features such as a face feature, a gait feature, and a body shape feature of the object based on the human body image of the object, and then prestore the human body features and the basic information of each object to be supervised.
Further, the information prompting method may further include: the cloud server acquires a monitoring video of the current environment; performing video processing on a monitoring video of a current environment to obtain first human body characteristics of a plurality of objects; and supplementing and/or updating the stored human body characteristics of each object based on the first human body characteristics of the object.
The current environment may be an activity venue in which a plurality of objects currently in need of supervision are located.
The surveillance video of the current environment may be: a surveillance video shot by monitoring equipment installed in the activity venue where the plurality of objects currently requiring supervision are located.
The first human body features of each object may be: the human body features of the object extracted by the cloud server from the surveillance video of the current environment.
In this embodiment, the human body features pre-stored by the cloud server may have the following problems: 1) the stored gait features are not comprehensive; 2) because the face and body of an object requiring supervision change over time, the pre-stored face features, body shape features, and face image may no longer be usable. The cloud server performs video processing on the surveillance video of the current environment to obtain the first human body features of the plurality of objects. Based on the first human body features of each object, the pre-stored gait features of the object can be supplemented, and the face features, body shape features, and face image of the object can be updated, so that the human body features of each object stored in the cloud server are more comprehensive and accurate.
The following describes the information presentation apparatus provided by the present invention, and the information presentation apparatus described below and the information presentation method described above may be referred to in correspondence with each other.
Referring to fig. 4, fig. 4 is a schematic structural diagram of an information prompt device provided by the present invention. As shown in fig. 4, the information prompt apparatus provided by the present invention may include:
the acquisition uploading module 10 is used for acquiring a real-time environment image and uploading the real-time environment image to a cloud server;
a receiving and displaying module 20, configured to receive and display the AR information sent by the cloud server; the AR information is used to characterize a target object, the target object is determined based on a plurality of continuous real-time environment images, and the target object meets the following preset condition: the time interval between a first moment and the current moment exceeds a preset threshold, where the first moment is the moment corresponding to the last occurrence of the target object in the plurality of continuous real-time environment images.
Optionally, the AR information includes: human body features and/or basic information of the target object, where the human body features include at least one of: a face image, face features, gait features, and body shape features, and the basic information includes at least one of: a name, contact information, the name of an emergency contact, and the contact information of the emergency contact.
Optionally, the AR information further includes: current location information of the target object; the current position information of the target object is obtained based on intelligent wearable equipment worn by the target object.
Optionally, the AR information further includes: a target path; the target path is a path from current position information of the AR glasses to current position information of the target object.
Optionally, the device further includes a number display module, and the number display module is specifically configured to:
receiving and displaying the number of objects sent by the cloud server; the number of objects is a number of objects appearing in the real-time environment image.
Optionally, the apparatus further includes an information uploading module, where the information uploading module is specifically configured to:
acquiring a human body image of each object, and acquiring the entered basic information of the object;
uploading the human body image and basic information of the object to the cloud server, so that the cloud server extracts the human body features of the object based on the human body image of the object and stores the human body features and basic information of the object.
The information prompting device provided by the embodiment of the invention can be AR glasses, can realize each process realized by the method embodiment of FIG. 1, achieves the same technical effect, and is not repeated here for avoiding repetition.
Referring to fig. 5, fig. 5 is a second schematic structural diagram of an information prompting device provided by the present invention. As shown in fig. 5, the information prompting device provided by the present invention may include:
the image acquisition module 30 is used for acquiring a real-time environment image collected by the AR glasses;
a time determination module 40, configured to determine, based on a plurality of consecutive real-time environment images, a time corresponding to a last occurrence of each object in the plurality of consecutive real-time environment images;
an object determining module 50, configured to determine that the object is a target object when an interval between the time and the current time is greater than a preset threshold;
an information generating and sending module 60, configured to generate AR information used for characterizing the target object, and send the AR information to the AR glasses.
Optionally, the time determination module 40 is specifically configured to:
comparing the preset human body features of each object against the human body features corresponding to each object in the real-time environment image;
determining that the object appears in the real-time environment image if the human body features match;
and determining that the object does not appear in the real-time environment image if the human body features do not match.
Optionally, the human body features include at least one of: a face image, face features, gait features, and body shape features.
Optionally, the information generating and sending module 60 is specifically configured to:
searching basic information of the target object from pre-stored basic information of a plurality of objects;
generating AR information based on the human body features and/or the basic information of the target object.
Optionally, the basic information comprises at least one of: name, contact address, name of the emergency contact, and contact address of the emergency contact.
Optionally, the information generating and sending module 60 is specifically configured to:
acquiring current position information of the target object, recorded by intelligent wearable equipment worn by the target object;
and generating AR information based on the human body characteristics and/or basic information of the target object and the current position information of the target object.
Optionally, the information generating and sending module 60 is specifically configured to:
acquiring current position information of the AR glasses;
determining a path from the current position information of the AR glasses to the current position information of the target object based on the map network data;
generating AR information based on the human body features and/or basic information of the target object, the current position information of the target object, and the path.
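The generation steps above can be sketched as follows; the dictionary layout and the `find_path` lookup are hypothetical simplifications (a real implementation would run a shortest-path search over the map road network data):

```python
def find_path(start, goal, road_network):
    """Hypothetical path lookup: road_network maps (start, goal)
    pairs to precomputed paths; fall back to a direct segment."""
    return road_network.get((start, goal), [start, goal])

def build_ar_info(target, target_pos, glasses_pos, road_network):
    """Assemble the AR information sent to the AR glasses: the target
    object's human body features and/or basic information, its current
    position (as reported by its wearable device), and a path to it."""
    return {
        "features": target.get("features"),
        "basic_info": target.get("basic_info"),
        "position": target_pos,
        "path": find_path(glasses_pos, target_pos, road_network),
    }
```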
Optionally, the apparatus further comprises a people counting module, and the people counting module is specifically configured to:
counting the number of objects appearing in the real-time environment image;
sending the number of objects appearing in the real-time environment image to the AR glasses.
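The people-counting step can be as simple as counting distinct matched identities per frame (the detection record format below is an assumption for illustration):

```python
def count_objects(detections):
    """Count distinct objects appearing in one real-time environment
    image; each detection carries the identity label assigned by the
    feature-comparison step."""
    return len({d["identity"] for d in detections})
```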
Optionally, the apparatus further includes an information pre-storing module, where the information pre-storing module is specifically configured to:
acquiring an environment image and basic information of each object uploaded by the AR glasses;
extracting human body features of the object based on the environment image of the object;
storing the human body features and the basic information of the object.
Optionally, the information pre-storing module is further configured to:
acquiring a monitoring video of a current environment;
performing video processing on the monitoring video of the current environment to obtain first human body characteristics of a plurality of objects;
supplementing and/or updating the stored human body features of the objects based on the first human body features of the objects.
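One way to sketch this supplement/update step (the averaging rule used to merge an existing feature vector is an illustrative choice only):

```python
def update_feature_store(store, extracted):
    """store: identity -> stored human body feature vector.
    extracted: first human body features obtained from the
    monitoring video of the current environment."""
    for identity, feature in extracted.items():
        if identity in store:
            # update: blend the stored and newly extracted vectors
            store[identity] = [(a + b) / 2
                               for a, b in zip(store[identity], feature)]
        else:
            # supplement: an object not seen before
            store[identity] = feature
    return store
```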
The information prompting device provided by the embodiment of the present invention may be a cloud server; it can implement each process of the method embodiment of fig. 2 and achieve the same technical effects, which are not repeated here to avoid repetition.
Fig. 6 illustrates a schematic physical structure diagram of AR glasses. As shown in fig. 6, the AR glasses may include: a processor 810, a communication interface 820, a memory 830, a communication bus 840 and a display 850, wherein the processor 810, the communication interface 820, the memory 830 and the display 850 communicate with each other via the communication bus 840.
Processor 810 may call logic instructions in memory 830 to perform an information prompting method comprising:
acquiring a real-time environment image, and uploading the real-time environment image to a cloud server;
receiving and displaying AR information sent by the cloud server; the AR information is used for representing a target object, the target object is determined based on a plurality of continuous real-time environment images, and the target object meets the following preset condition: the time interval between a first moment and the current moment exceeds a preset threshold, wherein the first moment is the moment of the last occurrence of the target object in the continuous real-time environment images.
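The preset condition above amounts to a last-seen timeout check on the cloud side, which might be sketched as follows (the 300-second threshold is an arbitrary illustrative value):

```python
def find_targets(last_seen, now, threshold_s=300.0):
    """last_seen: object id -> timestamp (seconds) of the object's
    last occurrence in the continuous real-time environment images.
    An object becomes a target once the interval between that first
    moment and the current moment exceeds the preset threshold."""
    return [obj for obj, t in last_seen.items() if now - t > threshold_s]
```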
In addition, the logic instructions in the memory 830 may be implemented as software functional units and, when sold or used as independent products, stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention, or the part thereof that contributes to the prior art, may be embodied in the form of a software product that is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program codes, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Fig. 7 illustrates a schematic physical structure diagram of a cloud server, and as shown in fig. 7, the cloud server may include: a processor 910, a network interface 920, and a memory 930.
Specifically, the cloud server according to the embodiment of the present invention further includes instructions or programs stored in the memory 930 and executable on the processor 910; the processor 910 invokes the instructions or programs in the memory 930 to perform the methods executed by the modules shown in fig. 5 and achieves the same technical effects, which are not repeated here to avoid repetition.
In another aspect, the present invention also provides a computer program product comprising a computer program that can be stored on a non-transitory computer-readable storage medium. When the computer program is executed by a processor, the computer can execute the processes of the information prompting method embodiment applied to the AR glasses or of the information prompting method embodiment applied to the cloud server, and can achieve the same technical effects, which are not repeated here to avoid repetition.
In another aspect, the present invention further provides a non-transitory computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the processes of the information prompting method embodiment applied to the AR glasses or of the information prompting method embodiment applied to the cloud server, and can achieve the same technical effects, which are not repeated here to avoid repetition.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (21)

1. An information prompting method is applied to AR glasses, and comprises the following steps:
acquiring a real-time environment image, and uploading the real-time environment image to a cloud server;
receiving and displaying AR information sent by the cloud server; the AR information is used for representing a target object, the target object is determined based on a plurality of continuous real-time environment images, and the target object meets the following preset condition: the time interval between a first moment and the current moment exceeds a preset threshold, wherein the first moment is the moment of the last occurrence of the target object in the continuous real-time environment images.
2. The information prompting method of claim 1, wherein the AR information comprises: human body features and/or basic information of the target object, the human body features comprising at least one of: a face image, face features, gait features and body shape features, and the basic information comprising at least one of: a name, contact information, the name of an emergency contact, and the contact information of the emergency contact.
3. The information prompting method of claim 2, wherein the AR information further comprises: current location information of the target object; the current position information of the target object is obtained based on the intelligent wearable device worn by the target object.
4. The information prompting method of claim 3, wherein the AR information further comprises: a target path; the target path is a path from current position information of the AR glasses to current position information of the target object.
5. The information prompting method according to claim 1, characterized in that the method further comprises:
receiving and displaying the number of objects sent by the cloud server, wherein the number of objects is the number of objects appearing in the real-time environment image.
6. The information prompting method according to claim 2, characterized in that the method further comprises:
acquiring a human body image of each object and acquiring input basic information of the object;
uploading the human body image and the basic information of the object to the cloud server, so that the cloud server extracts the human body characteristics of the object based on the human body image of the object and stores the human body characteristics and the basic information of the object.
7. An information prompting method is applied to a cloud server, and comprises the following steps:
acquiring a real-time environment image acquired by AR glasses;
determining a time corresponding to the last occurrence of each object in a plurality of continuous real-time environment images based on the plurality of continuous real-time environment images;
determining the object as a target object under the condition that the interval between the moment and the current moment is greater than a preset threshold value;
and generating AR information used for representing the target object, and sending the AR information to the AR glasses.
8. An information presentation method as claimed in claim 7, wherein, for each object, it is determined whether the object appears in the real-time environment image by:
comparing, for each object, the human body features corresponding to the object in the real-time environment image against the preset human body features of the object;
determining that the object appears in the real-time environment image when the human body features match;
and determining that the object does not appear in the real-time environment image when the human body features do not match.
9. The information prompting method according to claim 8, wherein the human body features comprise at least one of: a face image, face features, gait features, and body shape features.
10. The information prompting method of any one of claims 7-9, wherein generating the AR information for characterizing the target object comprises:
searching basic information of the target object from pre-stored basic information of a plurality of objects;
and generating AR information based on the human body characteristics and/or the basic information of the target object.
11. The information prompting method according to claim 10, wherein the basic information comprises at least one of: a name, contact information, the name of an emergency contact, and the contact information of the emergency contact.
12. The information prompting method according to claim 11, wherein the generating AR information based on the human body feature and/or the basic information of the target object includes:
acquiring current position information of the target object, recorded by intelligent wearable equipment worn by the target object;
generating AR information based on the human body features and/or the basic information of the target object and the current position information of the target object.
13. The information prompting method according to claim 12, wherein the generating AR information based on the human body feature and/or the basic information of the target object and the current position information of the target object includes:
acquiring current position information of the AR glasses;
determining a path from the current position information of the AR glasses to the current position information of the target object based on the map road network data;
generating AR information based on the human body features and/or basic information of the target object, the current position information of the target object, and the path.
14. The information prompting method according to claim 8, further comprising:
counting the number of objects appearing in the real-time environment image;
sending the number of objects appearing in the real-time environment image to the AR glasses.
15. The information prompting method according to claim 7, further comprising:
acquiring, for each object, a human body image and basic information of the object uploaded by the AR glasses;
extracting human body features of the object based on the human body image of the object;
storing the human body features and the basic information of the object.
16. An information prompting method as defined in claim 15, further comprising:
acquiring a monitoring video of a current environment;
performing video processing on the monitoring video of the current environment to obtain first human body characteristics of a plurality of objects;
supplementing and/or updating the stored human body features of the objects based on the first human body features of the objects.
17. An information presentation device, comprising:
the acquisition uploading module is used for acquiring a real-time environment image and uploading the real-time environment image to the cloud server;
the receiving and displaying module is used for receiving and displaying the AR information sent by the cloud server; the AR information is used for representing a target object, the target object is determined based on a plurality of continuous real-time environment images, and the target object meets the following preset condition: the time interval between a first moment and the current moment exceeds a preset threshold, wherein the first moment is the moment of the last occurrence of the target object in the continuous real-time environment images.
18. An information presentation device, comprising:
the image acquisition module is used for acquiring a real-time environment image acquired by the AR glasses;
the time determining module is used for determining the time corresponding to the last occurrence of each object in the continuous real-time environment images on the basis of the continuous real-time environment images;
the object determination module is used for determining the object as a target object under the condition that the interval between the moment and the current moment is greater than a preset threshold value;
and the information generating and sending module is used for generating AR information used for representing the target object and sending the AR information to the AR glasses.
19. AR glasses comprising a memory, a processor and a computer program stored on said memory and executable on said processor, characterized in that said processor implements the information prompting method according to any one of claims 1 to 6 when executing said program.
20. A cloud server comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the information prompting method according to any one of claims 7 to 16 when executing the program.
21. A non-transitory computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the information presentation method according to any one of claims 1 to 6 or the information presentation method according to any one of claims 7 to 16.
CN202210820867.6A 2022-07-12 2022-07-12 Information prompting method and device, AR glasses, cloud server and storage medium Pending CN115346333A (en)

Published as CN115346333A on 2022-11-15.

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590212A (en) * 2017-08-29 2018-01-16 深圳英飞拓科技股份有限公司 The Input System and method of a kind of face picture
CN109040952A (en) * 2018-07-26 2018-12-18 北京联合大学 One kind being based on GPS positioning data processing, the old man of gsm module communication and infant missing preventing system
CN110363150A (en) * 2019-07-16 2019-10-22 深圳市商汤科技有限公司 Data-updating method and device, electronic equipment and storage medium
US20190384991A1 (en) * 2019-07-25 2019-12-19 Lg Electronics Inc. Method and apparatus of identifying belonging of user based on image information
CN111479055A (en) * 2020-04-10 2020-07-31 Oppo广东移动通信有限公司 Shooting method, shooting device, electronic equipment and storage medium
CN112088287A (en) * 2018-05-09 2020-12-15 万屋菊洋 Portable terminal device and search system
CN112733620A (en) * 2020-12-23 2021-04-30 深圳酷派技术有限公司 Information prompting method and device, storage medium and electronic equipment
CN113362221A (en) * 2021-04-29 2021-09-07 南京甄视智能科技有限公司 Face recognition system and face recognition method for entrance guard
CN113852740A (en) * 2021-09-18 2021-12-28 广东睿住智能科技有限公司 Anti-lost system and method and readable storage medium thereof



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination