CN112733620A - Information prompting method and device, storage medium and electronic equipment - Google Patents
Information prompting method and device, storage medium and electronic equipment
- Publication number
- CN112733620A (application number CN202011547756.XA)
- Authority
- CN
- China
- Prior art keywords
- target object
- image
- information
- target
- acquiring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
Abstract
The embodiments of the application disclose an information prompting method and apparatus, a storage medium, and an electronic device. The method comprises: acquiring a current environment image in a search mode for a target object; performing object recognition on the environment image and determining that the target object is recognized; determining prompt information for the target object; generating a target image based on the prompt information and the environment image; and outputting the target image. Adopting the embodiments of the application can improve the efficiency of object searching.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to an information prompting method and apparatus, a storage medium, and an electronic device.
Background
In daily life, people often need to search for objects, for example, glasses or bus cards that cannot be found. In other cases, a particular object is sought over a large outdoor area, for example medicinal herbs in mountainous regions or small animals in a jungle. In most such situations, the user relies on experience and searches for the article by visual inspection.
Disclosure of Invention
The embodiments of the application provide an information prompting method and apparatus, a storage medium, and an electronic device, which can improve the efficiency of object searching. The technical solutions of the embodiments of the application are as follows:
in a first aspect, an embodiment of the present application provides an information prompting method, which is applied to an AR display device, and the method includes:
acquiring a current environment image in a search mode for a target object;
performing object recognition on the environment image, and determining that the target object is recognized;
determining prompt information for the target object, generating a target image based on the prompt information and the environment image, and outputting the target image.
In a second aspect, an embodiment of the present application provides an information prompting apparatus, where the apparatus includes:
the image acquisition module is used for acquiring a current environment image in a search mode for a target object;
the object recognition module is used for performing object recognition on the environment image and determining that the target object is recognized;
and the image output module is used for determining prompt information for the target object, generating a target image based on the prompt information and the environment image, and outputting the target image.
In a third aspect, embodiments of the present application provide a computer storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the above-mentioned method steps.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may include: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the above-mentioned method steps.
The technical solutions provided by some embodiments of the application bring at least the following beneficial effects:
in one or more embodiments of the present application, the AR display device may acquire a current environment image in a search mode for a target object, perform object recognition on the environment image to determine that the target object is recognized, determine prompt information for the target object, generate a target image based on the prompt information and the environment image, and output the target image. In this way, relevant prompt information for the target object can be acquired and presented while the object is being searched for, so the user does not have to rely on inefficient visual inspection, which improves the efficiency of object searching and makes it easier to find the corresponding object. After the target object is found, its information (such as attributes and position) can be acquired and prompted to the user, improving the convenience and intelligence of the search process.
Drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application, and those skilled in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of an information prompting method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another information prompting method provided in the embodiment of the present application;
fig. 3 is a schematic structural diagram of an information prompt apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image acquisition module according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image capturing unit according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an object identification module according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of another information prompt device provided in the embodiment of the present application;
fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of an operating system and a user space provided in an embodiment of the present application;
FIG. 10 is an architectural diagram of the android operating system of FIG. 8;
FIG. 11 is an architectural diagram of the IOS operating system of FIG. 8.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
In the description of the present application, it is to be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. In the description of the present application, it is noted that, unless explicitly stated or limited otherwise, "including" and "having" and any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus. The specific meaning of the above terms in the present application can be understood in a specific case by those of ordinary skill in the art. Further, in the description of the present application, "a plurality" means two or more unless otherwise specified. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
In daily life, items are typically sought by visual inspection based on user experience. However, when searching by visual inspection, objects are easily missed by the human eye, and the whole searching process is time-consuming. On the other hand, finding items over a large outdoor area is also accompanied by unknown hazards, such as the presence of dangerous objects (wild beasts, dangerous plants) and the like.
The present application will be described in detail with reference to specific examples.
In one embodiment, as shown in fig. 1, an information prompting method is proposed. The method can be implemented by means of a computer program and can run on an information prompting device based on the von Neumann architecture. The computer program may be integrated into an application or may run as a separate tool-type application. The information prompting device may be an AR display device.
AR (Augmented Reality) technology skillfully fuses virtual information with the real world: computer-generated virtual information such as text, images, three-dimensional models, music, and video is applied to the real world after simulation, thereby 'augmenting' the real world. AR display devices using this technology (e.g., AR glasses, AR helmets) have been widely used in entertainment, medical care, shopping, and education, and users can obtain convenient services for daily life from them.
Specifically, the information prompting method comprises the following steps:
step S101: acquiring a current environment image in a search mode aiming at a target object;
the target object is a target object to be searched which is determined before the AR display device starts a search mode; in some embodiments, the number of target objects includes a plurality of target objects, and the target objects may include, but are not limited to, a first object expected to be searched for with low risk and a second object expected to be avoided with high risk during the search process; further, the target object is usually determined before the search mode is turned on. In some embodiments, the user may determine only the first object on the AR display device, and the AR display device may intelligently match the second object based on the search scene corresponding to the first object, and if the first object is a herb, the second object with danger may be determined based on the herb search scene (usually a mountain area) corresponding to the first object. Further, the target object may also include only the first object desired by the user, such as an object input by the user as the target object.
In a specific implementation scenario, the AR display device may be applied to medicinal herb collection, where the herb to be searched for (i.e., the first object) needs to be found quickly, but the search scene is complex and highly dangerous animals and plants (i.e., the second object) may be present. A collector may execute the information prompting method of the present application using the AR display device and, by presetting the target objects, specify the low-risk first object to be searched for (e.g., a herb) and the high-risk second object to be avoided during the search, so as to find the desired object in time while avoiding dangerous objects.
In a possible implementation, before starting the search mode, the AR display device may first receive at least one reference object input by the user and establish an object search library based on the input reference objects, acquiring object feature information of each reference object as the library is built, where the object feature information includes a combination of one or more of an image feature, an attribute feature, a description feature, and an odor feature. The object search library may be stored remotely; for example, the AR display device may store it on a cloud server.
The AR display device may present the at least one reference object to the user, receive a selection operation on a target object among them, and then turn on the search mode for the target object.
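As an illustration of the object search library and target selection described above, the following Python sketch shows one way such a library could be organized. All class and field names are hypothetical; the patent does not specify any API.

```python
# A minimal sketch of the object search library; names are illustrative.

class ObjectSearchLibrary:
    def __init__(self):
        self._entries = {}  # reference object name -> feature record

    def register(self, name, image_feature=None, attribute=None,
                 description=None, odor=None):
        # Store a combination of one or more feature types per reference object.
        self._entries[name] = {
            "image": image_feature,
            "attribute": attribute,
            "description": description,
            "odor": odor,
        }

    def features_of(self, name):
        return self._entries[name]

library = ObjectSearchLibrary()
library.register("herb_A", image_feature=[0.12, 0.87],
                 description="broad leaf, red stem")
library.register("snake_B", attribute="dangerous",
                 description="venomous, avoid")

# The user selects a target object from the registered references,
# which switches the device into the search mode for that object.
target = "herb_A"
target_features = library.features_of(target)
```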
The search mode is associated with hardware and/or software functions of the AR display device. After the search mode is started, the AR display device can start the corresponding sensors to acquire the current environment image.
furthermore, the environment image can be generated by a multi-sensor information fusion technique, which analyzes and synthesizes data from different sensors (information sources), such as a thermal imaging sensor, a three-dimensional sensor, and a wide-angle sensor, to synthesize the corresponding environment image. By fully utilizing the high-speed computation of the AR display device and the complementarity of multi-source information, the quality of the image information can be improved, and the accuracy, reliability, and completeness of the environment image are significantly better than those obtainable from any single sensor. A higher-quality environment image can therefore be acquired, allowing the target object in the environment to be identified quickly.
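The following is a minimal sketch of pixel-level multi-sensor fusion, under the assumption that the frames from the different sensors are already registered to a common resolution; the sensors, weights, and function names are illustrative, not taken from the patent.

```python
import numpy as np

def fuse_frames(frames, weights):
    """Weighted average of co-registered single-channel frames."""
    stack = np.stack([w * f.astype(np.float32) for f, w in zip(frames, weights)])
    fused = stack.sum(axis=0) / sum(weights)
    return np.clip(fused, 0, 255).astype(np.uint8)

# Placeholder frames standing in for thermal, wide-angle, and depth-derived data.
thermal = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
wide    = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
depth   = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

environment_image = fuse_frames([thermal, wide, depth], weights=[0.3, 0.5, 0.2])
```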
Step S102: performing object recognition on the environment image, and determining that the target object is recognized;
in a possible embodiment, the environment image usually contains multiple environment objects, such as scene objects, plant objects, and animal objects. An environment object classifier may be created in advance and used to detect or classify the environment object labels of an input image, that is, to identify the environment objects in the environment image and then judge whether the environment image contains the target object. If the target object is among the identified environment objects, it is determined that the target object is recognized.
The environment object classifier is a machine learning model (such as a neural network model) trained for image classification: an image is input into the classifier, which recognizes and classifies it and determines an environment object label for it from a given set of environment object classes.
After the environment object classifier is created, a large amount of image data is acquired, and image sets of different content types are assembled, such as a set of landscape images, a set of vegetation images, and a set of architectural images. For each content image set, the common features among the images, i.e., feature vectors, are extracted. The feature vectors are then input into the environment object classifier for training; during training, the expected error between the classifier's actual output value and its expected output value is calculated, the classifier's parameters are adjusted based on that error, and the trained environment object classifier is obtained when training finishes.
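A minimal sketch of this training loop, assuming PyTorch and a small fully connected classifier, is given below; the dimensions, class set, and data are placeholders.

```python
import torch
import torch.nn as nn

feature_dim, num_classes = 128, 4  # e.g., landscape / vegetation / building / other
classifier = nn.Sequential(nn.Linear(feature_dim, 64), nn.ReLU(),
                           nn.Linear(64, num_classes))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.01)

feature_vectors = torch.randn(32, feature_dim)          # extracted common features
expected_labels = torch.randint(0, num_classes, (32,))  # expected output values

for epoch in range(10):
    logits = classifier(feature_vectors)       # actual output
    loss = criterion(logits, expected_labels)  # error vs. expected output
    optimizer.zero_grad()
    loss.backward()                            # propagate the error
    optimizer.step()                           # adjust the classifier parameters
```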
Alternatively, when training the environment object classifier, a training method based on Dynamic Time Warping (DTW), a training method based on Vector Quantization (VQ), a training method based on a Hidden Markov Model (HMM) over the image signal sequence, or the like may be used.
Further, the environment object classifier may be a deep-learning-based detection model, such as a Cascade Convolutional Neural Network (Cascade CNN), a Faster Region-based Convolutional Neural Network (Faster R-CNN), or the RetinaFace detection model built on the traditional object detection network RetinaNet. Compared with traditional image detection methods (such as rigid-template methods), the features extracted by a deep neural network have stronger robustness and descriptive power.
Step S103: determining prompt information for the target object, generating a target image based on the prompt information and the environment image, and outputting the target image.
The prompt information is used to remind the current user that the target object has been recognized. In some embodiments, the prompt information includes associated information for the target object, such as attribute features, description features, and odor features; in some embodiments, the prompt information is marking information for the target object (e.g., a color mark, a graphic mark); in some embodiments, the prompt information also includes the current position information (e.g., bearing, distance) of the target object in the environment.
The prompt information may contain data corresponding to various types of information sources; an information source may be, for example, an odor source, meaning that after the object is recognized, odor-related data for the object can be included in the prompt information.
in one possible implementation, the environment image frame corresponding to the moment the target object is recognized may be acquired, and the prompt information may then be displayed on that frame; for example, the prompt information can be rendered into the environment image, so that a target image displaying the prompt information is synthesized over the environment image. The target image is then output and displayed in the visual display area of the AR display device.
In a feasible implementation, the AR display device may capture real-time images of the real scene in front through its image capture device and display them in the visual display area. Using the captured real-scene image data and the optical display portion of the AR display device (e.g., AR glasses), a virtual display medium (i.e., optical virtual prompt information) can be superimposed onto the real scene observed by the user, presenting a target image in which the prompt information is overlaid on the current, real-time environment image frame, thereby achieving virtual-real fusion. In practical applications, the spatial position of the recognized target object is determined based on the environment image (the target environment image frame corresponding to the moment the target object is recognized), and a virtual position calculation function is invoked based on this spatial position to calculate the display position information of the prompt information (the information to be superimposed) in the visual display area, which displays the currently acquired real-scene image. If the prompt information includes a marker box for the target object, a first display position of the marker box in the visual display area is determined; if it includes introductory text for the target object, a second display position of the text is determined; and if it includes relative position information for the target object, a third display position of that information is determined. The real-scene image and the prompt information are then superimposed on the visual display area based on the display position information, forming the target image in which the prompt information is overlaid on the current real-time environment image frame, achieving virtual-real fusion and reminding the user that the target object has been found.
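The following sketch illustrates one way the three display positions could be computed; project_to_display() stands in for the virtual position calculation function, whose internals the patent does not describe, and all geometry is assumed.

```python
def project_to_display(spatial_position):
    # Hypothetical: map a 3D position in the scene to 2D display pixels.
    x, y, _z = spatial_position
    return int(320 + 100 * x), int(240 - 100 * y)

def layout_prompt(spatial_position, box_size=(80, 60)):
    cx, cy = project_to_display(spatial_position)
    w, h = box_size
    marker_box = (cx - w // 2, cy - h // 2, w, h)             # first display position
    intro_text_pos = (marker_box[0], marker_box[1] + h + 8)   # second, below the box
    relative_info_pos = (marker_box[0], marker_box[1] - 20)   # third, above the box
    return marker_box, intro_text_pos, relative_info_pos

box, text_pos, rel_pos = layout_prompt(spatial_position=(0.4, -0.1, 2.0))
```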
In the embodiments of the application, the AR display device may acquire a current environment image in a search mode for a target object, perform object recognition on the environment image to determine that the target object is recognized, determine prompt information for the target object, generate a target image based on the prompt information and the environment image, and output the target image. In this way, relevant prompt information for the target object can be acquired and presented while the object is being searched for, so the user does not have to rely on inefficient visual inspection, which improves the efficiency of object searching and makes it easier to find the corresponding object. After the target object is found, its information (such as attributes and position) can be acquired and prompted to the user, improving the convenience and intelligence of the search process.
Referring to fig. 2, fig. 2 is a schematic flowchart of another embodiment of an information prompting method provided in the present application. Specifically, the method comprises the following steps:
step S201: and under a search mode aiming at the target object, acquiring the image characteristics of the target object, and carrying out image scanning on the current environment.
The image feature may be feature information of the target object, which is acquired by the AR display device when the target object is determined to be searched, and the AR display device may acquire the image feature (texture feature, color feature, structural feature, appearance feature, located environment feature, and the like) in an object search library.
In a possible implementation, after acquiring the image features of the target object, the AR display device performs targeted image scanning based on those features. During scanning, it may acquire, periodically or in real time, at least one frame of reference image of the current environment; a reference image can be understood as an environment image frame extracted or captured from the real scene during the scanning process. By recognizing each frame of reference image, the device identifies whether the reference image contains reference image features matching the expected image features, and the pose of the target object is then predicted in a targeted manner based on the feature matching result, i.e., the most probable pose of the target object is estimated; for example, pose information is predicted for the direction, position, and orientation where the target object is likely to be.
In image feature matching, environment image features are extracted from the reference image, the matching degree between the reference image features and the target's image features is calculated, an image region with a high matching degree is determined in the reference image, and the pose of the target object is then predicted in a targeted manner based on that region, yielding the predicted pose information (predicted orientation, predicted distance, predicted position, etc.).
The image feature matching process may be carried out by a pre-trained object recognition model: the reference image is input into the object recognition model, which outputs an object recognition result that may be a combination of one or more of object orientation, object identifier, and object similarity;
an image region with a high matching degree in the reference image can be determined based on the object matching result, and the pose of the target object is then predicted in a targeted manner based on that region to obtain the predicted pose information. Further, after determining the image region, the AR display device may start a positioning component (e.g., an infrared positioning component, a thermal imaging component, or a three-dimensional positioning component) to actually measure, in the environment, the orientation, distance, or position of the target corresponding to the high-matching-degree image region, and use the measurement as the predicted pose information.
Further, the AR display device adjusts the current image scanning parameters based on the predicted pose information, where adjusting the image scanning parameters comprises adjusting at least one of the capture angle, position, displacement, focal length, scanning frequency, resolution, and lens magnification; the current environment is then scanned with the adjusted image scanning parameters.
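As a concrete illustration, the sketch below uses OpenCV template matching to find a high-matching-degree region and then biases the scan parameters toward it; the scan-parameter structure and thresholds are hypothetical, not specified by the patent.

```python
import cv2
import numpy as np

reference = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
template = reference[100:140, 200:260]  # stands in for the target's image feature

# Find the region of the reference frame that best matches the template.
result = cv2.matchTemplate(reference, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

scan_params = {"center": (320, 240), "zoom": 1.0, "frequency_hz": 2.0}
if max_val > 0.6:  # a high-matching-degree region was found
    scan_params["center"] = max_loc       # aim subsequent frames at the region
    scan_params["zoom"] = 2.0             # adjust the lens magnification
    scan_params["frequency_hz"] = 10.0    # scan more often near the prediction
```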
The adjustment of the image scanning parameters may be periodic: an interval time is set, and the parameters are adjusted once at each interval.
Step S202: acquiring the environment image currently captured for the target object, performing object recognition on the environment image, and determining that the target object is recognized;
in one possible embodiment, the environment image is input into the object recognition model, which outputs an object recognition result including, but not limited to, at least one of object orientation, object identifier, and object similarity;
it is determined from the object recognition result that the target object is recognized: if the object identifier corresponds to the target object, the target object is determined to be recognized; and if the object similarity is greater than a preset similarity threshold, the target object is determined to be recognized.
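A minimal sketch of these two decision rules follows; the field names and threshold are illustrative, not mandated by the patent.

```python
SIMILARITY_THRESHOLD = 0.8  # hypothetical preset similarity threshold

recognition_result = {"object_id": "herb_A", "similarity": 0.91,
                      "orientation": "north-east"}

def target_recognized(result, target_id):
    # Rule 1: the recognized identifier corresponds to the target object.
    if result.get("object_id") == target_id:
        return True
    # Rule 2: the similarity exceeds the preset threshold.
    return result.get("similarity", 0.0) > SIMILARITY_THRESHOLD

assert target_recognized(recognition_result, "herb_A")
```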
In practical applications, the terminal may create an initial object recognition model, obtain all or some sample images from an existing image database, and/or capture sample images in the actual environment using a device with a photographing function. After a large number of sample images are obtained, they are preprocessed, including digitization, geometric transformation, normalization, smoothing, and restoration enhancement, to eliminate irrelevant information in the sample images; image features are then extracted and input into the initial object recognition model, which is trained to obtain the trained object recognition model.
The terminal can obtain all or some of the sample images from image databases such as the CUHK-PQ and AVA datasets.
Optionally, the object recognition model may be implemented as a combination of one or more of a Convolutional Neural Network (CNN) model, a Deep Neural Network (DNN) model, a Recurrent Neural Network (RNN) model, an embedding model, a Gradient Boosting Decision Tree (GBDT) model, a Logistic Regression (LR) model, and the like, and an error back-propagation algorithm may be introduced on top of the existing neural network model for optimization, which can improve the recognition accuracy of the neural-network-based initial object recognition model.
Step S203: measuring relative position information with respect to the target object, determining a danger index of the target object, and determining object association information for the target object based on the danger index.
The object association information comprises at least one of attribute feature information, description feature information, odor feature information, and operation feature information.
Specifically, the relative position information with respect to the target object is measured, and the object association information for the target object is acquired.
The relative position information can be understood as the relative position, relative orientation, relative distance, and the like between the AR display device and the target object, and is generally characterized in terms of longitude and latitude, coordinates, directions, orientations, and distances.
The relative position information is determined based on a corresponding position acquisition technology, including but not limited to wireless positioning (e.g., infrared measurement, ultrasonic measurement) technologies, sensor technologies, image processing technologies, and the like, wherein:
wireless positioning technologies include, but are not limited to, satellite positioning technology, infrared indoor positioning technology, ultrasonic positioning technology, Bluetooth technology, radio frequency identification technology, ultra-wideband technology, ZigBee technology, and the like;
the sensor technology judges the position of the target object by using a sensor capable of sensing position, such as a proximity sensor;
the image processing technology obtains position information and the like by performing the expected processing on a position image captured by a camera.
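As one concrete example of characterizing the relative position information, the following sketch derives distance and bearing from two latitude/longitude fixes (e.g., from satellite positioning); the haversine formulation is a standard choice, not one mandated by the patent.

```python
import math

def relative_position(dev_lat, dev_lon, obj_lat, obj_lon):
    """Return (distance in meters, bearing in degrees) from device to object."""
    R = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(dev_lat), math.radians(obj_lat)
    dp = math.radians(obj_lat - dev_lat)
    dl = math.radians(obj_lon - dev_lon)
    # Haversine distance.
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing from device toward object.
    bearing = math.degrees(math.atan2(
        math.sin(dl) * math.cos(p2),
        math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)))
    return distance, bearing % 360

dist_m, bearing_deg = relative_position(30.0000, 120.0000, 30.0005, 120.0005)
```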
In an embodiment of the application, the danger index is determined based on at least one data source among the object type of the target object, the relative position information, and the environment image;
optionally, a danger index model may be trained in advance. The output of the preset danger index model relates to the labeling of danger index values during model training; in the training stage, the pre-acquired data (the object type of the target object, the relative position information, the environment image, and the like) can be labeled according to actual requirements. The labeling may cover scene information and parameter index values. Scene information labels may include whether the environment is indoor or outdoor, indoor environment information (such as a shopping mall, dining room, or classroom), whether stairs or elevators are nearby, whether rivers, puddles, ditches, manhole covers, or collapsed areas are nearby in urban areas or in the wild, whether the ambient light is insufficient, and other information relating to environmental conditions that may threaten the current user's safety. Such information may include the location, number, type, and size of obstacles, the distance between an obstacle and the user, the movement speed of an obstacle, and so on; obstacles may include, for example, people, animals, trees, railings, poles, road shoulders, ropes, boxes, stones, and other living things or objects that may pose a threat to the current user's safety.
It can be understood that different danger indexes correspond to different types of associated information, so that the object association information is obtained based on all the associated information types corresponding to the danger index, where the associated information types include one or more of object description, object attributes, object operation information, object danger level, danger avoiding operation, danger avoiding route, and the like.
It can be understood that the lower the danger index, the lower the degree of danger to the user of the current AR display device; accordingly, different danger indexes correspond to different object association information.
Further, a mapping between reference danger indexes and reference associated information types is pre-stored; after the danger index is determined, the associated information type corresponding to it can be determined based on the mapping, thereby obtaining the object association information determined for the target object.
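The pre-stored mapping could be realized, for example, as a simple lookup table; the thresholds and information-type lists below are hypothetical.

```python
# Bands of danger index values mapped to associated-information types.
DANGER_INFO_MAPPING = [
    (0.3, ["object description", "object attributes"]),                # low danger
    (0.7, ["object description", "object danger level"]),              # medium
    (1.0, ["object danger level", "danger avoiding operation",
           "danger avoiding route"]),                                  # high danger
]

def associated_info_types(danger_index):
    # Return the information types for the first band containing the index.
    for upper_bound, info_types in DANGER_INFO_MAPPING:
        if danger_index <= upper_bound:
            return info_types
    return DANGER_INFO_MAPPING[-1][1]

print(associated_info_types(0.85))  # -> danger-oriented information types
```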
Step S204: generating prompt information of the target object based on the relative position information and the object association information.
In the present application, the prompt information needs to be displayed (e.g., superimposed) in the environment image so as to synthesize the target image.
Therefore, when displayed, the prompt information must follow preset prompt display rules, for example: it must not occlude the target object, its background should be rendered transparent, and it should be displayed on the peripheral side of the target object.
The AR display device performs information processing based on the relative position information and the object association information, thereby generating final prompt information that conforms to the prompt display specification.
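The following sketch illustrates one such display rule, placing the prompt on the peripheral side of the target's marker box so the target is never occluded; the geometry and margins are assumptions.

```python
def place_prompt(target_box, prompt_size, display=(640, 480), margin=8):
    """Return an (x, y) prompt position that does not overlap the target."""
    tx, ty, tw, th = target_box
    pw, ph = prompt_size
    # Prefer the right side of the target; fall back to the left edge.
    if tx + tw + margin + pw <= display[0]:
        return tx + tw + margin, ty
    return max(0, tx - margin - pw), ty

prompt_pos = place_prompt(target_box=(400, 200, 80, 60), prompt_size=(150, 40))
```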
Step S205: generating a target image based on the prompt information and the environment image, and outputting the target image.
Specifically, refer to step S103, which is not described herein again.
Step S206: if the danger index is greater than the index threshold, issuing a danger alert.
Specifically, the index threshold is set for dangerous objects; when the danger index is greater than the index threshold, the target object is a dangerous object. The embodiment of the present application does not limit the form of the danger alert: it may include, for example, one or more of a text alert, an audio alert, a vibration alert, an indicator light alert, a screen-off alert, forcibly exiting the current application, and the like. For example, the words "Danger ahead, please stop approaching" may be displayed in the visual display area of the AR display device while the device is controlled to vibrate, to warn the user of the danger. Optionally, the cause of the danger can also be indicated, for example, "Dangerous object (such as a wild beast) ahead, please be careful".
In a specific embodiment, after determining that the danger index is greater than the index threshold, the AR display device may start a real-time monitoring function for the target object and locate the object identified as dangerous in real time through the sensor devices included in the AR display device, that is, obtain the target position of the target object in real time, the target position including, but not limited to, direction, angle, coordinates, and relative distance, and display it in real time, for example in the viewing-angle display area of the AR display device. Further, the AR display device may obtain a danger avoiding operation for the target object, such as a defensive measure against the target object or an avoidance route away from it, and then output the target position and the danger avoiding operation, for example as a voice alert or a text alert.
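A minimal sketch of this danger flow is given below: alert when the danger index exceeds the threshold, then monitor the dangerous object's position and output an avoidance suggestion. The two callables are stand-ins for the device's positioning and planning facilities.

```python
import time

INDEX_THRESHOLD = 0.7  # hypothetical threshold for dangerous objects

def monitor_dangerous_object(get_target_position, get_avoidance,
                             danger_index, cycles=3):
    if danger_index <= INDEX_THRESHOLD:
        return
    print("Danger ahead, please stop approaching")   # text/vibration alert
    for _ in range(cycles):
        position = get_target_position()             # real-time positioning
        avoidance = get_avoidance(position)          # e.g., an avoidance route
        print(f"target at {position}, suggested action: {avoidance}")
        time.sleep(1.0)

monitor_dangerous_object(
    get_target_position=lambda: ("north-east", 12.5),
    get_avoidance=lambda pos: "retreat south-west",
    danger_index=0.85,
)
```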
In the embodiments of the application, the AR display device may acquire a current environment image in a search mode for a target object, perform object recognition on the environment image to determine that the target object is recognized, determine prompt information for the target object, generate a target image based on the prompt information and the environment image, and output the target image. In this way, relevant prompt information for the target object can be acquired and presented while the object is being searched for, so the user does not have to rely on inefficient visual inspection, which improves the efficiency of object searching and makes it easier to find the corresponding object. After the target object is found, its information (such as attributes and position) can be acquired and prompted to the user, improving the convenience and intelligence of the search process.
The following are embodiments of the apparatus of the present application that may be used to perform embodiments of the method of the present application. For details which are not disclosed in the embodiments of the apparatus of the present application, reference is made to the embodiments of the method of the present application.
Please refer to fig. 3, which shows a schematic structural diagram of an information prompting device according to an exemplary embodiment of the present application. The information prompting device may be implemented as all or part of a device, in software, hardware, or a combination of both. The apparatus 1 comprises an image acquisition module 11, an object recognition module 12 and an image output module 13.
The image acquisition module 11 is configured to acquire a current environment image in a search mode for a target object;
the object recognition module 12 is configured to perform object recognition on the environment image, and determine that the target object is recognized;
an image output module 13, configured to determine prompt information for the target object, generate a target image based on the prompt information and the environment image, and output the target image.
Optionally, as shown in fig. 7, the apparatus 1 includes:
a characteristic obtaining module 14, configured to receive at least one input reference object, and obtain object characteristic information of the reference object;
and a mode opening module 15, configured to receive, among the at least one reference object, a selection operation on a target object, and turn on a search mode for the target object.
Optionally, as shown in fig. 4, the image capturing module 11 includes:
a feature acquisition unit 111 configured to acquire an image feature of the target object;
and an image acquisition unit 112, configured to perform image scanning of the current environment and acquire the environment image currently captured for the target object.
Optionally, as shown in fig. 5, the image capturing unit 112 includes:
a reference image collecting subunit 1121, configured to collect a reference image in the current environment based on the image feature corresponding to the target object;
an environment image acquiring subunit 1122, configured to determine predicted pose information of the target object based on the reference image, adjust current image scanning parameters based on the predicted pose information, and acquire an environment image of the target object.
Optionally, as shown in fig. 6, the object recognition module 12 includes:
a recognition result output unit 121 configured to input the environment image into an object recognition model, and output an object recognition result, which includes at least one of an object orientation, an object identifier, and an object similarity;
a target object determination unit 122, configured to determine that the target object is recognized based on the object recognition result.
Optionally, the object identification module 12 is specifically configured to:
measuring relative position information with respect to the target object, and acquiring object association information for the target object;
and generating prompt information of the target object based on the relative position information and the object association information.
Optionally, the object identification module 12 is specifically configured to:
determining a danger index of the target object, and determining object association information for the target object based on the danger index, where the object association information comprises at least one of attribute feature information, description feature information, odor feature information, and operation feature information.
Optionally, the apparatus 1 is specifically configured to:
and if the danger index is greater than the index threshold, issuing a danger alert.
Optionally, the apparatus 1 is specifically configured to:
starting a real-time monitoring function for the target object, acquiring a target position of the target object, and acquiring a danger avoiding operation for the target object;
outputting the target position and the danger avoiding operation.
It should be noted that when the information prompting apparatus provided in the foregoing embodiment executes the information prompting method, the division into the above functional modules is merely illustrative; in practical applications, the functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or some of the functions described above. In addition, the information prompting apparatus and the information prompting method provided by the above embodiments belong to the same concept; the detailed implementation process is described in the method embodiments and is not repeated here.
The above-mentioned serial numbers of the embodiments of the present application are merely for description and do not represent the merits of the embodiments.
In the embodiments of the application, the AR display device may acquire a current environment image in a search mode for a target object, perform object recognition on the environment image to determine that the target object is recognized, determine prompt information for the target object, generate a target image based on the prompt information and the environment image, and output the target image. In this way, relevant prompt information for the target object can be acquired and presented while the object is being searched for, so the user does not have to rely on inefficient visual inspection, which improves the efficiency of object searching and makes it easier to find the corresponding object. After the target object is found, its information (such as attributes and position) can be acquired and prompted to the user, improving the convenience and intelligence of the search process.
An embodiment of the present application further provides a computer storage medium, where the computer storage medium may store a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the information prompting method according to the embodiment shown in fig. 1 to fig. 2, and a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to fig. 2, which is not described herein again.
The present application further provides a computer program product, where at least one instruction is stored in the computer program product, and the at least one instruction is loaded by the processor and executes the information prompting method according to the embodiment shown in fig. 1 to fig. 2, where a specific execution process may refer to specific descriptions of the embodiment shown in fig. 1 to fig. 2, and is not described herein again.
Referring to fig. 8, a block diagram of an electronic device according to an exemplary embodiment of the present application is shown. The electronic device in the present application may comprise one or more of the following components: a processor 110, a memory 120, an input device 130, an output device 140, and a bus 150. The processor 110, memory 120, input device 130, and output device 140 may be connected by a bus 150.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). Optionally, the memory 120 includes a non-transitory computer-readable medium. The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the method embodiments described below, and the like; the operating system may be an Android system (including systems deeply customized on the basis of Android), an iOS system developed by Apple (including systems deeply customized on the basis of iOS), or another system. The data storage area may also store data created by the electronic device during use, such as phone books, audio and video data, and chat log data.
Referring to fig. 9, the memory 120 may be divided into an operating system space, in which an operating system runs, and a user space, in which native and third-party applications run. In order to ensure that different third-party application programs can achieve a better operation effect, the operating system allocates corresponding system resources for the different third-party application programs. However, the requirements of different application scenarios in the same third-party application program on system resources are different, for example, in a local resource loading scenario, the third-party application program has a higher requirement on the disk reading speed; in the animation rendering scene, the third-party application program has a high requirement on the performance of the GPU. The operating system and the third-party application program are independent from each other, and the operating system cannot sense the current application scene of the third-party application program in time, so that the operating system cannot perform targeted system resource adaptation according to the specific application scene of the third-party application program.
In order to enable the operating system to distinguish a specific application scenario of the third-party application program, data communication between the third-party application program and the operating system needs to be opened, so that the operating system can acquire current scenario information of the third-party application program at any time, and further perform targeted system resource adaptation based on the current scenario.
Taking the Android system as an example, the programs and data stored in the memory 120 are as shown in fig. 10: a Linux kernel layer 320, a system runtime library layer 340, an application framework layer 360, and an application layer 380 may be stored in the memory 120, where the Linux kernel layer 320, the system runtime library layer 340, and the application framework layer 360 belong to the operating system space, and the application layer 380 belongs to the user space. The Linux kernel layer 320 provides underlying drivers for the various hardware of the electronic device, such as a display driver, an audio driver, a camera driver, a Bluetooth driver, a Wi-Fi driver, power management, and the like. The system runtime library layer 340 provides the main feature support for the Android system through C/C++ libraries; for example, the SQLite library provides database support, the OpenGL/ES library provides 3D drawing support, and the Webkit library provides browser kernel support. The system runtime library layer 340 also provides the Android runtime, which mainly supplies core libraries allowing developers to write Android applications in the Java language. The application framework layer 360 provides various APIs that may be used when building applications, such as activity management, window management, view management, notification management, content providers, package management, session management, resource management, and location management; developers can build their own applications using these APIs. At least one application program runs in the application layer 380; these may be native applications carried by the operating system, such as a contacts program, a short message program, a clock program, or a camera application, or third-party applications developed by third-party developers, such as a game application, an instant messaging program, a photo beautification program, or an information prompting program.
Taking the IOS system as an example, the programs and data stored in the memory 120 are shown in fig. 11. The IOS system includes: a core operating system layer 420 (Core OS Layer), a core services layer 440 (Core Services Layer), a media layer 460 (Media Layer), and a touchable layer 480 (Cocoa Touch Layer). The core operating system layer 420 includes the operating system kernel, drivers, and underlying program frameworks that provide functionality closer to the hardware for use by the program frameworks in the core services layer 440. The core services layer 440 provides the system services and/or program frameworks required by applications, such as a Foundation framework, an account framework, an advertisement framework, a data storage framework, a network connection framework, a geographic location framework, a motion framework, and so forth. The media layer 460 provides audiovisual interfaces for applications, such as graphics and image interfaces, audio technology interfaces, video technology interfaces, and the wireless playback (AirPlay) interface of the audio/video transmission technology. The touchable layer 480 provides various common interface-related frameworks for application development and is responsible for the user's touch interaction operations on the electronic device, for example a local notification service, a remote push service, an advertising framework, a game tool framework, a messaging User Interface (UI) framework, the UIKit user interface framework, a map framework, and so forth.
Among the frameworks illustrated in fig. 11, the frameworks relevant to most applications include, but are not limited to: the Foundation framework in the core services layer 440 and the UIKit framework in the touchable layer 480. The Foundation framework provides many basic object classes and data types and supplies the most basic system services for all applications, independent of the UI. The classes provided by the UIKit framework form the basic UI class library for creating touch-based user interfaces; iOS applications can provide their UIs based on the UIKit framework, which therefore supplies the application's infrastructure for building user interfaces, drawing, handling user-interaction events, responding to gestures, and the like.
For the manner and principle of implementing data communication between a third-party application and the operating system in the IOS system, reference may be made to the Android system; details are not repeated herein.
In the embodiment of the present application, the execution subject of each step may be the apparatus described above. Alternatively, the execution subject of each step may be the operating system of the apparatus. The operating system may be an Android system, an IOS system, or another operating system, which is not limited in the embodiment of the present application.
The input device 130 is used for receiving input instructions or data, and includes, but is not limited to, a keyboard, a mouse, a camera, a microphone, or a touch device. The output device 140 is used for outputting instructions or data, and includes, but is not limited to, a display device, a speaker, and the like. In one example, the input device 130 and the output device 140 may be combined into a touch display screen, which receives the user's touch operations on or near it made with any suitable object such as a finger or a stylus, and displays the user interfaces of the various applications. The touch display screen is typically provided on the front panel of the electronic device, and may be designed as a full screen, a curved screen, or an irregularly-shaped screen, or as a combination of a full screen and a curved screen, or of an irregularly-shaped screen and a curved screen, which is not limited in the embodiment of the present application.
In addition, those skilled in the art will appreciate that the configurations of the electronic devices illustrated in the above figures do not constitute limitations on the electronic devices: an electronic device may include more or fewer components than illustrated, may combine some components, or may arrange the components differently. For example, the electronic device may further include a radio frequency circuit, an input unit, a sensor, an audio circuit, a wireless fidelity (WiFi) module, a power supply, a Bluetooth module, and other components, which are not described herein again.
In the embodiment of the present application, the execution subject of each step may be the electronic device described above. Optionally, the execution subject of each step is the operating system of the electronic device. The operating system may be an Android system, an IOS system, or another operating system, which is not limited in the embodiment of the present application.
The electronic device of the embodiment of the application may also be provided with a display device, which may be any device capable of realizing a display function, for example: a cathode ray tube (CRT) display, a light-emitting diode (LED) display, an electronic ink panel, a liquid crystal display (LCD), a plasma display panel (PDP), and the like. A user may utilize the display device on the electronic device 101 to view displayed text, images, video, and other information. The electronic device may be a smartphone, a tablet computer, a gaming device, an AR (Augmented Reality) device, an automobile, a data storage device, an audio playback device, a video playback device, a notebook, a desktop computing device, or a wearable device such as an electronic watch, electronic glasses, an electronic helmet, an electronic bracelet, an electronic necklace, or electronic clothing.
In the electronic device shown in fig. 8, which may be a terminal, the processor 110 may be configured to call the information prompting application stored in the memory 120 and specifically perform the following operations:
acquiring a current environment image in a search mode for a target object;
performing object recognition on the environment image, and determining that the target object is recognized;
determining prompt information for the target object, generating a target image based on the prompt information and the environment image, and outputting the target image.
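As a rough illustration of these three operations, the following Kotlin sketch chains them together; every type and function name is an assumption made for illustration, not an interface defined by this application.

```kotlin
// Minimal sketch of the three operations above; all names are hypothetical.
data class EnvironmentImage(val pixels: ByteArray)
data class TargetImage(val base: EnvironmentImage, val promptText: String)

fun runSearchMode(
    acquireImage: () -> EnvironmentImage,            // step 1: acquire environment image
    recognizeTarget: (EnvironmentImage) -> Boolean,  // step 2: object recognition
    determinePrompt: () -> String                    // step 3: determine prompt information
): TargetImage? {
    val env = acquireImage()
    if (!recognizeTarget(env)) return null           // target not recognized: keep scanning
    return TargetImage(env, determinePrompt())       // fuse prompt with the environment image
}
```

Passing the three steps in as functions keeps the flow testable without committing to any particular camera or recognition-model API.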
In one embodiment, before acquiring the current environment image in the search mode for the target object, the processor 110 further performs the following operations:
receiving at least one input reference object, and acquiring object characteristic information of the reference object;
receiving a selection operation on a target object among the at least one reference object, and starting the search mode for the target object.
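A minimal Kotlin sketch of this reference-object registration and selection flow is given below, assuming a simple in-memory registry; all names are hypothetical.

```kotlin
// Hypothetical sketch of reference-object registration and selection.
data class ReferenceObject(val id: String, val featureVector: FloatArray)

class SearchModeController {
    private val references = mutableMapOf<String, ReferenceObject>()
    var activeTarget: ReferenceObject? = null
        private set

    // Receive an input reference object and keep its feature information.
    fun register(obj: ReferenceObject) { references[obj.id] = obj }

    // A selection operation on one registered object starts the search mode for it.
    fun select(id: String): Boolean {
        activeTarget = references[id] ?: return false
        return true
    }
}
```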
In one embodiment, when executing the acquiring of the current environment image, the processor 110 specifically performs the following operations:
acquiring image characteristics of the target object;
and performing image scanning on the current environment, and acquiring an environment image currently directed to the target object.
In one embodiment, when performing the image scanning on the current environment and acquiring the environment image currently directed to the target object, the processor 110 specifically performs the following operations:
acquiring a reference image in the current environment based on the image characteristics corresponding to the target object;
and determining predicted pose information of the target object based on the reference image, adjusting the current image scanning parameters based on the predicted pose information, and acquiring the current environment image for the target object.
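The following Kotlin sketch illustrates one way predicted pose information could drive the adjustment of scanning parameters; the field names and the zoom heuristic are illustrative assumptions, not parameters defined by this application.

```kotlin
// Hypothetical sketch: derive scanning parameters from a predicted pose.
data class PredictedPose(val yawDeg: Float, val pitchDeg: Float, val distanceM: Float)
data class ScanParams(val yawDeg: Float, val pitchDeg: Float, val zoom: Float)

fun adjustScanParams(pose: PredictedPose): ScanParams = ScanParams(
    yawDeg = pose.yawDeg,      // steer the scan toward the predicted bearing
    pitchDeg = pose.pitchDeg,
    zoom = (pose.distanceM / 2f).coerceIn(1f, 5f)  // zoom in more for distant targets
)
```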
In one embodiment, when performing the object recognition on the environment image and determining that the target object is recognized, the processor 110 specifically performs the following operations:
inputting the environment image into an object recognition model, and outputting an object recognition result, wherein the object recognition result comprises at least one of an object orientation, an object identification and an object similarity;
determining that the target object is recognized based on the object recognition result.
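As an illustration, the Kotlin sketch below assumes one plausible shape for the object recognition result and a simple decision rule; the 0.8 similarity threshold is an assumption, not a value specified by this application.

```kotlin
// Hypothetical shape of the recognition model's output and a decision rule.
data class RecognitionResult(
    val objectId: String,       // object identification
    val orientationDeg: Float,  // object orientation
    val similarity: Float       // object similarity in [0, 1]
)

// The target counts as recognized when the returned id matches and
// the similarity clears the (illustrative) threshold.
fun isTargetRecognized(
    result: RecognitionResult,
    targetId: String,
    threshold: Float = 0.8f
): Boolean = result.objectId == targetId && result.similarity >= threshold
```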
In an embodiment, when determining the prompt information for the target object, the processor 110 specifically performs the following operations:
measuring relative position information between the electronic device and the target object, and acquiring object association information for the target object;
and generating the prompt information for the target object based on the relative position information and the object association information.
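A minimal Kotlin sketch of fusing the relative position information with the object association information into prompt text, under assumed field names:

```kotlin
// Hypothetical fusion of relative position and association info into a prompt.
data class RelativePosition(val distanceM: Float, val bearingDeg: Float)

fun buildPrompt(pos: RelativePosition, association: List<String>): String {
    val base = "Target is %.1f m away, bearing %.0f deg".format(pos.distanceM, pos.bearingDeg)
    return if (association.isEmpty()) base
    else "$base (" + association.joinToString("; ") + ")"
}
```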
In an embodiment, when acquiring the object association information for the target object, the processor 110 specifically performs the following operations:
determining a danger index of the target object, and determining object association information for the target object based on the danger index, wherein the object association information comprises at least one of attribute feature information, description feature information, smell feature information, and operation feature information.
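The Kotlin sketch below illustrates one possible mapping from the danger index to the association information to surface; the 0.7 cut-off and the enum values are illustrative assumptions.

```kotlin
// Hypothetical mapping from danger index to association info categories.
enum class AssociationInfo { ATTRIBUTE, DESCRIPTION, SMELL, OPERATION }

fun associationForDanger(dangerIndex: Double): Set<AssociationInfo> =
    if (dangerIndex > 0.7)  // illustrative index threshold
        setOf(AssociationInfo.OPERATION, AssociationInfo.DESCRIPTION)  // lead with safe handling
    else
        setOf(AssociationInfo.ATTRIBUTE, AssociationInfo.DESCRIPTION, AssociationInfo.SMELL)
```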
In one embodiment, when executing the method, the processor 110 further performs the following operation:
if the danger index is greater than an index threshold, performing a danger reminder.
In an embodiment, when performing the danger reminder, the processor 110 specifically executes the following operations:
starting a real-time monitoring function for the target object, acquiring a target position of the target object, and acquiring a danger avoidance operation for the target object;
outputting the target position and the danger avoidance operation.
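The following Kotlin sketch outlines such a danger reminder as a monitoring loop; the callback shapes and names are assumptions made for illustration.

```kotlin
// Hypothetical danger-reminder loop: monitor the target in real time and
// output its position plus a danger-avoidance operation.
data class TargetPosition(val x: Float, val y: Float, val z: Float)

class DangerReminder(
    private val locateTarget: () -> TargetPosition,
    private val avoidanceFor: (TargetPosition) -> String,
    private val output: (TargetPosition, String) -> Unit
) {
    // Called on every monitoring tick while the danger index stays above threshold.
    fun tick() {
        val pos = locateTarget()      // acquire the target position
        val op = avoidanceFor(pos)    // acquire the danger-avoidance operation
        output(pos, op)               // output both to the user
    }
}
```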
In the embodiment of the application, the AR display device can acquire a current environment image in a search mode for a target object, perform object recognition on the environment image, and determine that the target object is recognized; the AR display device can then determine prompt information for the target object, generate a target image based on the prompt information and the environment image, and output the target image. In this way, relevant prompt information for the target object is acquired and determined while the object is being sought, so the user is prompted during the search instead of relying on inefficient visual inspection, which improves the efficiency of finding the object. Moreover, once the target object is found, its information (such as attributes and position) can be acquired and prompted to the user, thereby improving the convenience and intelligence of the search process.
It is clear to a person skilled in the art that the solution of the present application can be implemented by means of software and/or hardware. The "unit" and "module" in this specification refer to software and/or hardware that can perform a specific function independently or in cooperation with other components, where the hardware may be, for example, a Field-Programmable Gate Array (FPGA), an Integrated Circuit (IC), or the like.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative; the division of the units is only one type of logical-function division, and there may be other divisions in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some service interfaces, devices, or units, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program, which is stored in a computer-readable memory; the memory may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The above description is only an exemplary embodiment of the present disclosure, and the scope of the present disclosure should not be limited thereby. That is, all equivalent changes and modifications made in accordance with the teachings of the present disclosure are intended to be included within the scope of the present disclosure. Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
Claims (11)
1. An information prompting method, applied to an AR display device, the method comprising the following steps:
acquiring a current environment image in a search mode for a target object;
performing object recognition on the environment image, and determining that the target object is recognized;
determining prompt information for the target object, generating a target image based on the prompt information and the environment image, and outputting the target image.
2. The method of claim 1, wherein before the acquiring a current environment image in the search mode for the target object, the method further comprises:
receiving at least one input reference object, and acquiring object characteristic information of the reference object;
receiving a selection operation on a target object among the at least one reference object, and starting the search mode for the target object.
3. The method of claim 1, wherein the acquiring a current environment image comprises:
acquiring image characteristics of the target object;
and performing image scanning on the current environment, and acquiring an environment image currently directed to the target object.
4. The method of claim 3, wherein the performing image scanning on the current environment and acquiring an environment image currently directed to the target object comprises:
acquiring a reference image in the current environment based on the image characteristics corresponding to the target object;
and determining predicted pose information of the target object based on the reference image, adjusting the current image scanning parameters based on the predicted pose information, and acquiring the current environment image for the target object.
5. The method of claim 1, wherein the performing object recognition on the environment image and determining that the target object is recognized comprises:
inputting the environment image into an object recognition model, and outputting an object recognition result, wherein the object recognition result comprises at least one of an object orientation, an object identification and an object similarity;
determining that the target object is recognized based on the object recognition result.
6. The method of claim 1, wherein the determining prompt information for the target object comprises:
measuring relative position information between the AR display device and the target object, and acquiring object association information for the target object;
and generating the prompt information for the target object based on the relative position information and the object association information.
7. The method of claim 6, wherein the acquiring object association information for the target object comprises:
determining a danger index of the target object, and determining object association information for the target object based on the danger index, wherein the object association information comprises at least one of attribute feature information, description feature information, smell feature information, and operation feature information.
8. The method of claim 7, further comprising:
if the danger index is greater than an index threshold, performing a danger reminder.
9. The method of claim 8, wherein the performing a danger reminder comprises:
starting a real-time monitoring function for the target object, acquiring a target position of the target object, and acquiring a danger avoidance operation for the target object;
and outputting the target position and the danger avoidance operation.
10. A computer storage medium, characterized in that it stores a plurality of instructions adapted to be loaded by a processor and to carry out the method steps according to any one of claims 1 to 9.
11. An electronic device, comprising: a processor and a memory; wherein the memory stores a computer program adapted to be loaded by the processor and to perform the method steps of any of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011547756.XA CN112733620A (en) | 2020-12-23 | 2020-12-23 | Information prompting method and device, storage medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112733620A true CN112733620A (en) | 2021-04-30 |
Family
ID=75605157
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011547756.XA Withdrawn CN112733620A (en) | 2020-12-23 | 2020-12-23 | Information prompting method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112733620A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20120024073A (en) * | 2010-09-03 | 2012-03-14 | 주식회사 팬택 | Apparatus and method for providing augmented reality using object list |
CN103968824A (en) * | 2013-01-28 | 2014-08-06 | 华为终端有限公司 | Method for discovering augmented reality target, and terminal |
CN106228127A (en) * | 2016-07-18 | 2016-12-14 | 乐视控股(北京)有限公司 | Indoor orientation method and device |
CN108647581A (en) * | 2018-04-18 | 2018-10-12 | 深圳市商汤科技有限公司 | Information processing method, device and storage medium |
CN109324693A (en) * | 2018-12-04 | 2019-02-12 | 塔普翊海(上海)智能科技有限公司 | AR searcher, the articles search system and method based on AR searcher |
CN110191316A (en) * | 2019-05-20 | 2019-08-30 | 联想(上海)信息技术有限公司 | A kind of information processing method and device, equipment, storage medium |
CN110436082A (en) * | 2019-08-08 | 2019-11-12 | 上海萃钛智能科技有限公司 | A kind of Intelligent refuse classification identification suggestion device, system and method |
CN112052784A (en) * | 2020-09-02 | 2020-12-08 | 腾讯科技(深圳)有限公司 | Article searching method, device, equipment and computer readable storage medium |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113255644A (en) * | 2021-05-10 | 2021-08-13 | 青岛海信移动通信技术股份有限公司 | Display device and image recognition method thereof |
CN113255644B (en) * | 2021-05-10 | 2023-01-17 | 青岛海信移动通信技术股份有限公司 | Display device and image recognition method thereof |
CN113705447A (en) * | 2021-08-27 | 2021-11-26 | 深圳市商汤科技有限公司 | Picture display method and device, electronic equipment and storage medium |
CN114254286A (en) * | 2021-11-15 | 2022-03-29 | 阿里巴巴(中国)有限公司 | Data security prevention and control method and system and AR glasses |
CN115346333A (en) * | 2022-07-12 | 2022-11-15 | 北京声智科技有限公司 | Information prompting method and device, AR glasses, cloud server and storage medium |
CN115242923A (en) * | 2022-07-28 | 2022-10-25 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and readable storage medium |
CN115242923B (en) * | 2022-07-28 | 2024-07-23 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and readable storage medium |
Similar Documents
Publication | Title
---|---
CN111652678B (en) | Method, device, terminal, server and readable storage medium for displaying article information
CN112733620A (en) | Information prompting method and device, storage medium and electronic equipment
US11263824B2 (en) | Method and system to generate authoring conditions for digital content in a mixed reality environment
US20240104815A1 (en) | Augmented expression system
CN109688451B (en) | Method and system for providing camera effect
US20240153049A1 (en) | Location mapping for large scale augmented-reality
US11880923B2 (en) | Animated expressive icon
JP2013527947A (en) | Intuitive computing method and system
JP2013522938A (en) | Intuitive computing method and system
JP7421010B2 (en) | Information display method, device and storage medium
CN112070901A (en) | AR scene construction method and device for garden, storage medium and terminal
CN116841391A (en) | Digital human interaction control method, device, electronic equipment and storage medium
CN116343350A (en) | Living body detection method and device, storage medium and electronic equipment
CN111507143B (en) | Expression image effect generation method and device and electronic equipment
Verma et al. | Digital assistant with augmented reality
EP3385869B1 (en) | Method and apparatus for presenting multimedia information
CN114332351B (en) | Mouse motion reconstruction method and device based on multi-view camera
Shasha et al. | Object Recognition of Environmental Information in the Internet of Things Based on Augmented Reality
US11863863B2 (en) | System and method for frustum context aware digital asset suggestions
RU2817182C1 (en) | Information display method, device and data medium
US20240020920A1 (en) | Incremental scanning for custom landmarkers
Piechaczek et al. | Popular strategies and methods for using augmented reality
Kokorogianni et al. | AUGMENTED REALITY-ASSISTED NAVIGATION IN A UNIVERSITY CAMPUS
Wangi et al. | Augmented Reality Technology for Increasing the Understanding of Traffic Signs
CN115798057A (en) | Image processing method and device, storage medium and electronic equipment
Legal Events
Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WW01 | Invention patent application withdrawn after publication | Application publication date: 20210430