CN110458052B - Target object identification method, device, equipment and medium based on augmented reality - Google Patents

Info

Publication number
CN110458052B
CN110458052B (granted publication of application CN201910678874.5A; earlier publication CN110458052A)
Authority
CN
China
Prior art keywords
target object
type
augmented reality
feature
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910678874.5A
Other languages
Chinese (zh)
Other versions
CN110458052A (en)
Inventor
刘幕俊
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910678874.5A priority Critical patent/CN110458052B/en
Publication of CN110458052A publication Critical patent/CN110458052A/en
Application granted granted Critical
Publication of CN110458052B publication Critical patent/CN110458052B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/22: Matching criteria, e.g. proximity measures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/46: Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462: Salient features, e.g. scale invariant feature transforms [SIFT]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/20: Scenes; Scene-specific elements in augmented reality scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to an augmented reality-based target object identification method and apparatus, an electronic device, and a storage medium, in the technical field of augmented reality. The method comprises the following steps: acquiring first-type features of a plurality of objects within a field of view through an augmented reality device, and determining candidate objects according to the first-type features; comparing second-type features of a candidate object with second-type features of the target object to obtain a comparison result; and if the candidate object is determined to be the target object according to the comparison result, sending information about the target object to a terminal and executing a shortcut operation. The method and apparatus can accurately identify the target object, improve identification efficiency and accuracy, and make operation more convenient.

Description

Target object identification method, device, equipment and medium based on augmented reality
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a method for identifying an augmented reality target object, an apparatus for identifying an augmented reality target object, an electronic device, and a computer-readable storage medium.
Background
Members of the public can assist the police in finding particular categories of people. In the related art, a person assisting such a search generally compares a released photograph and other identity information with the people they encounter to determine whether someone is the person being sought. This approach may miss identifications or produce inaccurate ones, because memory is imperfect or because the sought person is disguised, and identification by manual means is inefficient. In addition, if such a person is found, raising an alarm or notifying someone requires a manual phone call, which is time-consuming, inconvenient, and cannot protect the safety of the person reporting.
It is noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure and therefore may include information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
The present disclosure is directed to an augmented reality-based target object identification method and apparatus, an electronic device, and a storage medium, which overcome, at least to some extent, the problem of inaccurate identification results caused by the limitations and drawbacks of the related art.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, there is provided an augmented reality-based target object recognition method, including: acquiring first type features of a plurality of objects in a visual field range through augmented reality equipment, and determining an object to be selected according to the first type features; comparing the second type characteristic of the object to be selected with the second type characteristic of the target object to obtain a comparison result; and if the object to be selected is determined to be the target object according to the comparison result, sending the information of the target object to a terminal and executing shortcut operation.
In an exemplary embodiment of the present disclosure, acquiring, by an augmented reality device, first type features of a plurality of objects in a field of view, and determining a candidate object according to the first type features includes: acquiring, by a camera in the augmented reality device, images of the plurality of objects within the field of view of the augmented reality device; extracting the first type of feature from the image, the first type of feature comprising a facial feature; matching the first type features of the plurality of objects with the first type features of the target object to determine a matching result; and taking the object corresponding to the first type feature with the matching result of successful matching as the object to be selected.
In an exemplary embodiment of the present disclosure, matching the first type features of the plurality of objects with the first type features of the target object to determine a matching result includes: calculating a similarity between the first type features of the plurality of objects and the first type features of the target object; and if the similarity meets a threshold condition, determining that the matching result is successful.
In an exemplary embodiment of the present disclosure, when the candidate object is determined, the method further includes: locking the candidate object, and providing prompt information indicating that the candidate object has been found.
In an exemplary embodiment of the present disclosure, determining that the object to be selected is the target object according to the comparison result includes: if the second type feature of the object to be selected is consistent with the second type feature of the target object, determining that the object to be selected is the target object; and if the second type characteristics of the object to be selected are not consistent with the second type characteristics of the target object, determining that the object to be selected does not belong to the target object.
In an exemplary embodiment of the present disclosure, transmitting the information of the target object to a terminal includes: and sending the position information of the target object and the information of the visual field range corresponding to the target object to the terminal.
In an exemplary embodiment of the present disclosure, performing the shortcut operation includes: and responding to the trigger operation of the augmented reality equipment, and executing the shortcut operation, wherein the shortcut operation comprises a shortcut alarm operation and/or a shortcut notification operation.
According to an aspect of the present disclosure, there is provided an augmented reality-based target object recognition apparatus including: the device comprises a to-be-selected object determining module, a selecting module and a selecting module, wherein the to-be-selected object determining module is used for acquiring first type characteristics of a plurality of objects in a visual field range through augmented reality equipment and determining the to-be-selected objects according to the first type characteristics; the characteristic comparison module is used for comparing the second type characteristic of the object to be selected with the second type characteristic of the target object to obtain a comparison result; and the target determining module is used for sending the information of the target object to a terminal and executing shortcut operation if the object to be selected is determined to be the target object according to the comparison result.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any one of the above augmented reality based target object recognition methods via execution of the executable instructions.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the augmented reality based target object recognition method of any one of the above.
In the augmented reality-based target object identification method and apparatus, electronic device, and computer-readable storage medium provided in the exemplary embodiments of the present disclosure, candidate objects are determined from the first-type features acquired by the augmented reality device, and the target object is then determined among the candidates according to the second-type features. First, because candidates can be selected from the first-type features of the objects within the device's field of view and the target object can then be confirmed among them using the second-type features, screening on both feature types together determines whether a given candidate is the target object. Second, the augmented reality device judges automatically from the first-type and second-type features, without any manual operation, which improves identification efficiency. Third, information about the target object can be sent to a terminal for a shortcut operation, which improves operational convenience and helps ensure the user's safety.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It should be apparent that the drawings in the following description are merely examples of the disclosure and that other drawings may be derived by those of ordinary skill in the art without inventive effort.
Fig. 1 is a system architecture diagram illustrating an augmented reality-based target object recognition method to which the present exemplary embodiment is applied.
Fig. 2 schematically illustrates a schematic diagram of a target object identification method based on augmented reality in an exemplary embodiment of the present disclosure.
Fig. 3 schematically illustrates a schematic diagram of determining a candidate object in an exemplary embodiment of the present disclosure.
Fig. 4 schematically illustrates a diagram of determining a matching result in an exemplary embodiment of the present disclosure.
Fig. 5 schematically illustrates a block diagram of an augmented reality based target object recognition apparatus in an exemplary embodiment of the present disclosure.
Fig. 6 schematically illustrates a block diagram of an electronic device in an exemplary embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 is a schematic diagram illustrating a system architecture of an exemplary application environment to which the augmented reality-based target object recognition method and apparatus according to the present exemplary embodiment may be applied.
As shown in fig. 1, the system architecture 100 may include one or more devices 101, 102, 103, a network 104, a server 105, and a terminal 106. The network 104 is the medium used to provide communication links between the devices 101, 102, 103, the server 105, and the terminal 106, and may include various connection types, such as wired or wireless communication links or fiber optic cables. The devices 101, 102, 103 may be various augmented reality devices (e.g., augmented reality glasses, augmented reality helmets, etc.) or terminal devices capable of acquiring and comparing features. It should be understood that the numbers of devices, networks, and servers in fig. 1 are merely illustrative; there may be any number of each, as required by an implementation. For example, the server 105 may be a cluster of multiple servers. The terminal 106 may be any terminal for receiving information, such as a computer or a mobile phone.
In the present exemplary embodiment, the devices 101, 102, and 103 extract features of a plurality of objects within the field of view and compare them with features of target objects stored in a database to identify the target object. After receiving information about the target object from the devices 101, 102, 103, the server 105 may forward it to the terminal 106, where subsequent processing, such as raising an alarm, may be performed according to that information. The terminal 106 receives the information about the target object and performs the subsequent processing.
The target object identification method provided in the present exemplary embodiment is generally executed by the devices 101, 102, and 103, but it is easily understood by those skilled in the art that the target object identification method provided in the present exemplary embodiment may also be executed by a server, and this is not particularly limited herein.
The present exemplary embodiment first provides an augmented reality-based target object identification method, which may be applied to scenarios such as identifying and searching for suspects, missing persons, and other specific persons. Referring to fig. 2, the method includes steps S210, S220, and S230, wherein:
in step S210, first type features of a plurality of objects in a field of view are acquired by an augmented reality device, and an object to be selected is determined according to the first type features;
in step S220, comparing the second type feature of the object to be selected with the second type feature of the target object to obtain a comparison result;
in step S230, if it is determined that the object to be selected is the target object according to the comparison result, the information of the target object is sent to a terminal, and a shortcut operation is executed.
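The three steps above can be sketched end to end as follows. This is a minimal illustration only: the function and key names, the choice of cosine similarity, and the 0.8 threshold are assumptions for demonstration, not mandated by the disclosure.

```python
import math

def cosine_sim(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify_target(objects, target, threshold=0.8):
    """Sketch of S210-S230: return the matched object, or None."""
    # S210: coarse screening on first-type (e.g. facial) features
    candidates = [o for o in objects
                  if cosine_sim(o["first"], target["first"]) >= threshold]
    for cand in candidates:
        # S220: compare second-type (e.g. identity) features for consistency
        if cand["second"] == target["second"]:
            # S230: here the device would send the target information to the
            # terminal and execute the shortcut (alarm/notification) operation
            return cand
    return None
```

In this sketch, an object must pass both screens: a high first-type similarity makes it a candidate, and full second-type consistency confirms it as the target.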
In the augmented reality-based target object identification method provided in the exemplary embodiment of the present disclosure, candidates are first determined according to the first-type features of the objects within the device's field of view, and the target object is then determined among them according to the second-type features. First, because both feature types are used together to screen each candidate, and because the first-type and second-type features can accurately and comprehensively reflect an object's characteristics, the two layers of screening avoid the limitations of rough manual identification from a photograph and improve the comprehensiveness and accuracy of identification. Second, the augmented reality device judges automatically from the two feature types, without any manual operation, which improves identification efficiency. Third, information about the target object can be sent to a terminal for a shortcut operation, which improves convenience and helps ensure the user's safety.
Next, a target object identification method in the present exemplary embodiment is specifically described with reference to the drawings.
In step S210, first type features of a plurality of objects in a field of view are acquired by an augmented reality device, and an object to be selected is determined according to the first type features.
In this exemplary embodiment, the augmented reality device may be augmented reality glasses, an augmented reality helmet, or the like, or a terminal running an augmented reality application; augmented reality glasses are used as the example in this description. The glasses may be monocular or binocular AR (Augmented Reality) smart glasses, and may include, but are not limited to, a head-mounted display, a built-in camera, sensors, a controller, a touch sensor, and keys. The user wears the device over the eyes and views the surrounding environment through it. The field of view of the augmented reality device is the range the user's eyes can see, and it changes as the user's position changes. The field of view may be expressed as the device's angular ranges in the horizontal and vertical directions. Specifically: obtain the azimuth angle and pitch angle of the augmented reality device; determine its horizontal angular range from the azimuth angle; determine its vertical angular range from the pitch angle; and take the two angular ranges together as the device's field of view.
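The field-of-view computation described above can be sketched as follows. The aperture widths (90° horizontal, 60° vertical) are assumed values for illustration; the text only states that the ranges are derived from the azimuth and pitch angles.

```python
def field_of_view(azimuth_deg, pitch_deg, h_aperture=90.0, v_aperture=60.0):
    """Return the horizontal and vertical angle ranges as (min, max) pairs.

    h_aperture / v_aperture are illustrative device aperture widths;
    a real device would report its own values.
    """
    horizontal = (azimuth_deg - h_aperture / 2, azimuth_deg + h_aperture / 2)
    vertical = (pitch_deg - v_aperture / 2, pitch_deg + v_aperture / 2)
    return horizontal, vertical
```

For example, with the device facing azimuth 0° at a pitch of 10°, the sketch yields a horizontal range of (-45°, 45°) and a vertical range of (-20°, 40°).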
The plurality of objects may be all objects, e.g., all people, within the field of view of the augmented reality device. They may also be restricted to people in the same category as the target object (for example, when the target object is a child, the plurality of objects may be the children within the field of view), so as to reduce the amount of computation. The first-type feature may be, for example, facial features or eye features; facial features are used as the example here. After the first-type features are obtained, they may be compared one by one with the target objects in the database to accurately determine the candidate objects. A candidate object may be one of the objects in the field of view, or several objects associated with or similar to the target object.
It should be noted that a database entry for the target object may be created either by the police or by the AR glasses device. The police, or the target's family members, then upload the target object's feature data so that the augmented reality device can determine candidate objects. The feature data here includes, but is not limited to, the first-type features and the second-type features.
Fig. 3 schematically shows a flowchart for determining a candidate object, and referring to fig. 3, the method mainly includes steps S310 to S340, and steps S310 to S340 are specific implementation manners of step S210, where:
in step S310, images of the plurality of objects within the field of view of the augmented reality device are captured by a camera in the augmented reality device.
In this exemplary embodiment, a camera may be disposed in the augmented reality device to acquire images of multiple objects within its field of view. The capture may be a single image or a video, as long as the objects are included. The device may capture images automatically, improving timeliness, or capture them only when a photographing trigger operation is received, reducing power consumption; this is not particularly limited here. The trigger operation may be, for example, a sound or a gesture.
In step S320, the first type features are extracted from the image, the first type features including facial features.
In the present exemplary embodiment, after the images are acquired, their features may be extracted. Specifically, the images of the multiple objects may be passed through any suitable machine learning or deep learning model to obtain the first-type features of each image. Such models include, but are not limited to, convolutional neural networks (CNNs), VGG networks, and the like. Image features can be represented as feature vectors, so the model's output is the feature vector of the image. For example, an image 1 of object 1 may be input into a trained model, which outputs its feature vector as vector a. A well-trained model yields more accurate feature vectors and hence more accurate recognition results. It should be added that the facial features of the target object in the database may be extracted in the same way; for example, the feature vector of target object 0's facial features may be vector M. Both should be extracted with the same algorithm or model to ensure comparability.
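The encoder step can be illustrated with a stub. `FaceEncoder` is a hypothetical name standing in for any trained CNN/VGG-style model; the deterministic vector it derives from the raw bytes is purely for demonstration and has no recognition value.

```python
class FaceEncoder:
    """Stand-in for a trained CNN/VGG-style face model (name is hypothetical).

    A real encoder maps image pixels to an embedding vector; this stub
    derives a deterministic 2-D vector from the raw bytes for illustration.
    """

    def encode(self, image_bytes):
        h = sum(image_bytes) % 997
        return (h / 997.0, 1.0 - h / 997.0)

encoder = FaceEncoder()
vector_a = encoder.encode(b"pixels-of-object-1")   # "vector a" for image 1
vector_m = encoder.encode(b"pixels-of-target-0")   # "vector M" for target object 0
```

The key property the text relies on is that both the captured images and the database entry are encoded by the same model, so their vectors live in the same space and can be compared.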
In step S330, the first type features of the plurality of objects are matched with the first type features of the target object to determine a matching result.
In the present exemplary embodiment, the matching result may include both the matching success and the matching failure. Referring to fig. 4, a specific implementation manner of step S330 may include step S410 and step S420, where:
in step S410, obtaining similarities between the first type features of the plurality of objects and the first type features of the target object;
in step S420, if the similarity satisfies a threshold condition, it is determined that the matching result is a successful matching.
In this exemplary embodiment, a similarity between the feature vector corresponding to each acquired image and the feature vector corresponding to the target object in the database may be calculated. Specifically, the feature distance between the feature vector of the facial feature corresponding to each image and the feature vector representing the facial feature of the target object may be calculated according to a distance calculation formula, and the similarity between the facial feature of each image and the facial feature of the target object may be determined according to the feature distance. The characteristic distance may include, but is not limited to, a euclidean distance, a cosine distance, a mahalanobis distance, etc., and the euclidean distance is used as an example for illustration. Specifically, the similarity is inversely related to the euclidean distance, i.e., the greater the euclidean distance, the smaller the similarity; the smaller the euclidean distance, the greater the similarity. In order to realize accurate screening of the plurality of objects, a threshold condition for indicating whether the plurality of objects are similar to the target object may be provided in advance, and the threshold condition may be that the similarity is greater than or equal to a similarity threshold set in advance. For improved accuracy, the similarity threshold may be a larger value, such as 0.8 or 0.9, etc. When the similarity between the feature vector of the image of one of the objects and the feature vector of the target object is greater than or equal to 0.8, the matching of the two can be considered successful. When the similarity between the feature vector of the image and the feature vector of the target object is less than 0.8, it can be considered that the matching between the two fails. 
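The inverse relation between Euclidean distance and similarity, and the threshold condition, can be made concrete as follows. The 1/(1+d) mapping is one common choice and is an assumption here, since the text only states that similarity decreases as distance grows; the 0.8 threshold follows the example above.

```python
import math

def euclidean_distance(a, b):
    # Feature distance between two equal-length feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def similarity(a, b):
    # 1 / (1 + d): equals 1.0 at distance 0 and decreases monotonically
    # with distance, matching the inverse relation described in the text
    # (the exact mapping is an assumption).
    return 1.0 / (1.0 + euclidean_distance(a, b))

def is_match(a, b, threshold=0.8):
    # Threshold condition: similarity >= a preset threshold.
    return similarity(a, b) >= threshold
```

With this mapping, identical vectors have similarity 1.0, a distance of 0.1 gives about 0.91 (a successful match at the 0.8 threshold), and a distance of 1.0 gives 0.5 (a failed match).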
For example, if the similarity between the feature vector a of the object 1 and the feature vector M of the facial feature of the target object 0 is 0.9, the object 1 is considered to be similar to the target object 0. By this method, it is possible to determine whether or not a plurality of objects are similar to the target object. If a certain object is similar to the target object, the matching result is successful; and if the certain object is not similar to the target object, the matching result is matching failure.
In step S340, the object corresponding to the first type feature whose matching result is successful is taken as the object to be selected.
In this exemplary embodiment, on the basis of step S330, the feature vectors similar to the feature vector of the target object's facial features may be screened out, the corresponding images and their objects obtained from those vectors, and those objects taken as the coarsely screened candidate objects. For example, if the feature vector of object 1 is similar to that of target object 0, the two are considered successfully matched, and object 1 may be taken as a candidate object.
According to the technical scheme in fig. 3, a plurality of objects in the field of view of the augmented reality device can be screened according to the comparison of the first type of features, and an object which is similar to the target object is obtained and used as a candidate object, so that selection can be continued from the candidate object, and the accuracy of object identification and screening is improved.
It is to be added that, after a candidate object is determined, the candidate may be locked and prompt information provided to indicate that it has been acquired, which improves focus when identifying and searching for the target object. The locking operation may be a marking operation, for example marking with a special symbol. Locking narrows the scope of the subsequent search, reduces interference and computation, improves convenience, and makes the result more accurate. The prompt may be, for example, a text or sound message; text may be displayed in the user's real scene through the augmented reality device.
With reference to fig. 2, in step S220, the second type features of the candidate object are compared with the second type features of the target object to obtain a comparison result.
In the exemplary embodiment, the second-type features are information of types other than the first-type features, and may include, but are not limited to, photographs, physical features, history records, identity information, age, and other detailed information of the candidate object and the target object. The target object includes, but is not limited to, a suspect, a missing person, or another person who needs to be found.
Specifically, the second type feature of each candidate object may be compared one by one with the second type feature of the target object in the database to determine whether the two are consistent. Whether the photos and physical features of the two are consistent can be judged by methods such as a machine learning model. Meanwhile, the detailed information of a candidate object can be retrieved through its certificate number to judge whether it is the same as the detailed information of the target object.
As shown in fig. 2, in step S230, if it is determined that the object to be selected is the target object according to the comparison result, the information of the target object is sent to the terminal, and a shortcut operation is executed.
In this exemplary embodiment, building on step S220, if all of the information in the second type feature of the candidate object is consistent with the corresponding information in the second type feature of the target object, the candidate object is determined to be the target object. If at least one piece of information in the second type feature of the candidate object is inconsistent with that of the target object, the candidate object is determined not to be the target object. For example, if the facial features of object 1 and target object 0 are similar, object 1 can be used as a candidate object; if object 1 is also consistent with the second type feature of target object 0, then object 1 is the target object 0 being sought. If the facial features of object 2 and target object 0 are similar, object 2 can be used as a candidate object; if object 2 is inconsistent with the second type feature of target object 0, then object 2 is not the target object 0 being sought. It should be noted that only one object ultimately matches a given target object. On this basis, an accurate recognition result can be obtained.
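The all-or-nothing decision rule above can be sketched as follows. The field names are hypothetical stand-ins for the second-type features (photo, physique, identity details, age) and are not taken from the patent:

```python
def mismatched_fields(candidate, target,
                      fields=("photo_id", "physique", "identity_no", "age")):
    # Compare the second-type features one by one and collect the
    # fields on which the candidate disagrees with the target.
    return [f for f in fields if candidate.get(f) != target.get(f)]

def is_target(candidate, target):
    # The candidate is the target only if every second-type field is
    # consistent; a single mismatch rules it out.
    return not mismatched_fields(candidate, target)
```

A candidate whose fields all match is accepted; one differing field, such as age, is enough for rejection.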
On this basis, after the target object to be found is determined from the plurality of objects within the field of view of the augmented reality device according to the first type feature and the second type feature, the information of the target object can be sent to the terminal. The terminal here may be, for example, the terminal that uploaded the information of the target object, such as a police terminal or a family member's terminal.
Wherein sending the information of the target object to a terminal includes: sending the position information of the target object and information about the field of view corresponding to the target object to the terminal. Specifically, after the target object is determined, the augmented reality device may locate the target object in real time through a GPS (Global Positioning System) module provided on the device to obtain the position information of the target object. Further, the position information of the target object and information about the field of view in which the target object is located may be sent to the terminal. The information within the field of view may be, for example, details of the surrounding environment. Sending the position information and the field-of-view information to the terminal can assist the police in handling the case.
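The message sent to the terminal might be assembled as below. The JSON layout and field names are assumptions for illustration; the patent only requires that the position and field-of-view information be included:

```python
import json

def build_report(target_id, gps_fix, field_of_view):
    # Package the confirmed target's GPS position together with
    # details of the surrounding field of view for the terminal
    # (e.g. a police or family terminal).
    lat, lon = gps_fix
    return json.dumps({
        "target_id": target_id,
        "position": {"lat": lat, "lon": lon},
        "field_of_view": field_of_view,
    })
```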
In the context of the present exemplary embodiment, the shortcut operations may include, but are not limited to, a shortcut alarm operation and/or a shortcut notification operation. The shortcut alarm operation is a one-key alarm operation, such as sending alarm information to the police or placing a call. The shortcut notification operation sends notification information to a terminal of the target object's family. When the target object is a suspect, the shortcut alarm operation is triggered. When the target object is a missing person, the shortcut alarm operation and/or the shortcut notification operation is triggered. Specifically, the shortcut operation may be performed in response to a trigger operation on the augmented reality device. The trigger operation may be the pressing of an alarm key or a notification key; it may also be a gesture operation, for example shaking the hand to trigger the shortcut alarm operation and making a fist to trigger the shortcut notification operation, as long as the gestures differ. The trigger operation may also be any other suitable operation and is not particularly limited here. After the target object is determined, executing the shortcut operation in response to the trigger operation spares the user from having to place a call or operate a terminal manually when giving an alarm or sending a notification, which reduces time consumption and improves convenience. In addition, the entire process of identifying and confirming the target object and executing the shortcut operation is covert and quick and does not expose the user, so the user's safety is protected.
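The gesture-to-operation dispatch could be sketched as follows. The gesture names and operation identifiers are hypothetical; the patent only requires that different gestures map to different shortcut operations and that the operations permitted depend on the target type:

```python
GESTURE_OPS = {"shake_hand": "quick_alert", "make_fist": "quick_notify"}

def allowed_ops(target_type):
    # A suspect triggers only the quick alarm; a missing person may
    # trigger the quick alarm and/or the quick notification.
    return {"suspect": {"quick_alert"},
            "missing_person": {"quick_alert", "quick_notify"}}[target_type]

def on_trigger(gesture, target_type):
    # Resolve a trigger gesture to a shortcut operation, or None if
    # the gesture is unknown or not permitted for this target type.
    op = GESTURE_OPS.get(gesture)
    return op if op in allowed_ops(target_type) else None
```

For a suspect, a fist gesture resolves to nothing, since only the quick alert is permitted for that target type.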
In the present exemplary embodiment, an augmented reality-based target object recognition apparatus is also provided, and referring to fig. 4, the apparatus 400 mainly includes the following modules:
the object-to-be-selected determining module 401 is configured to acquire first type features of a plurality of objects in a field of view through an augmented reality device, and determine an object to be selected according to the first type features;
a feature comparison module 402, configured to compare a second type feature of the to-be-selected object with a second type feature of the target object to obtain a comparison result;
and a target determining module 403, configured to send information of the target object to a terminal and execute a shortcut operation if it is determined that the object to be selected is the target object according to the comparison result.
In an exemplary embodiment of the present disclosure, the candidate object determination module includes: an image acquisition module, configured to acquire, through a camera in the augmented reality device, images of the plurality of objects within the field of view of the augmented reality device; a first feature extraction module, configured to extract the first type of feature from the images, the first type of feature comprising a facial feature; a feature matching module, configured to match the first type features of the plurality of objects with the first type features of the target object to determine a matching result; and an object determining module, configured to take the object corresponding to the first type feature that the matching result indicates was successfully matched as the candidate object.
In an exemplary embodiment of the present disclosure, the feature matching module includes: a similarity obtaining module, configured to obtain similarities between the first type features of the multiple objects and the first type features of the target object; and the matching control module is used for determining that the matching result is successful if the similarity meets a threshold condition.
In an exemplary embodiment of the present disclosure, when the candidate object is determined, the apparatus further includes: a prompt module, configured to lock the candidate object and provide prompt information indicating that the candidate object has been acquired.
In an exemplary embodiment of the present disclosure, the goal determining module includes: the first determining module is used for determining the object to be selected as the target object if the second type characteristics of the object to be selected are consistent with the second type characteristics of the target object; and the second determining module is used for determining that the object to be selected does not belong to the target object if the second type characteristics of the object to be selected are inconsistent with the second type characteristics of the target object.
In an exemplary embodiment of the present disclosure, the goal determining module includes: and the information sending module is used for sending the position information of the target object and the information of the visual field range corresponding to the target object to the terminal.
In an exemplary embodiment of the present disclosure, the targeting module includes: and the shortcut operation control module is used for responding to the trigger operation of the augmented reality equipment and executing the shortcut operation, and the shortcut operation comprises a shortcut alarm operation and/or a shortcut notification operation.
It should be noted that, specific details of each module in the augmented reality-based target object identification apparatus have been described in detail in the corresponding augmented reality-based target object identification method, and therefore are not described herein again.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, according to embodiments of the present disclosure, the features and functionality of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In an exemplary embodiment of the present disclosure, an electronic device capable of implementing the above method is also provided.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or program product. Thus, various aspects of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects, which may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 600 according to this embodiment of the invention is described below with reference to fig. 6. The electronic device 600 shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present invention.
As shown in fig. 6, the electronic device 600 is in the form of a general purpose computing device. The components of the electronic device 600 may include, but are not limited to: the at least one processing unit 610, the at least one memory unit 620, and a bus 630 that couples the various system components including the memory unit 620 and the processing unit 610.
Wherein the storage unit stores program code that is executable by the processing unit 610 to cause the processing unit 610 to perform steps according to various exemplary embodiments of the present invention as described in the above section "exemplary methods" of the present specification. For example, the processing unit 610 may perform the steps as shown in fig. 2.
The storage unit 620 may include readable media in the form of volatile memory units, such as a random access memory unit (RAM) 6201 and/or a cache memory unit 6202, and may further include a read-only memory unit (ROM) 6203.
The memory unit 620 may also include a program/utility 6204 having a set (at least one) of program modules 6205, such program modules 6205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
Bus 630 may represent one or more of several types of bus structures, including a memory unit bus or memory unit controller, a peripheral bus, an accelerated graphics port, a processing unit, or a local bus using any of a variety of bus architectures.
The electronic device 600 may also communicate with one or more external devices 700 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 600, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 600 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interface 650. Also, the electronic device 600 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 660. As shown, the network adapter 660 communicates with the other modules of the electronic device 600 over the bus 630. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 600, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an exemplary embodiment of the present disclosure, there is also provided a computer-readable storage medium having stored thereon a program product capable of implementing the above-described method of the present specification. In some possible embodiments, the various aspects of the invention may also be implemented in the form of a program product comprising program code means for causing a terminal device to carry out the steps according to various exemplary embodiments of the invention described in the above section "exemplary method" of this description, when said program product is run on said terminal device.
The program product for implementing the above method may employ a portable compact disc read-only memory (CD-ROM) including program code, and may be run on a terminal device such as a personal computer. However, the program product of the present invention is not limited in this regard; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java or C++ and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes involved in methods according to exemplary embodiments of the invention, and are not intended to be limiting. It will be readily appreciated that the processes illustrated in the above figures are not intended to indicate or limit the temporal order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (8)

1. A target object identification method based on augmented reality is characterized by comprising the following steps:
acquiring images of a plurality of objects in a visual field range of augmented reality equipment through a camera in the augmented reality equipment, extracting first type features from the images, matching the first type features of the plurality of objects with the first type features of the target object to determine a matching result, and taking an object corresponding to the first type features, which is successfully matched, as an object to be selected; the first type of feature comprises a facial feature or an ocular feature;
comparing the second type characteristic of the object to be selected with the second type characteristic of the target object to obtain a comparison result; the second type of feature is other type of information than the first type of feature;
if the object to be selected is determined to be the target object according to the comparison result, sending the information of the target object to a terminal, and executing shortcut operation;
wherein matching the first type features of the plurality of objects with the first type features of the target object to determine a matching result comprises:
obtaining similarities between the first type features of the plurality of objects and the first type features of the target object;
and if the similarity meets a threshold condition, determining that the matching result is successful.
2. The augmented reality-based target object recognition method of claim 1, wherein in determining the object to be selected, the method further comprises:
and locking the object to be selected, and providing prompt information for prompting the object to be selected.
3. The augmented reality-based target object recognition method of claim 1, wherein determining the object to be selected as the target object according to the comparison result comprises:
if the second type characteristics of the object to be selected are consistent with the second type characteristics of the target object, determining that the object to be selected is the target object;
and if the second type characteristics of the object to be selected are not consistent with the second type characteristics of the target object, determining that the object to be selected does not belong to the target object.
4. The augmented reality-based target object recognition method of claim 1, wherein sending the information of the target object to a terminal comprises:
and sending the position information of the target object and the information of the view range corresponding to the target object to the terminal.
5. The augmented reality-based target object recognition method of claim 1, wherein performing a shortcut operation comprises:
and responding to the trigger operation of the augmented reality equipment, and executing the shortcut operation, wherein the shortcut operation comprises a shortcut alarm operation and/or a shortcut notification operation.
6. An augmented reality-based target object recognition apparatus, comprising:
the device comprises a candidate object determining module, a candidate object determining module and a target object determining module, wherein the candidate object determining module is used for acquiring images of a plurality of objects in a visual field range of augmented reality equipment through a camera in the augmented reality equipment, extracting first type features from the images, matching the first type features of the plurality of objects with the first type features of the target object to determine a matching result, and taking the object corresponding to the first type features which are successfully matched as a candidate object; the first type of feature comprises a facial feature or an ocular feature;
the characteristic comparison module is used for comparing the second type characteristic of the object to be selected with the second type characteristic of the target object to obtain a comparison result; the second type of feature is other type of information than the first type of feature;
the target determining module is used for sending the information of the target object to a terminal and executing shortcut operation if the object to be selected is determined to be the target object according to the comparison result;
wherein matching the first type features of the plurality of objects with the first type features of the target object to determine a matching result comprises:
obtaining similarities between the first type features of the plurality of objects and the first type features of the target object;
and if the similarity meets a threshold condition, determining that the matching result is successful.
7. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the augmented reality based target object recognition method of any one of claims 1-5 via execution of the executable instructions.
8. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method for augmented reality based target object recognition according to any one of claims 1 to 5.
CN201910678874.5A 2019-07-25 2019-07-25 Target object identification method, device, equipment and medium based on augmented reality Active CN110458052B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910678874.5A CN110458052B (en) 2019-07-25 2019-07-25 Target object identification method, device, equipment and medium based on augmented reality

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910678874.5A CN110458052B (en) 2019-07-25 2019-07-25 Target object identification method, device, equipment and medium based on augmented reality

Publications (2)

Publication Number Publication Date
CN110458052A CN110458052A (en) 2019-11-15
CN110458052B true CN110458052B (en) 2023-04-07

Family

ID=68483562

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910678874.5A Active CN110458052B (en) 2019-07-25 2019-07-25 Target object identification method, device, equipment and medium based on augmented reality

Country Status (1)

Country Link
CN (1) CN110458052B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111104927B (en) * 2019-12-31 2024-03-22 维沃移动通信有限公司 Information acquisition method of target person and electronic equipment
CN112949488B (en) * 2021-03-01 2023-09-01 北京京东振世信息技术有限公司 Picture information processing method and device, computer storage medium and electronic equipment

Citations (5)

Publication number Priority date Publication date Assignee Title
WO2018119599A1 (en) * 2016-12-26 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and device for searching for person and communication system
CN108776787A (en) * 2018-06-04 2018-11-09 北京京东金融科技控股有限公司 Image processing method and device, electronic equipment, storage medium
CN109190601A (en) * 2018-10-19 2019-01-11 银河水滴科技(北京)有限公司 Recognition of objects method and device under a kind of monitoring scene
CN109508524A (en) * 2018-11-14 2019-03-22 李泠瑶 Authentication method, system and storage medium
CN109800737A (en) * 2019-02-02 2019-05-24 深圳市商汤科技有限公司 Face recognition method and device, electronic equipment and storage medium

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US10528816B2 (en) * 2017-11-30 2020-01-07 Salesforce.Com, Inc. System and method for retrieving and displaying supplemental information and pertinent data using augmented reality

Patent Citations (5)

Publication number Priority date Publication date Assignee Title
WO2018119599A1 (en) * 2016-12-26 2018-07-05 深圳前海达闼云端智能科技有限公司 Method and device for searching for person and communication system
CN108776787A (en) * 2018-06-04 2018-11-09 北京京东金融科技控股有限公司 Image processing method and device, electronic equipment, storage medium
CN109190601A (en) * 2018-10-19 2019-01-11 银河水滴科技(北京)有限公司 Recognition of objects method and device under a kind of monitoring scene
CN109508524A (en) * 2018-11-14 2019-03-22 李泠瑶 Authentication method, system and storage medium
CN109800737A (en) * 2019-02-02 2019-05-24 深圳市商汤科技有限公司 Face recognition method and device, electronic equipment and storage medium

Non-Patent Citations (1)

Title
Design and application of a mobile augmented reality system combined with image recognition; Yan Lei et al.; Journal of Image and Graphics (《中国图象图形学报》); 2016-02-16 (Issue 02); full text *

Also Published As

Publication number Publication date
CN110458052A (en) 2019-11-15

Similar Documents

Publication Publication Date Title
CN108776787B (en) Image processing method and device, electronic device and storage medium
US10685245B2 (en) Method and apparatus of obtaining obstacle information, device and computer storage medium
US11501514B2 (en) Universal object recognition
EP4131030A1 (en) Method and apparatus for searching for target
US10930010B2 (en) Method and apparatus for detecting living body, system, electronic device, and storage medium
US11875683B1 (en) Facial recognition technology for improving motor carrier regulatory compliance
US20180068173A1 (en) Identity verification via validated facial recognition and graph database
EP3584745A1 (en) Live body detection method and apparatus, system, electronic device, and storage medium
JP6986187B2 (en) Person identification methods, devices, electronic devices, storage media, and programs
CN111259751A (en) Video-based human behavior recognition method, device, equipment and storage medium
KR20200048201A (en) Electronic device and Method for controlling the electronic device thereof
JP7106742B2 (en) Face recognition method, device, electronic device and computer non-volatile readable storage medium
CN111914775B (en) Living body detection method, living body detection device, electronic equipment and storage medium
CN111914812A (en) Image processing model training method, device, equipment and storage medium
CN110458052B (en) Target object identification method, device, equipment and medium based on augmented reality
CN115699096B (en) Tracking augmented reality devices
JP2022003526A (en) Information processor, detection system, method for processing information, and program
CN113657398A (en) Image recognition method and device
CN113031813A (en) Instruction information acquisition method and device, readable storage medium and electronic equipment
CN116912478A (en) Object detection model construction, image classification method and electronic equipment
CN113792569B (en) Object recognition method, device, electronic equipment and readable medium
CN107241548B (en) Cursor control method, cursor control device, terminal and storage medium
US11501504B2 (en) Method and apparatus for augmented reality
CN113989562A (en) Model training and image classification method and device
CN112115740A (en) Method and apparatus for processing image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant