CN106815568A - Method and system for identifying a target object - Google Patents

Method and system for identifying a target object

Info

Publication number
CN106815568A
Authority
CN
China
Prior art keywords
identification information
face
information
basic identification
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611258394.6A
Other languages
Chinese (zh)
Inventor
赵国成
余念伦
张凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ewatt Technology Co Ltd
Original Assignee
Ewatt Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Ewatt Technology Co Ltd
Priority to CN201611258394.6A
Publication of CN106815568A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The present invention discloses a method and system for identifying a target object. The method includes: acquiring first face basic identification information of the target object; comparing the first face basic identification information with face standard identification information; if the two match, sending a locking instruction to lock the target object; if they do not match, judging whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object; if so, restoring the first face basic identification information to second face basic identification information of the target object before face-lifting; comparing the second face basic identification information with the face standard identification information; and if they match, sending a locking instruction to lock the target object. The method and system provided by the application solve the prior-art technical problem that identification information is inaccurate when an unmanned aerial vehicle identifies a target object.

Description

Method and system for identifying target object
Technical Field
The invention relates to the technical field of unmanned aerial vehicles, in particular to a method and a system for identifying a target object.
Background
An unmanned aerial vehicle is an aircraft that achieves aerodynamic lift and flight through the relative motion between its fuselage and the air. Unmanned aerial vehicles include rotorcraft, helicopters, fixed-wing aircraft, and the like.
At present, technologies by which an unmanned aerial vehicle synchronously acquires information and identifies and tracks a target during flight are gradually becoming a trend.
However, with the continuous development of plastic-surgery technology and the rapid spread of the face-lifting trend in recent years, purposeful persons such as criminals and spies often evade the image-information acquisition and recognition of an unmanned aerial vehicle's camera recognition system through facial plastic surgery. As a result, the unmanned aerial vehicle cannot accurately acquire and recognize the identification information of the target object, which greatly obstructs technologies for recognizing and tracking a target object by unmanned aerial vehicle.
Therefore, the unmanned aerial vehicle in the prior art has the technical problem that identification information is inaccurate in the process of identifying the target object.
Disclosure of Invention
The invention provides a method and a system for identifying a target object, which are used for solving the technical problem that identification information is inaccurate in the process of identifying the target object by an unmanned aerial vehicle in the prior art.
In a first aspect, an embodiment of the present application provides a method for identifying a target object, applied to an unmanned aerial vehicle, the method including: acquiring first pre-judged position information of the target object; acquiring first face basic identification information of the target object according to the first pre-judged position information; comparing the first face basic identification information with face standard identification information in a pre-constructed standard database; if the first face basic identification information matches the face standard identification information, sending a locking instruction to lock the target object; if the first face basic identification information does not match the face standard identification information, judging whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object; if so, restoring the first face basic identification information to second face basic identification information of the target object before face-lifting; comparing the second face basic identification information with the face standard identification information; if the second face basic identification information matches the face standard identification information, sending a locking instruction to lock the target object; and if the second face basic identification information does not match the face standard identification information, acquiring second pre-judged position information of the target object.
Optionally, the comparing the first face basic identification information with the face standard identification information in the pre-constructed standard database includes: performing priority classification on the first face basic identification information, wherein the classified first face basic identification information comprises first priority matching information, second priority matching information and third priority matching information; comparing the first face basic identification information with the face standard identification information according to a priority order; the priority order includes: an order from the first priority matching information, the second priority matching information to the third priority matching information; if the first face basic identification information matches the face standard identification information, the method includes: if the first priority matching information is matched with the face standard identification information, judging that the first face basic identification information is matched with the face standard identification information; or, if the second priority matching information and the third priority matching information are both matched with the face standard identification information, it is determined that the first face basic identification information is matched with the face standard identification information.
Optionally, the first face basic identification information includes: first cheekbone basic identification information, first eye basic identification information, first nose basic identification information, first eyebrow basic identification information, first ear basic identification information and first tooth identification information; wherein the first cheekbone basic identification information is first priority matching information; the first eye basic identification information, the first nose basic identification information and the first eyebrow basic identification information are second priority matching information; the first ear basic identification information and the first tooth identification information are third priority matching information; the face standard identification information includes: cheekbone standard identification information, eye standard identification information, nose standard identification information, eyebrow standard identification information, ear standard identification information, and tooth standard identification information.
Optionally, the judging whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object includes: constructing an adjustment range area of the face standard identification information; acquiring comparison difference information of the first face basic identification information relative to the face standard identification information; judging whether the comparison difference information falls within the adjustment range area; and if so, determining that the first face basic identification information is face basic identification information obtained after face-lifting of the target object.
Optionally, the restoring the first face basic identification information to the second face basic identification information of the target object before face-lifting includes: correcting the first face basic identification information according to the comparison difference information; and determining the corrected face basic identification information as the second face basic identification information.
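The two optional steps above (the adjustment-range judgment and the restoration by comparison difference) can be sketched as follows. This is a hypothetical illustration only: the feature names, the numeric feature representation, and the per-feature adjustment ranges are invented assumptions, not values from the patent.

```python
# Illustrative sketch of the adjustment-range check and restoration.
# ADJUST_RANGE models the plausible surgical change per feature
# (values are invented for illustration).
ADJUST_RANGE = {"cheekbone": (-4.0, -0.5), "nose": (-2.0, 2.0)}

def comparison_difference(info, standard):
    """Comparison difference information of acquired vs. standard features."""
    return {k: info[k] - standard[k] for k in standard}

def is_post_surgery(info, standard):
    """Judge whether every deviation falls within the adjustment range area."""
    diff = comparison_difference(info, standard)
    return all(ADJUST_RANGE[k][0] <= diff[k] <= ADJUST_RANGE[k][1]
               for k in diff)

def restore_second_info(info, standard):
    """Correct the first information by the comparison difference,
    yielding the second (pre-surgery) identification information."""
    diff = comparison_difference(info, standard)
    return {k: info[k] - diff[k] for k in info}
```

Note that correcting by the full comparison difference, as the patent describes, restores the features exactly to the standard values; a real system would presumably estimate the surgical change independently of the database entry.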
In a second aspect, an embodiment of the present application provides a system for identifying a target object, applied to an unmanned aerial vehicle, the system including: a first position module, configured to acquire first pre-judged position information of the target object; an acquisition module, configured to acquire first face basic identification information of the target object according to the first pre-judged position information; a first comparison module, configured to compare the first face basic identification information with face standard identification information in a pre-constructed standard database; a first locking module, configured to send a locking instruction to lock the target object if the first face basic identification information matches the face standard identification information; a judging module, configured to judge, if the first face basic identification information does not match the face standard identification information, whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object; a restoring module, configured to restore, if so, the first face basic identification information to second face basic identification information of the target object before face-lifting; a second comparison module, configured to compare the second face basic identification information with the face standard identification information; a second locking module, configured to send a locking instruction to lock the target object if the second face basic identification information matches the face standard identification information; and a second position module, configured to acquire second pre-judged position information of the target object if the second face basic identification information does not match the face standard identification information.
Optionally, the first comparing module includes: the first comparison submodule is used for carrying out priority classification on the first face basic identification information, and the classified first face basic identification information comprises first priority matching information, second priority matching information and third priority matching information; the second comparison sub-module is used for comparing the first face basic identification information with the face standard identification information according to the priority order; the priority order includes: an order from the first priority matching information, the second priority matching information to the third priority matching information; the first locking module is further configured to: if the first priority matching information is matched with the face standard identification information, judging that the first face basic identification information is matched with the face standard identification information; or, if the second priority matching information and the third priority matching information are both matched with the face standard identification information, it is determined that the first face basic identification information is matched with the face standard identification information.
Optionally, the first face basic identification information includes: first cheekbone basic identification information, first eye basic identification information, first nose basic identification information, first eyebrow basic identification information, first ear basic identification information and first tooth identification information; wherein the first cheekbone basic identification information is first priority matching information; the first eye basic identification information, the first nose basic identification information and the first eyebrow basic identification information are second priority matching information; the first ear basic identification information and the first tooth identification information are third priority matching information; the face standard identification information includes: cheekbone standard identification information, eye standard identification information, nose standard identification information, eyebrow standard identification information, ear standard identification information, and tooth standard identification information.
Optionally, the judging module further includes: a first judging submodule, configured to construct an adjustment range area of the face standard identification information; a second judging submodule, configured to acquire comparison difference information of the first face basic identification information relative to the face standard identification information; a third judging submodule, configured to judge whether the comparison difference information falls within the adjustment range area; and a fourth judging submodule, configured to determine, if so, that the first face basic identification information is face basic identification information obtained after face-lifting of the target object.
Optionally, the reduction module includes: the first reduction sub-module is used for correcting the first face basic identification information according to the comparison difference information; and the second restoring submodule is used for determining the modified face basic identification information as the second face basic identification information.
When the method and system provided by the embodiments of the application are applied to an unmanned aerial vehicle, after the first face basic identification information is judged not to match the face standard identification information, it is further judged whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object; if so, the first face basic identification information is restored to the second face basic identification information of the target object before face-lifting, and the restored second face basic identification information is then compared with the face standard identification information. This effectively ensures the accuracy of the face basic identification information participating in the comparison, and solves the prior-art technical problem that the identification information obtained when an unmanned aerial vehicle identifies a target object is inaccurate.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of a method for identifying an object in an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating a comparison process between basic identification information and standard identification information in an embodiment of the present application;
FIG. 3 is a flowchart of a method for determining whether a target object is shaped according to an embodiment of the present disclosure;
fig. 4 is a schematic block diagram of a system for identifying an object in an embodiment of the present application.
Detailed Description
The embodiment of the application provides a method and a system for identifying a target object, which are used for solving the technical problem that identification information is inaccurate when an unmanned aerial vehicle identifies the target object in the prior art, and achieve the technical effect of high identification precision.
The technical scheme in the embodiment of the application has the following general idea:
when the unmanned aerial vehicle identifies a target object, first obtaining first face basic identification information of the target object; comparing the first face basic identification information with face standard identification information in a pre-constructed standard database; if the first face basic identification information is matched with the face standard identification information, sending a locking instruction to lock the target object; if the first face basic identification information is not matched with the face standard identification information, judging whether the first face basic identification information is the face basic identification information obtained after the face-lifting of the target object; if so, restoring the first face basic identification information to second face basic identification information of the target object before face-lifting; comparing the second face basic identification information with the face standard identification information; and if the second face basic identification information is matched with the face standard identification information, sending a locking instruction to lock the target object.
The method adds a step of judging whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object. When the unmanned aerial vehicle identifies the target object and the detected first face basic identification information does not match the face standard identification information in the standard database, it is further judged whether the first face basic identification information is inaccurate information acquired after face-lifting. This overcomes the technical defect that, when an identified target evades the image-information acquisition and recognition of the unmanned aerial vehicle's camera recognition system through face-lifting, the unmanned aerial vehicle cannot accurately acquire and recognize the identification information of the target object.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the preceding and following objects.
Example one
The present embodiment provides a method and a system for identifying a target object, please refer to fig. 1, where the method includes:
step S100, acquiring first face basic identification information of a target object;
step S200, comparing the first face basic identification information with face standard identification information in a standard database constructed in advance;
step S300a, if the first face basic identification information matches the face standard identification information, sending a locking instruction to lock the target object;
step S300b, if the first face basic identification information does not match the face standard identification information, determining whether the first face basic identification information is the face basic identification information obtained after the face-lifting of the target object;
step S400, if yes, restoring the first face basic identification information to second face basic identification information of the target object before face-lifting;
step S500, comparing the second face basic identification information with the face standard identification information;
step S600, if the second face basic identification information is matched with the face standard identification information, a locking instruction is sent to lock the target object.
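The flow of steps S100 through S600 above can be sketched as follows. This is a minimal, hypothetical illustration: the numeric feature representation, the tolerances, and the function names are invented assumptions, not part of the patent. Note that the patent's restoration rule (correcting by the comparison difference) literally recovers the standard values.

```python
# Hypothetical sketch of the S100-S600 identification flow.
MATCH_TOL = 0.5      # max per-feature deviation for a direct match (assumed)
SURGERY_TOL = 3.0    # deviations up to this attributed to face-lifting (assumed)

def match(info, standard):
    # S200/S500: compare acquired features against the standard database
    return all(abs(info[k] - standard[k]) <= MATCH_TOL for k in standard)

def detect_surgery(info, standard):
    # S300b: mismatch is plausibly explained by face-lifting if every
    # deviation lies within the assumed adjustment range
    return all(abs(info[k] - standard[k]) <= SURGERY_TOL for k in standard)

def restore(info, standard):
    # S400: correct the first information by the comparison difference
    diff = {k: info[k] - standard[k] for k in standard}
    return {k: info[k] - diff[k] for k in standard}

def identify_target(first_info, standard):
    if match(first_info, standard):
        return "lock"                          # S300a: send locking instruction
    if not detect_surgery(first_info, standard):
        return "reacquire"                     # obtain second pre-judged position
    second_info = restore(first_info, standard)          # S400
    return "lock" if match(second_info, standard) else "reacquire"  # S600
```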
For example, the method may be applied to unmanned aerial vehicles, but this does not mean that the method is applicable only to unmanned aerial vehicles. In other words, the method provided by this embodiment can also be used in other electronic systems with transportation functions, such as: a manned aircraft with an electronic information communication function, an automobile with an electronic information communication function, a motorcycle with an electronic information communication function, a ship with an electronic information communication function, and the like.
Specifically, with the continuous development of plastic-surgery technology and the rapid spread of the face-lifting trend in recent years, purposeful persons such as criminals and spies often evade the image-information acquisition and recognition of the unmanned aerial vehicle's camera recognition system through facial plastic surgery. According to the inventor's research, once the target object modifies partial parameter information of the face by plastic surgery, such as cheekbone information, eye information, nose information, eyebrow information, ear information and tooth information, the face information of the target object acquired by the unmanned aerial vehicle's camera recognition system can no longer be compared and matched with the information in the standard database, so the unmanned aerial vehicle cannot accurately acquire and recognize the identification information of the target object, which greatly obstructs technologies for recognizing and tracking a target object by unmanned aerial vehicle. The method provided by the invention has the beneficial effect that, when the detected first face basic identification information does not match the face standard identification information in the standard database, it is further judged whether the detected information is inaccurate information acquired after face-lifting, giving high identification precision.
The method for identifying a target object provided by the present application is described in detail below with reference to fig. 1:
firstly, executing step S100 to obtain first face basic identification information of a target object;
specifically, the first face basic identification information of the target object may be acquired by a camera system with a photographing function fixed at the bottom of the unmanned aerial vehicle body; the camera system may include a camera. When the camera photographs a predetermined target object, the light reflected by the target object is collected by the camera lens and focused on the imaging surface (for example, focused and amplified), then processed and adjusted by the camera's internal circuitry. The resulting standard signal is recorded on a recording medium and, at the same time, transmitted through a data-transmission or image-transmission channel in the unmanned aerial vehicle to a ground terminal, such as a ground station, for display.
Of course, during photographing the target object may be either a static object or a dynamic object. The two cases are described below:
in the first situation, when the target object is a static object, the camera of the camera locks the target object for information acquisition, and if the camera position of the camera does not need to be adjusted at this time, the ground station terminal can control the unmanned aerial vehicle to hover in the air, so that the target object can be acquired in the direction. When the acquisition direction of the unmanned aerial vehicle relative to the target object is adjusted and the rotation angle of the camera needs to be controlled, the unmanned aerial vehicle can be controlled by the ground station terminal to perform corresponding flight adjustment actions in the air, such as forward flight, backward flight, leftward flight, rightward flight and the like, so that the angle adjustment of the camera relative to the target object is completed. In addition, in the process, besides controlling the flight action of the unmanned aerial vehicle and realizing the angle adjustment of the camera relative to the target object, the embodiment of the application can also rotate and swing through controlling the camera on the holder, so that the camera can adjust the acquisition angle relative to the target object, and at the moment, the unmanned aerial vehicle can still be in a hovering state in the whole process.
In the second situation, when the target object is a dynamic object, the camera of the camera locks the target object for information acquisition, and at this time, because the target object is a dynamic motion process, in order to enable the camera of the camera to lock the target object for identification, the whole unmanned aerial vehicle and the camera system are required to be a dynamic motion process, and only then, the camera of the camera can be ensured to capture the image/video information of the target object in real time. Therefore, in this case, in the moving process of the target object, the flight state of the unmanned aerial vehicle is synchronously controlled through the ground station terminal, and the camera on the pan-tilt is controlled through the ground station terminal to rotate and swing, so that the image/video information of the target object can be captured in real time through the synchronous motion of the unmanned aerial vehicle and the camera.
It should be noted that the above two cases are not two fixed cases in the embodiment of the present application. In other words, whether the target object moves or not is uncertain, and at time t1, the target object is static at position a, and at time t2, the target object may still be at position a, but it may also be moved from position a to position B. At this time, the target object is switched from the prior static process to the subsequent dynamic process. Furthermore, the control states of the unmanned aerial vehicle and/or the camera are also required to be switched according to the method, and the state information of the corresponding target object is matched. Finally, the technical effect of capturing the image/video information of the target object in real time is achieved.
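The switching between the static case (hovering, gimbal-only adjustment) and the dynamic case (synchronized flight and camera motion) described above can be sketched as follows. The class and method names are invented for illustration and are not part of the patent.

```python
# Hypothetical sketch of switching between hover and tracking modes
# depending on whether the target object is moving.
class TrackingController:
    def __init__(self):
        self.mode = "hover"
        self.last_position = None

    def update(self, target_position):
        """Update control mode from the latest observed target position."""
        if self.last_position is not None and target_position != self.last_position:
            # Case two: target moved, so the drone and camera must move with it
            self.mode = "track"
        else:
            # Case one: target static, so hover and adjust the gimbal only
            self.mode = "hover"
        self.last_position = target_position
        return self.mode
```

A target static at position A at time t1 and moved to position B at t2 would switch the controller from "hover" to "track", matching the switching behavior described in the text.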
In addition, before step S100 is executed, that is, before the first face basic identification information of the target object is acquired, the first prejudgment position information of the target object may be additionally acquired.
In other words, when the unmanned aerial vehicle is required to capture the first face basic identification information of the target object through the camera system, the accurate position or position area of the target object is not yet known. To avoid having the unmanned aerial vehicle search for a single object in a vast sea of people, and thereby to save comparison time and improve comparison efficiency, the first pre-judged position information of the target object may be obtained in advance before step S100 is executed. The first pre-judged position information may be an area where the target object is likely to appear, provided by officials or the media, or an area where the target object often appears. After the first pre-judged position information is obtained, the first face basic identification information of the target object can be acquired within that position area in a targeted manner, greatly shortening the comparison time and improving the comparison efficiency.
Next, after acquiring the first face basic identification information of the target object, executing step S200, comparing the first face basic identification information with face standard identification information in a standard database constructed in advance;
in the embodiment of the present application, the pre-constructed standard database may contain collected "true information" for the target object, that is, information that truly identifies the identity of the target object. The information may be actual record information of criminal suspects or spies provided by public security institutions, their household registration locations, or records from authorities such as medical institutions.
It should be added that, in the embodiment of the present application, there are many ways of performing the update continuously, and the embodiment of the present application mainly introduces the following two ways:
firstly, substitution updating, in which the updated data replaces the prior data; this updating mode ensures the accuracy of the data information while reducing memory occupation;
secondly, cumulative updating, in which each batch of updated data is stored in newly allocated space alongside the existing data; this updating mode ensures the integrity of the data information.
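The two updating strategies above can be sketched as follows. This is a minimal illustration only; the class and field names (`StandardDatabase`, `records`) are hypothetical and not taken from the patent:

```python
class StandardDatabase:
    """Toy standard database keeping per-object record versions."""

    def __init__(self):
        self.records = {}  # object_id -> list of data versions

    def replace_update(self, object_id, new_data):
        """Substitution update: new data replaces all prior data,
        keeping the record accurate while minimising storage."""
        self.records[object_id] = [new_data]

    def cumulative_update(self, object_id, new_data):
        """Cumulative update: each version is stored alongside the
        existing data, preserving the full history of the record."""
        self.records.setdefault(object_id, []).append(new_data)


db = StandardDatabase()
db.cumulative_update("suspect-01", {"cheekbone_width": 13.2})
db.cumulative_update("suspect-01", {"cheekbone_width": 13.4})  # both kept
db.replace_update("suspect-02", {"cheekbone_width": 12.0})     # only latest kept
```

Under the cumulative mode, the retained history is also what makes the charting step described next possible.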
Further, on the basis of the second updating manner, a charting and analysis step may be added: after new data information is obtained in each update, it may be plotted cumulatively to form a state tracking diagram, so that the progression of the target object is continuously known and its next action can be anticipated in advance.
Further, referring to fig. 2, comparing the first face basic identification information with the face standard identification information in the pre-constructed standard database may be performed according to the following sub-steps:
step S201, carrying out priority classification on the first face basic identification information, wherein the classified first face basic identification information comprises first priority matching information, second priority matching information and third priority matching information;
step S202, comparing the first face basic identification information with the face standard identification information according to a priority order; the priority order includes: an order from the first priority matching information, the second priority matching information to the third priority matching information;
before describing step S201 and step S202 in detail, in order to better illustrate the present application, the following describes the types of the first face basic identification information and the face standard identification information provided in the embodiments of the present application:
in an embodiment of the present application, the first face basic identification information may include: first cheekbone basic identification information, first eye basic identification information, first nose basic identification information, first eyebrow basic identification information, first ear basic identification information and first tooth identification information; wherein the first cheekbone basic identification information is first priority matching information; the first eye basic identification information, the first nose basic identification information and the first eyebrow basic identification information are second priority matching information; the first ear basic identification information and the first tooth identification information are third priority matching information. The face standard identification information may include: cheekbone standard identification information, eye standard identification information, nose standard identification information, eyebrow standard identification information, ear standard identification information, and tooth standard identification information.
The first cheekbone basic identification information is correspondingly matched with the cheekbone standard identification information; the first eye basic identification information is compared and matched with the eye standard identification information; the first nose basic identification information is compared and matched with the nose standard identification information; the first eyebrow basic identification information is compared and matched with the standard eyebrow identification information; the first ear basic identification information is compared and matched with the ear standard identification information; the first tooth identification information is matched with the tooth standard identification information in a comparison mode.
Specifically, the first cheekbone basic identification information is first priority matching information; the first eye basic identification information, the first nose basic identification information and the first eyebrow basic identification information are second priority matching information; the first ear basic identification information and the first tooth identification information are third priority matching information. Here, it should be noted that the priority levels of the first priority matching information, the second priority matching information, and the third priority matching information are sequentially decreased. That is, the priority of the first priority matching information is greater than the priority of the second priority matching information, and the priority of the second priority matching information is greater than the priority of the third priority matching information.
It should be noted that the cheekbones, as facial identification information, are highly representative, and are also the most difficult for the target object to modify or adjust later; that is, their identification value is the highest. Therefore, in the embodiment of the present application, the cheekbone basic identification information is used as the first priority matching information with the highest priority level. Because the nose, eyebrows and eyes are less distinctive and are easier for the target object to modify and adjust later, the nose basic identification information, eyebrow basic identification information and eye basic identification information are used as second priority matching information, ranked below the cheekbone basic identification information. The ears and teeth are very easily occluded and may not be captured at all; for example, the ears are easily covered by hair, and the teeth cannot be captured when the target object's mouth is closed. The ear basic identification information and tooth basic identification information are therefore used as third priority matching information with the lowest priority level.
Therefore, in the matching order described above, the embodiment of the present application performs matching in the order from the first priority matching information, the second priority matching information, to the third priority matching information.
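Steps S201 and S202 can be illustrated with a minimal sketch. The tier grouping follows the text above, but the function name, the tolerance threshold, and the assumption that each feature is a normalised scalar parameter are illustrative only, not part of the patent:

```python
# Priority tiers from the embodiment: cheekbone first, then
# eye/nose/eyebrow, then the easily occluded ear/tooth features.
PRIORITY_TIERS = [
    ["cheekbone"],                # first priority: hardest to alter
    ["eye", "nose", "eyebrow"],   # second priority
    ["ear", "tooth"],             # third priority: easily occluded
]


def compare_by_priority(basic_info, standard_info, tolerance=0.05):
    """Compare captured parameters against standard ones tier by tier,
    in priority order; return one boolean match result per tier."""
    results = []
    for tier in PRIORITY_TIERS:
        matched = all(
            feature in basic_info
            and abs(basic_info[feature] - standard_info[feature]) <= tolerance
            for feature in tier
        )
        results.append(matched)
    return results


tiers = compare_by_priority(
    {"cheekbone": 1.0, "eye": 0.5, "nose": 0.5,
     "eyebrow": 0.5, "ear": 0.2, "tooth": 0.2},
    {"cheekbone": 1.0, "eye": 0.5, "nose": 0.5,
     "eyebrow": 0.5, "ear": 0.9, "tooth": 0.2},
)
# first and second tiers match; the ear difference fails the third tier
```

A feature that is missing from the captured information (for example a hidden ear) simply fails its tier, which is consistent with treating occludable features as lowest priority.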
Next, according to the judgment result of step S200, either step S300a or step S300b is executed:
step S300a, if the first face basic identification information matches the face standard identification information, sending a locking instruction to lock the target object;
step S300b, if the first face basic identification information does not match the face standard identification information, determining whether the first face basic identification information is the face basic identification information obtained after the face-lifting of the target object;
in the specific implementation process, as mentioned above, with the continuous development of plastic surgery technology and its rapid spread in recent years, many people, especially motivated parties such as criminals and spies, may attempt to evade the image acquisition and recognition of the drone's camera recognition system through facial plastic surgery. Once the target object modifies the parameters representing its facial information in this way, the facial information acquired through the drone's camera recognition system can no longer be matched against the "true information" in the standard database, so the drone cannot accurately acquire and recognize the identification information of the target object, which creates a great obstacle to the technology of recognizing and tracking the target object by drone.
Therefore, the comparison of step S200 yields the two matching cases described in step S300a and step S300b.
Referring to step S300a, when the first face basic identification information matches the face standard identification information, the target object is the object the drone is to recognize and track, so a locking instruction is sent to lock onto the target object for tracking.
In detail, the matching of the first face basic identification information and the face standard identification information may include at least the following two matching cases:
in a first matching situation, if the first priority matching information matches with the face standard identification information, it is determined that the first face basic identification information matches with the face standard identification information;
and in the second matching situation, if the second priority matching information and the third priority matching information are both matched with the face standard identification information, judging that the first face basic identification information is matched with the face standard identification information.
In other words, when the first priority matching information with the highest priority level matches the face standard identification information, it is unnecessary to further check whether the second priority matching information and the third priority matching information match. When the first priority matching information does not match the face standard identification information, the first face basic identification information is judged to match only if both the second priority matching information and the third priority matching information match the face standard identification information. The identification precision of the embodiment of the application is thereby improved.
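The decision rule formed by the two matching cases can be sketched as a single function; the name `is_match` and the boolean-per-tier representation are assumptions for illustration, not the patent's implementation:

```python
def is_match(tier_results):
    """Decision rule from the two matching cases:
    case 1 - the first-priority tier matches on its own, or
    case 2 - both the second- and third-priority tiers match."""
    first, second, third = tier_results
    return first or (second and third)
```

Under this rule a cheekbone match alone suffices, while a failed cheekbone match still requires every lower-priority tier to agree before the object is locked.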
Referring to step S300b, when the first face basic identification information does not match the face standard identification information, it indicates that the target object may not be the object the drone is to recognize and track. In order to identify and judge the target object more accurately and thereby improve the recognition precision of the present application, the embodiment continues with the following step: judging whether the first face basic identification information is face basic identification information obtained after the face-lifting of the target object.
Of course, it should be added here that, when step S300b is executed, if it is determined that the first face basic identification information is not face basic identification information obtained after the face-lifting of the target object, a switching instruction is sent to switch to the next target object for recognition. That is, step S100 is executed again to identify the next target object.
Here, similarly, when the unmanned aerial vehicle must capture the first face basic identification information of the next target object through its camera system, if the drone's camera system performs the identification comparison based on the previous first pre-determined position information and no match is found, this indicates that the first pre-determined position information was wrong. The first pre-determined position information is therefore replaced: second pre-determined position information of the target object is obtained through an official source or the media. Like the first, the second pre-determined position information represents an accurate area where the target object may appear, or an area where the target object frequently appears, so after it is obtained, the first face basic identification information of the target object can be acquired in that position area in a targeted manner, greatly shortening the comparison time and improving the comparison efficiency.
If it is determined that the first face basic identification information is face basic identification information obtained after the face-lifting of the target object, step S400 is executed: restoring the first face basic identification information to the second face basic identification information of the target object before face-lifting.
In the embodiment of the present application, please continue to refer to fig. 3, the determining whether the first facial basic identification information is the facial basic identification information obtained after the face-lifting of the target object may specifically include the following steps:
step 300b1, constructing an adjustment range area of the standard identification information;
in this step, an adjustment range area of the standard identification information is constructed. The adjustment range area may be calculated or analyzed from historical plastic surgery data as the maximum modification range achievable for each part of the target object's face during surgery, and this range serves as the reference adjustment range area in this step.
In other words, in the embodiment of the present application there is a cheekbone adjustment range area corresponding to the cheekbone identification information; an eye adjustment range area corresponding to the eye identification information; a nose adjustment range area corresponding to the nose identification information; an eyebrow adjustment range area corresponding to the eyebrow identification information; an ear adjustment range area corresponding to the ear identification information; and a tooth adjustment range area corresponding to the tooth identification information.
Step 300b2, obtaining comparison difference information of the first face basic identification information relative to the face standard identification information;
the "comparison difference information" is the difference between the basic identification information and the standard identification information. For example, after the first cheekbone basic identification information is compared with the cheekbone standard identification information, the difference between their parameter values is the comparison difference information; when this difference is 0, the first cheekbone basic identification information is identical to the cheekbone standard identification information.
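A minimal sketch of computing the comparison difference information, assuming each feature is represented by a single scalar parameter (the function name and the dictionary representation are illustrative assumptions):

```python
def comparison_difference(basic_info, standard_info):
    """Per-feature difference of the captured parameters relative to the
    standard ones; a difference of 0 means the captured feature is
    identical to the standard feature."""
    return {f: basic_info[f] - standard_info[f] for f in standard_info}


diff = comparison_difference({"cheekbone": 13.5}, {"cheekbone": 13.0})
# the captured cheekbone parameter exceeds the standard one by 0.5
```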
Step 300b3, determining whether the comparison difference information is between the adjustment range areas;
specifically, taking the cheekbone identification information as an example, if the parameter value (comparison difference information) of the first cheekbone basic identification information compared with the cheekbone standard identification information is between the cheekbone adjustment range regions, the following steps are performed:
step 300b4, if yes, the first face basic identification information is the face basic identification information obtained after the target object is subjected to face-lifting.
At this time, step 300b4 is executed, and step S400 restores the first face basic identification information to the second face basic identification information of the target object before face-lifting. A specific restoration method in this embodiment of the present application may include: correcting the first face basic identification information according to the comparison difference information, and determining the corrected face basic identification information as the second face basic identification information.
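Steps 300b1 to 300b4 together with the restoration of step S400 can be sketched as follows. The adjustment range values shown are invented placeholders, not derived from any actual historical plastic-surgery data, and the function names are illustrative:

```python
# Hypothetical per-feature adjustment range areas (step 300b1): the
# maximum plausible modification of each feature by plastic surgery.
ADJUSTMENT_RANGES = {
    "cheekbone": (-1.0, 1.0),
    "nose": (-0.8, 0.8),
}


def is_post_surgery(diff):
    """Steps 300b3/300b4: the capture is judged post-surgery when every
    feature's comparison difference lies within its adjustment range."""
    return all(
        ADJUSTMENT_RANGES[f][0] <= d <= ADJUSTMENT_RANGES[f][1]
        for f, d in diff.items()
    )


def restore(basic_info, diff):
    """Step S400: correct the captured parameters by the comparison
    difference to recover the pre-surgery (second) identification info."""
    return {f: basic_info[f] - diff.get(f, 0.0) for f in basic_info}


diff = {"cheekbone": 0.5}            # from the comparison of step 300b2
post_surgery = is_post_surgery(diff)  # 0.5 lies within (-1.0, 1.0)
restored = restore({"cheekbone": 13.5}, diff)
```

The restored values can then feed directly into the second comparison of step S500.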
Then executing step S500, comparing the second face basic identification information with the face standard identification information;
and finally, executing the step S600, and if the second face basic identification information is matched with the face standard identification information, sending a locking instruction to lock the target object.
Similarly, the comparison between the second face basic identification information and the face standard identification information in step S500 may also be performed according to the following sub-steps:
a first sub-step of performing priority classification on the second face basic identification information, wherein the classified second face basic identification information comprises first priority matching information, second priority matching information and third priority matching information;
a second sub-step of comparing the second face basic identification information with the face standard identification information in order of priority; the priority order includes: an order from the first priority matching information, the second priority matching information to the third priority matching information;
further, in this embodiment of the present application, the second face basic identification information may also include: second cheekbone basic identification information, second eye basic identification information, second nose basic identification information, second eyebrow basic identification information, second ear basic identification information and second tooth identification information; wherein the second cheekbone basic identification information is first priority matching information; the second eye basic identification information, the second nose basic identification information and the second eyebrow basic identification information are second priority matching information; the second ear basic identification information and the second tooth identification information are third priority matching information. The face standard identification information may include: cheekbone standard identification information, eye standard identification information, nose standard identification information, eyebrow standard identification information, ear standard identification information, and tooth standard identification information.
The second cheekbone basic identification information is correspondingly matched with the cheekbone standard identification information; the second eye basic identification information is compared and matched with the eye standard identification information; the second nose basic identification information is compared and matched with the nose standard identification information; the second eyebrow basic identification information is compared and matched with the eyebrow standard identification information; the second ear basic identification information is compared and matched with the ear standard identification information; the second tooth identification information is compared and matched with the tooth standard identification information.
Specifically, the second cheekbone basic identification information is first priority matching information; the second eye basic identification information, the second nose basic identification information and the second eyebrow basic identification information are second priority matching information; the second ear basic identification information and the second tooth identification information are third priority matching information. Here, it should be noted that the priority levels of the first priority matching information, the second priority matching information, and the third priority matching information decrease in sequence. That is, the priority of the first priority matching information is greater than that of the second priority matching information, and the priority of the second priority matching information is greater than that of the third priority matching information.
Here, the comparison and matching of the second face basic identification information with the face standard identification information is exactly the same as that of the first face basic identification information with the face standard identification information; for any details not elaborated here, reference may be made to the description of the first comparison, which is not repeated.
In the above manner, when the method provided by the embodiment of the application is applied to an unmanned aerial vehicle, after the first face basic identification information is judged not to match the face standard identification information, it is further judged whether the first face basic identification information was obtained after the face-lifting of the target object; if so, the first face basic identification information is restored to the second face basic identification information of the target object before face-lifting. The restored second face basic identification information is then compared with the face standard identification information, which effectively ensures the accuracy of the face basic identification information participating in the comparison and solves the prior-art problem of inaccurate identification information when a drone identifies a target object.
Based on the same inventive concept, the embodiment of the application also provides a system corresponding to the method in the first embodiment, which is shown in the second embodiment.
Example two
The present embodiment provides a system, please refer to fig. 4, the system includes:
an obtaining module 1000, configured to obtain first face basic identification information of a target object;
a first comparison module 2000, configured to compare the first facial basic identification information with facial standard identification information in a standard database that is constructed in advance;
the first locking module 3000a is configured to send a locking instruction to lock the target object if the first face basic identification information matches the face standard identification information;
a judging module 3000b, configured to, if the first face basic identification information does not match the face standard identification information, judge whether the first face basic identification information is the face basic identification information obtained after the face-lifting of the target object;
a restoring module 4000, configured to restore the first face basic identification information to second face basic identification information of the target object before face-lifting, if the first face basic identification information is judged to be face basic identification information obtained after face-lifting;
a second comparison module 5000, configured to compare the second face basic identification information with the face standard identification information;
and the second locking module 6000 is configured to send a locking instruction to lock the target object if the second face basic identification information matches the face standard identification information.
In an embodiment of the present application, the first comparing module further includes:
a first comparison submodule, configured to perform priority classification on the first face basic identification information, wherein the classified first face basic identification information comprises first priority matching information, second priority matching information and third priority matching information;
a second comparison submodule, configured to compare the first face basic identification information with the face standard identification information in priority order; the priority order includes: an order from the first priority matching information, through the second priority matching information, to the third priority matching information;
in this embodiment of the application, the first locking module is further configured to determine that the first face basic identification information matches the face standard identification information if the first priority matching information matches the face standard identification information;
or,
and if the second priority matching information and the third priority matching information are both matched with the face standard identification information, judging that the first face basic identification information is matched with the face standard identification information.
In an embodiment of the present application, the first face basic identification information includes: first cheekbone basic identification information, first eye basic identification information, first nose basic identification information, first eyebrow basic identification information, first ear basic identification information and first tooth identification information; wherein the first cheekbone basic identification information is first priority matching information; the first eye basic identification information, the first nose basic identification information and the first eyebrow basic identification information are second priority matching information; the first ear basic identification information and the first tooth identification information are third priority matching information;
the face standard identification information includes: cheekbone standard identification information, eye standard identification information, nose standard identification information, eyebrow standard identification information, ear standard identification information, and tooth standard identification information.
In this embodiment of the application, the determining module 3000b further includes the following sub-modules:
a first judgment submodule, configured to construct an adjustment range area of the standard identification information;
a second judgment submodule, configured to acquire comparison difference information of the first face basic identification information relative to the face standard identification information;
a third judgment submodule, configured to judge whether the comparison difference information falls within the adjustment range area;
a fourth judgment submodule, configured to, if so, determine that the first face basic identification information is face basic identification information obtained after the target object has undergone face-lifting.
In this embodiment, the reduction module 4000 further includes the following sub-modules:
the first reduction sub-module is used for correcting the first face basic identification information according to the comparison difference information;
and the second restoring submodule is used for determining the modified face basic identification information as the second face basic identification information.
The technical scheme provided in the embodiment of the application at least has the following technical effects or advantages:
when applied to an unmanned aerial vehicle, after the first face basic identification information is judged not to match the face standard identification information, the system further judges whether the first face basic identification information was obtained after the face-lifting of the target object; if so, it restores the first face basic identification information to the second face basic identification information before face-lifting. The restored second face basic identification information is then compared with the face standard identification information, effectively ensuring the accuracy of the face basic identification information participating in the comparison and solving the prior-art problem of inaccurate identification information when a drone identifies a target object.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, systems (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing system to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing system, create a system for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing system to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction system which implements the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing system to cause a series of operational steps to be performed on the computer or other programmable system to produce a computer implemented process such that the instructions which execute on the computer or other programmable system provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention. It will be apparent to those skilled in the art that various changes and modifications may be made in the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. Thus, if such modifications and variations of the embodiments of the present application fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (10)

1. A method for identifying a target object, applied to an unmanned aerial vehicle, characterized in that the method comprises:
acquiring first prejudgment position information of a target object;
acquiring first face basic identification information of the target object according to the first pre-judging position information;
comparing the first face basic identification information with face standard identification information in a pre-constructed standard database;
if the first face basic identification information is matched with the face standard identification information, sending a locking instruction to lock the target object;
if the first face basic identification information is not matched with the face standard identification information, judging whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object;
if so, restoring the first face basic identification information to second face basic identification information of the target object before the face-lifting;
comparing the second face basic identification information with the face standard identification information;
if the second face basic identification information is matched with the face standard identification information, sending a locking instruction to lock the target object;
and if the second face basic identification information is not matched with the face standard identification information, acquiring second pre-judged position information of the target object.
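As a rough illustration, the control flow recited in claim 1 can be sketched in Python. The function names, the toy distance-based matcher, and the threshold below are assumptions for illustration only; the claim does not specify a concrete matching algorithm.

```python
# Illustrative sketch of the claim-1 flow; the matcher, the threshold, and
# all names are hypothetical, as the claim specifies no concrete algorithm.

MATCH_THRESHOLD = 0.1  # assumed similarity threshold

def matches(face_info, standard_info):
    # Toy matcher: mean absolute difference of feature values.
    diff = sum(abs(a - b) for a, b in zip(face_info, standard_info)) / len(face_info)
    return diff < MATCH_THRESHOLD

def identify(first_face_info, standard_info, restore_pre_surgery):
    """Return 'lock' (send a locking instruction) on a match; otherwise try
    restoring pre-face-lifting features; otherwise 'reacquire' (obtain
    second pre-judged position information)."""
    if matches(first_face_info, standard_info):
        return "lock"
    second_face_info = restore_pre_surgery(first_face_info)
    if second_face_info is not None and matches(second_face_info, standard_info):
        return "lock"
    return "reacquire"

standard = [1.0, 2.0, 3.0]
no_restore = lambda info: None          # judging step found no face-lifting
print(identify([1.02, 2.0, 3.0], standard, no_restore))                   # lock
print(identify([1.8, 2.0, 3.0], standard, lambda i: [1.0, 2.0, 3.0]))     # lock
print(identify([1.8, 2.0, 3.0], standard, no_restore))                    # reacquire
```

The restorer is injected as a callable so the same flow covers both the matched-after-restoration branch and the reacquire-position branch of the claim.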
2. The method of claim 1, wherein said comparing the first face basic identification information with face standard identification information in a pre-constructed standard database comprises:
performing priority classification on the first face basic identification information, wherein the classified first face basic identification information comprises first priority matching information, second priority matching information and third priority matching information;
comparing the first face basic identification information with the face standard identification information in a priority order, the priority order running from the first priority matching information, through the second priority matching information, to the third priority matching information;
wherein judging that the first face basic identification information matches the face standard identification information comprises:
if the first priority matching information is matched with the face standard identification information, judging that the first face basic identification information is matched with the face standard identification information;
or,
and if the second priority matching information and the third priority matching information are both matched with the face standard identification information, judging that the first face basic identification information is matched with the face standard identification information.
3. The method of claim 2,
the first face base identification information includes: first cheekbone basic identification information, first eye basic identification information, first nose basic identification information, first eyebrow basic identification information, first ear basic identification information and first tooth identification information; wherein the first cheekbone basic identification information is first priority matching information; the first eye basic identification information, the first nose basic identification information and the first eyebrow basic identification information are second priority matching information; the first ear basic identification information and the first tooth identification information are third priority matching information;
the face standard identification information includes: cheekbone standard identification information, eye standard identification information, nose standard identification information, eyebrow standard identification information, ear standard identification information, and tooth standard identification information.
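The match rule of claims 2-3 can be expressed compactly. The per-feature boolean results are assumed inputs from an upstream comparator; the claims only fix the decision rule: a first-priority (cheekbone) match suffices on its own, and otherwise both the second-priority group (eyes, nose, eyebrows) and the third-priority group (ears, teeth) must match.

```python
# Hypothetical sketch of the claims 2-3 priority rule; feature names follow
# claim 3, and the per-feature match booleans are assumed to come from an
# upstream comparison step.

FIRST_PRIORITY = ("cheekbone",)
SECOND_PRIORITY = ("eyes", "nose", "eyebrows")
THIRD_PRIORITY = ("ears", "teeth")

def face_matches(results):
    """results maps a feature name to whether that feature matched the
    corresponding face standard identification information."""
    if all(results[f] for f in FIRST_PRIORITY):
        return True  # a first-priority match is decisive by itself
    return (all(results[f] for f in SECOND_PRIORITY)
            and all(results[f] for f in THIRD_PRIORITY))

print(face_matches({"cheekbone": True, "eyes": False, "nose": False,
                    "eyebrows": False, "ears": False, "teeth": False}))  # True
print(face_matches({"cheekbone": False, "eyes": True, "nose": True,
                    "eyebrows": True, "ears": True, "teeth": True}))     # True
print(face_matches({"cheekbone": False, "eyes": True, "nose": True,
                    "eyebrows": True, "ears": False, "teeth": True}))    # False
```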
4. The method of claim 1, wherein said judging whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object comprises:
constructing an adjustment range area for the face standard identification information;
acquiring comparison difference information of the first face basic identification information relative to the face standard identification information;
judging whether the comparison difference information falls within the adjustment range area;
if so, determining that the first face basic identification information is the face basic identification information obtained after face-lifting of the target object.
5. The method of claim 4, wherein said restoring the first face basic identification information to second face basic identification information of the target object before face-lifting comprises:
correcting the first face basic identification information according to the comparison difference information;
and determining the corrected face basic identification information as the second face basic identification information.
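A per-feature sketch of claims 4-5 follows. The adjustment-range bounds are invented for illustration, and note that subtracting the full comparison difference collapses a feature back onto its standard value, so a real restorer would need a more refined correction model than this toy one.

```python
# Hypothetical per-feature sketch of claims 4-5; the range bounds are
# assumed, and the correction model is deliberately minimal.

ADJUST_RANGE = (0.2, 1.0)  # assumed band of plausible surgical change

def is_face_lifted(observed, standard):
    # Claim 4: the comparison difference falls within the adjustment range area.
    low, high = ADJUST_RANGE
    return low <= abs(observed - standard) <= high

def restore(observed_features, standard_features):
    # Claim 5: correct each surgically adjusted feature by its comparison
    # difference; features outside the range pass through unchanged.
    return [obs - (obs - std) if is_face_lifted(obs, std) else obs
            for obs, std in zip(observed_features, standard_features)]

print(is_face_lifted(1.5, 1.0))          # True  (difference 0.5 is in range)
print(is_face_lifted(1.05, 1.0))         # False (difference 0.05 below range)
print(restore([1.5, 2.05], [1.0, 2.0]))  # [1.0, 2.05]
```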
6. A system for identifying a target object, applied to an unmanned aerial vehicle, characterized in that the system comprises:
the first position module is used for acquiring first pre-judged position information of the target object;
the acquisition module is used for acquiring first face basic identification information of the target object according to the first pre-judged position information;
the first comparison module is used for comparing the first face basic identification information with face standard identification information in a standard database which is constructed in advance;
the first locking module is used for sending a locking instruction to lock the target object if the first face basic identification information is matched with the face standard identification information;
the judging module is used for judging, if the first face basic identification information is not matched with the face standard identification information, whether the first face basic identification information is face basic identification information obtained after face-lifting of the target object;
the restoring module is used for restoring, if so, the first face basic identification information to second face basic identification information of the target object before the face-lifting;
the second comparison module is used for comparing the second face basic identification information with the face standard identification information;
the second locking module is used for sending a locking instruction to lock the target object if the second face basic identification information is matched with the face standard identification information;
and the second position module is used for acquiring second pre-judged position information of the target object if the second face basic identification information is not matched with the face standard identification information.
7. The system of claim 6, wherein the first comparison module comprises:
the first comparison submodule is used for carrying out priority classification on the first face basic identification information, and the classified first face basic identification information comprises first priority matching information, second priority matching information and third priority matching information;
the second comparison sub-module is used for comparing the first face basic identification information with the face standard identification information according to the priority order; the priority order includes: an order from the first priority matching information, the second priority matching information to the third priority matching information;
the first locking module is further configured to:
if the first priority matching information is matched with the face standard identification information, judging that the first face basic identification information is matched with the face standard identification information;
or,
and if the second priority matching information and the third priority matching information are both matched with the face standard identification information, judging that the first face basic identification information is matched with the face standard identification information.
8. The system of claim 7,
the first face base identification information includes: first cheekbone basic identification information, first eye basic identification information, first nose basic identification information, first eyebrow basic identification information, first ear basic identification information and first tooth identification information; wherein the first cheekbone basic identification information is first priority matching information; the first eye basic identification information, the first nose basic identification information and the first eyebrow basic identification information are second priority matching information; the first ear basic identification information and the first tooth identification information are third priority matching information;
the face standard identification information includes: cheekbone standard identification information, eye standard identification information, nose standard identification information, eyebrow standard identification information, ear standard identification information, and tooth standard identification information.
9. The system of claim 6, wherein the judging module further comprises:
the first judging submodule is used for constructing an adjustment range area for the face standard identification information;
the second judging submodule is used for acquiring comparison difference information of the first face basic identification information relative to the face standard identification information;
the third judging submodule is used for judging whether the comparison difference information falls within the adjustment range area;
and the fourth judging submodule is used for determining, if so, that the first face basic identification information is the face basic identification information obtained after face-lifting of the target object.
10. The system of claim 9, wherein the reduction module comprises:
the first reduction sub-module is used for correcting the first face basic identification information according to the comparison difference information;
and the second restoring submodule is used for determining the corrected face basic identification information as the second face basic identification information.
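The system claims 6-10 mirror the method claims as a set of cooperating modules. One way to picture that decomposition is as injected callables wired together in the claimed order; the class and parameter names below are illustrative only, since the claims define no concrete interfaces.

```python
# Hypothetical sketch of the claim-6 module decomposition: each claimed
# module becomes an injected callable, and run() wires them in the claimed
# order. All names are assumptions for illustration.

class RecognitionSystem:
    def __init__(self, standard_db, comparator, judge, restorer):
        self.standard_db = standard_db  # pre-constructed standard database
        self.compare = comparator       # first/second comparison modules
        self.judge = judge              # judging module (face-lifting check)
        self.restore = restorer         # restoring module

    def run(self, first_face_info):
        # First comparison module + first locking module.
        if self.compare(first_face_info, self.standard_db):
            return "lock"
        # Judging module, then restoring + second comparison/locking modules.
        if self.judge(first_face_info, self.standard_db):
            second_face_info = self.restore(first_face_info, self.standard_db)
            if self.compare(second_face_info, self.standard_db):
                return "lock"
        # Second position module: acquire second pre-judged position info.
        return "reacquire"

system = RecognitionSystem(
    standard_db=[1.0, 2.0],
    comparator=lambda info, db: info == db,   # toy exact-match comparator
    judge=lambda info, db: True,              # toy judge: always face-lifted
    restorer=lambda info, db: list(db),       # toy restorer
)
print(system.run([1.0, 2.0]))  # lock (direct match)
print(system.run([9.0, 9.0]))  # lock (matched after restoration)
```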
CN201611258394.6A 2016-12-30 2016-12-30 Method and system for identifying a target object Pending CN106815568A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611258394.6A CN106815568A (en) 2016-12-30 2016-12-30 Method and system for identifying a target object

Publications (1)

Publication Number Publication Date
CN106815568A true CN106815568A (en) 2017-06-09

Family

ID=59110548

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611258394.6A Pending CN106815568A (en) 2016-12-30 2016-12-30 Method and system for identifying a target object

Country Status (1)

Country Link
CN (1) CN106815568A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101226653A (en) * 2007-01-18 2008-07-23 中国科学院自动化研究所 Rapid go-aboard system and method based on id card and biological characteristic recognition technique
CA2893590A1 (en) * 2012-12-14 2014-06-19 The J. David Gladstone Institutes Automated robotic microscopy systems
CN104021380A (en) * 2014-05-02 2014-09-03 香港应用科技研究院有限公司 Method and device performing facial recognition through calculating device
CN104376594A (en) * 2014-11-25 2015-02-25 福建天晴数码有限公司 Three-dimensional face modeling method and device
CN104796611A (en) * 2015-04-20 2015-07-22 零度智控(北京)智能科技有限公司 Method and system for remotely controlling unmanned aerial vehicle to implement intelligent flight shooting through mobile terminal
CN104851123A (en) * 2014-02-13 2015-08-19 北京师范大学 Three-dimensional human face change simulation method

Similar Documents

Publication Publication Date Title
CN106778669A (en) The method and device that destination object is identified is carried out based on unmanned plane
US9547908B1 (en) Feature mask determination for images
US11227158B2 (en) Detailed eye shape model for robust biometric applications
CN108135469B (en) Eyelid shape estimation using eye pose measurements
EP3338217B1 (en) Feature detection and masking in images based on color distributions
US20190246036A1 (en) Gesture- and gaze-based visual data acquisition system
US10559062B2 (en) Method for automatic facial impression transformation, recording medium and device for performing the method
US9978119B2 (en) Method for automatic facial impression transformation, recording medium and device for performing the method
CN110287900B (en) Verification method and verification device
US11042725B2 (en) Method for selecting frames used in face processing
KR20190129826A (en) Biometrics methods and apparatus, systems, electronic devices, storage media
KR20190028349A (en) Electronic device and method for human segmentation in image
KR20220004754A (en) Neural Networks for Head Pose and Gaze Estimation Using Photorealistic Synthetic Data
KR20190047442A (en) Method of removing reflection area, eye-tracking method and apparatus thereof
CN110956114A (en) Face living body detection method, device, detection system and storage medium
US10764563B2 (en) 3D enhanced image correction
JP5087037B2 (en) Image processing apparatus, method, and program
CN112214773B (en) Image processing method and device based on privacy protection and electronic equipment
CN113657195A (en) Face image recognition method, face image recognition equipment, electronic device and storage medium
CN111177677A (en) Method for facial authentication of a watch wearer
CN106874839A (en) The method and device of facial information identification
CN107066923A (en) The method and apparatus of target identification is carried out based on unmanned plane
CN111860045B (en) Face changing method, device, equipment and computer storage medium
WO2021026848A1 (en) Image processing method and device, and photographing apparatus, movable platform and storage medium
WO2020193972A1 (en) Facial analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170609