WO2020252688A1 - Target recognition based on image information and target recognition system based on image information


Info

Publication number
WO2020252688A1
WO2020252688A1 (PCT/CN2019/091857)
Authority
WO
WIPO (PCT)
Prior art keywords
target
image information
unmanned vehicle
candidates
target candidates
Prior art date
Application number
PCT/CN2019/091857
Other languages
English (en)
Inventor
Xiao-long QIN
Original Assignee
Powervision Tech (Suzhou) Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Powervision Tech (Suzhou) Ltd. filed Critical Powervision Tech (Suzhou) Ltd.
Priority to PCT/CN2019/091857 priority Critical patent/WO2020252688A1/fr
Publication of WO2020252688A1 publication Critical patent/WO2020252688A1/fr

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/0094Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B64AIRCRAFT; AVIATION; COSMONAUTICS
    • B64UUNMANNED AERIAL VEHICLES [UAV]; EQUIPMENT THEREFOR
    • B64U10/00Type of UAV
    • B64U10/10Rotorcrafts
    • B64U10/13Flying platforms

Definitions

  • a target recognition system based on image information can be integrated as described herein.
  • an unmanned aerial vehicle with a target recognition system can search for a certain target, such as a building, a mountain, or a person, in a predetermined area.
  • the target recognition system can further improve the efficiency of recognizing a target by utilizing image-processing algorithms to narrow down the list of potential candidates and then executing a more detailed investigation of the selected candidates.
  • FIG. 1 is a block diagram illustrating a method of controlling an unmanned vehicle, according to some embodiments of present disclosure.
  • FIG. 2 is a block diagram illustrating a method of controlling an unmanned vehicle, according to some embodiments of present disclosure.
  • FIG. 3 is a schematic diagram of an unmanned vehicle receiving an instruction pertinent to a target zone, according to some embodiments of present disclosure.
  • FIG. 4A is a schematic diagram of an unmanned vehicle moving within a target zone while an imaging module captures image information, according to some embodiments of present disclosure.
  • FIG. 4B is a schematic diagram illustrating an imaging module of an unmanned vehicle capturing an overview including multiple target candidates within a target zone, according to some embodiments of present disclosure.
  • FIG. 5 is a schematic diagram illustrating a first group of target candidates generated by a target recognition system, according to some embodiments of present disclosure.
  • FIG. 6 is a schematic diagram illustrating an unmanned vehicle moving along an arbitrary path while capturing multiple perspective image information of each of the target candidates in a first group, according to some embodiments of present disclosure.
  • FIG. 7 is a schematic diagram of a target recognition system generating one or more navigation paths bound for a target, according to some embodiments of present disclosure.
  • FIG. 8A is a schematic diagram illustrating an imaging module, a positioning module, a processing unit, and a storage module of the target recognition system, according to some embodiments of present disclosure.
  • FIG. 8B is a block diagram illustrating an unmanned vehicle, a remote server, and a user terminal of the target recognition system, according to some embodiments of present disclosure.
  • FIG. 9 is a schematic diagram of an unmanned aerial vehicle (UAV) receiving an instruction pertinent to a target zone, according to some embodiments of present disclosure.
  • FIG. 10A is a schematic diagram of a UAV moving within a target zone while an imaging module captures image information, according to some embodiments of present disclosure.
  • FIG. 10B is a schematic diagram illustrating an imaging module of a UAV capturing an overview including multiple target candidates within a target zone, according to some embodiments of present disclosure.
  • FIG. 11 is a schematic diagram illustrating a first group of target candidates generated by a network data processing system, according to some embodiments of present disclosure.
  • FIG. 12 is a schematic diagram illustrating a UAV moving around a target candidate in a first group while capturing image information of the target candidates, according to some embodiments of present disclosure.
  • FIG. 13 is a schematic diagram of a target recognition system generating one or more navigation paths bound for a target, according to some embodiments of present disclosure.
  • One objective of the present disclosure is to provide a system for controlling an unmanned vehicle.
  • the system includes: (1) an unmanned vehicle and (2) one or more processors.
  • the one or more processors are individually or collectively configured to: (1) receive an instruction pertinent to a target zone; (2) receive, from the unmanned vehicle, image information of a plurality of target candidates within the target zone; (3) identify a first group of the plurality of target candidates by processing the image information of the plurality of target candidates; and (4) receive, from the unmanned vehicle, image information of each of the plurality of target candidates in the first group from at least two perspectives.
  • One objective of the present disclosure is to provide one or more non-transitory computer-readable storage media having stored thereon executable instructions that, when executed by one or more processors of a system for controlling an unmanned vehicle, cause the system to: (1) receive an instruction pertinent to a target zone; (2) send a first command to the unmanned vehicle for navigating around the target zone; (3) receive, from the unmanned vehicle, image information of a plurality of target candidates within the target zone; (4) identify a first group of the plurality of target candidates by processing the image information of the plurality of target candidates; and (5) receive, from the unmanned vehicle, image information of each of the plurality of target candidates in the first group from at least two perspectives.
  • One objective of the present disclosure is to provide a method of controlling an unmanned vehicle.
  • the method includes: (1) receiving an instruction pertinent to a target zone; (2) sending a first command to the unmanned vehicle for navigating around the target zone; (3) receiving, from the unmanned vehicle, image information of a plurality of target candidates within the target zone; (4) identifying a first group of the plurality of target candidates by processing the image information of the plurality of target candidates; and (5) receiving, from the unmanned vehicle, image information of each of the plurality of target candidates in the first group from at least two perspectives.
  • first and second features may be formed in direct contact, or additional features may be formed between the first and second features, such that the first and second features are not in direct contact
  • the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
  • spatially relative terms, such as "beneath," "below," "lower," "above," "upper" and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures.
  • the spatially relative terms are intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures.
  • the apparatus may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein may likewise be interpreted accordingly.
  • Target recognition systems can help locate a predetermined target, and such systems have gradually developed to an extent that they may be applied to target tracking, target searching, rescue, surveillance, navigation, etc.
  • however, current target recognition systems still lack adequate integrated methods to serve the demands of civil applications, as the efficiency and reliability of target recognition in a given area still leave considerable room for improvement.
  • The present disclosure provides a target recognition system that facilitates the process of target recognition, and a method for executing target recognition using the target recognition system described herein.
  • Fig. 1 illustrates a method 100 for controlling an unmanned vehicle 1.
  • the method 100 may include selecting a target image (step 101), receiving an instruction pertinent to a target zone (step 102), sending a first command to the unmanned vehicle 1 for navigating around the target zone 301 (step 102'), receiving, from the unmanned vehicle 1, image information of a plurality of target candidates within the target zone 301 (step 103), identifying a first group 302 of the plurality of target candidates by processing the image information of the plurality of target candidates (step 104), and receiving, from the unmanned vehicle 1, image information of each of the plurality of target candidates in the first group from at least two perspectives (step 105).
  • the method 100 can be performed by the target recognition system, wherein the target recognition system may include a user terminal (e.g. a mobile device or a mobile application) or a remote server (e.g. a cloud server).
  • some of the steps in the method 100 can be performed by one or more processors or modules mounted on the unmanned vehicle 1.
  • the unmanned vehicle 1 can be an unmanned aerial vehicle, but the present disclosure is not limited thereto.
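The overall flow of steps 101 through 105 can be summarized in code. Below is a minimal, hypothetical Python sketch of that flow; the `drone` object, its methods, and `matching_index` are placeholder names assumed for this illustration, not APIs defined by the present disclosure.

```python
# Hypothetical sketch of the method-100 flow (steps 102'-105).
# `drone`, its methods, and `matching_index` are placeholder names.

def run_target_recognition(drone, target_image, target_zone, threshold=0.7):
    """Survey the zone, shortlist candidates, then re-image the shortlist."""
    drone.navigate_around(target_zone)                    # step 102'
    candidates = drone.capture_candidates(target_zone)    # step 103
    # Step 104: keep candidates whose matching index passes the threshold.
    first_group = [c for c in candidates
                   if matching_index(target_image, c.image) > threshold]
    # Step 105: re-image each shortlisted candidate from >= 2 perspectives.
    for candidate in first_group:
        candidate.perspectives = drone.capture_perspectives(candidate, n=2)
    return first_group
```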
  • a target image, or image information derived from the target image, can be selected via a user terminal 2; the target image or the target image information will then serve as the subject of comparison for the target recognition process.
  • the target image can be an image stored or temporarily stored in the user terminal 2
  • image information derived from the target image can be data processed from the target image with various background-noise-reduction and/or subject-enhancement processing.
  • the target recognition system can receive the target image or image information derived from the target image from the user terminal 2 via signal transmission, for example, Wi-Fi, Bluetooth, radio frequency (RF), digital signal transmission, electrical communication, optical communication, or the like.
  • the target recognition system receives an instruction pertinent to the target zone 301, as shown in Fig. 3.
  • the coverage of the target recognition system can be derived from the scope of the target zone 301.
  • the target zone 301 can be one designated area or multiple designated areas combined.
  • Target candidates within the target zone 301, illustrated as 301a' through 301f' in Fig. 3, could be a specific set or an aggregation of multiple sets, either static or moving, such as sets of buildings or a group of humans, animals, or vehicles in motion.
  • the target recognition system sends a first command to the unmanned vehicle 1, instructing the unmanned vehicle 1 to navigate around the target zone 301 in accordance with the first command.
  • the target recognition system can transmit the first command via signal transmission, for example, Wi-Fi, Bluetooth, radio frequency (RF), digital signal transmission, electrical communication, optical communication, or the like.
  • the receiving of the instruction pertinent to the target zone 301 includes receiving image information related to target candidates (e.g., one or more target objects) and location information related to the target zone 301 from a user terminal.
  • a user of the target recognition system can provide pictures of the target object, and an address or an administrative area name associated with the target object, to the system.
  • the target recognition system receives image information of a plurality of target candidates within the target zone 301.
  • the imaging module 11 mounted on the unmanned vehicle 1 can be utilized to capture images, wherein the imaging module 11 may include one or multiple cameras, camcorders, or light sensors.
  • the image information will be received by one of the processors in the target recognition system for further processing.
  • the image information can be received by the processor of user terminal 2 and/or the processor of the unmanned vehicle 1 per se.
  • the image information can be received by the processor of a network data processing system 63, as illustrated in Fig. 11.
  • in some embodiments, as illustrated in Fig. 4A, the unmanned vehicle 1 can move within the target zone 301 as the imaging module 11 captures image information, such as pictures, images, video, livestream, or any data derived from these forms, of each of the target candidates.
  • the imaging module 11 may also capture an overview including multiple target candidates within the target zone 301, as illustrated in Fig. 4B.
  • the target recognition system processes the image information of the plurality of target candidates acquired in step 103 in order to identify a first group 302, including 301b', 301c', and 301e', from all target candidates 301a' through 301f'.
  • the selection of the first group 302 will be based on the method executed by one or more processors of the target recognition system.
  • the processor can be integrated inside the user terminal 2, mounted on the unmanned vehicle 1, or executed in the network data processing system 63 as shown in Fig. 11.
  • the method can be executed individually by one of the aforesaid processors, and the method can also be executed collectively by a combination of the aforesaid processors.
  • the recognition process executed by at least one of the aforesaid processors is considered an automatic mode (hereinafter an "auto mode"), as opposed to a manual mode, which will be described below.
  • the method compares the image information of the plurality of target candidates with the target image or image information derived from the target image.
  • the method comprises generating a matching index corresponding to the image information of the plurality of the target candidates, illustrated as 301a' through 301f' in Fig. 5.
  • a threshold value pertinent to a value of the matching index could be designated as the cut-off standard of arbitrating the degree of match.
  • For example, the structural similarity (SSIM) index method can be used. The SSIM method can generate a numerical value between 0 and 1 by weighting a combination of comparative measures, such as feature points, luminance, contrast, and structure. The numerical value has a positive correlation with the similarity between the two sets of image information: the highest value, 1, indicates two images are identical, and the lowest value, 0, indicates two images have no correlation. A sketch of this computation follows below.
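As a concrete illustration, the following minimal sketch scores a candidate against the target image using scikit-image's `structural_similarity`; the file names are hypothetical placeholders. (Note that SSIM is formally defined on [-1, 1], though for typical photographs it behaves as the 0-to-1 score described above.)

```python
import cv2
from skimage.metrics import structural_similarity

# Load the target image and one candidate image as grayscale arrays.
# The file paths are hypothetical placeholders.
target = cv2.imread("target.png", cv2.IMREAD_GRAYSCALE)
candidate = cv2.imread("candidate_301b.png", cv2.IMREAD_GRAYSCALE)

# SSIM expects equal-sized inputs, so resize the candidate to the target.
candidate = cv2.resize(candidate, (target.shape[1], target.shape[0]))

# 1.0 means identical images; values near 0 mean little structural similarity.
score = structural_similarity(target, candidate, data_range=255)
print(f"matching index (SSIM): {score:.3f}")
```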
  • the comparison method is not limited herein; any method including generating a matching index corresponding to the image information of the plurality of the target candidates is within the contemplated scope of present disclosure.
  • a perceptual hash algorithm can be used as another method for quantifying the similarity between two images. A perceptual hash algorithm generates a digital fingerprint for each image and compares the Hamming distance, i.e., a similarity calibration between digital fingerprints associated with different images, between the target image and the target candidate images. A greater Hamming distance indicates lower similarity between the images, and vice versa; a minimal sketch follows below.
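The sketch below implements an average-hash variant of the perceptual-hash idea using only Pillow; the 8x8 hash size and the file names are assumptions made for illustration, not parameters from the present disclosure.

```python
from PIL import Image

def average_hash(path, hash_size=8):
    """Shrink to hash_size x hash_size grayscale, threshold each pixel at the
    mean, and pack the bits into a 64-bit digital fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    fingerprint = 0
    for p in pixels:
        fingerprint = (fingerprint << 1) | (p > mean)
    return fingerprint

def hamming_distance(h1, h2):
    """Number of differing bits; a greater distance means lower similarity."""
    return bin(h1 ^ h2).count("1")

# Hypothetical file names for the target image and one candidate image.
d = hamming_distance(average_hash("target.png"),
                     average_hash("candidate_301b.png"))
print(f"Hamming distance: {d} (0 = identical fingerprints)")
```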
  • when the matching index of a target candidate is greater than the threshold value, the target candidate can be selected into the first group 302, illustrated as 301b', 301c', and 301e' in Fig. 5.
  • alternatively, the method of first group 302 selection can keep a predetermined number of candidates with the highest values of the matching index, as sketched below.
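Both selection rules can be expressed compactly. In this hypothetical sketch the candidate labels follow Fig. 5, and the numeric scores are made-up illustration values.

```python
def select_first_group(candidates, scores, threshold=None, top_k=None):
    """Rank candidates by matching index, then either keep everyone above a
    threshold (cut-off standard) or keep the top_k highest-scoring ones."""
    ranked = sorted(zip(candidates, scores), key=lambda cs: cs[1], reverse=True)
    if threshold is not None:
        return [c for c, s in ranked if s > threshold]
    return [c for c, _ in ranked[:top_k]]

ids = ["301a'", "301b'", "301c'", "301d'", "301e'", "301f'"]
scores = [0.31, 0.82, 0.74, 0.22, 0.69, 0.45]          # illustrative values
print(select_first_group(ids, scores, threshold=0.6))  # 301b', 301c', 301e'
print(select_first_group(ids, scores, top_k=3))        # same three here
```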
  • the unmanned vehicle 1 could move along a path 3011 while capturing image information of each of the target candidates in the first group 302. The target recognition system then receives the captured image information from the unmanned vehicle 1.
  • the path 3011 can be one continuous path moving around each of the target candidates in the first group 302 to acquire image information of each of the plurality of target candidates in the first group 302 from at least two perspectives in order to enhance the accuracy of target recognition.
  • obtaining image information from multiple perspectives includes angle adjustment of the imaging module 11 on the unmanned vehicle 1 or path control of the unmanned vehicle 1.
  • the unmanned vehicle 1 may stay at a single position with respect to the target candidate and capture image information of the target candidate at different camera tilt angles.
  • the unmanned vehicle 1 may encircle a target candidate and capture image information of the target candidate from different locations with respect to the target candidate, as shown in the portion of path 3011 encircling target candidate 301b' in Fig. 6; a sketch of such an encircling path follows below.
  • the movable object 1 may follow an ascending path, a descending path, a zoom-in path, or a zoom-out path at a side of the target candidate for multi-perspective image capturing.
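One simple way to realize the encircling behavior is to precompute evenly spaced viewpoints on a circle around the candidate, each with a yaw that points the imaging module back at the center. This is illustrative geometry only, not the path planner of the present disclosure; the coordinates, radius, and altitude are assumed values.

```python
import math

def orbit_waypoints(center_xy, radius, altitude, n_views=8):
    """Evenly spaced positions on a circle around a target candidate, each
    paired with a yaw angle facing the circle's center."""
    cx, cy = center_xy
    waypoints = []
    for i in range(n_views):
        theta = 2 * math.pi * i / n_views
        x = cx + radius * math.cos(theta)
        y = cy + radius * math.sin(theta)
        yaw = math.degrees(math.atan2(cy - y, cx - x))  # face the target
        waypoints.append((x, y, altitude, yaw))
    return waypoints

# Eight viewpoints 30 m out from a candidate at 50 m altitude (assumed values).
for wp in orbit_waypoints((120.0, 45.0), radius=30.0, altitude=50.0):
    print(wp)
```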
  • the target recognition system may further comprise features as shown in method 100' of Fig. 2.
  • the target recognition system further processes the multi-perspective image information of the target candidates in the first group 302, captured by the unmanned vehicle 1 following a path 3011 around the target candidates, and identifies a second group from the target candidates in the first group.
  • the second group includes fewer target candidates than the first group, representing a higher-match selection.
  • the comparison method previously mentioned in step 104 can be used in step 106 with an alternative threshold value of the matching index.
  • a second threshold value of the matching index used in step 106 can be stricter than a first threshold value of the matching index inputted in step 104, in order to optimize the matching result.
  • when no target candidate in the first group passes the second threshold value of the matching index used in step 106, the target recognition system recognizes the target candidate in the first group with the highest value of the matching index as the target, and carries on with step 111. In some embodiments, when exactly one target candidate in the first group passes the second threshold value of the matching index used in step 106, the target recognition system determines said one target candidate in the first group as the target, and carries on with step 111.
  • when more than one target candidate in the first group passes the second threshold value of the matching index used in step 106 and the user requires more refined results, steps 108, 109, and 110 will be performed.
  • the target recognition system will generate, from the first group, a second group having the more-than-one target candidates that passed the second threshold value of the matching index.
  • the target recognition system receives more refined image information of target candidates in the second group from multiple perspectives. Similar to step 105, step 109 provides greater details of the target candidates in the second group by capturing further image information therefrom.
  • the target recognition system processes more image information of the target candidates in the second group by using methods similar to step 106. Steps 108, 109, and 110 can be repeated until the target is identified, until a user suspends the refining process and switches to the manual mode, or until a predetermined termination criterion is met; a sketch of this loop follows below.
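The following hypothetical sketch captures the step 106-110 loop; `matching_index` and the `drone` methods are placeholder names reused from the earlier sketches, and the thresholds are assumed values.

```python
def refine_to_target(first_group, drone, target_image, thresholds=(0.8, 0.9)):
    """Steps 106-110: re-score the shortlist with progressively stricter
    thresholds until a single target remains or the thresholds run out."""
    group = first_group
    for threshold in thresholds:
        scored = [(c, matching_index(target_image, c.image)) for c in group]
        passed = [c for c, s in scored if s > threshold]
        if not passed:               # nobody passes: best match is the target
            return max(scored, key=lambda cs: cs[1])[0]
        if len(passed) == 1:         # exactly one passes: target identified
            return passed[0]
        for c in passed:             # step 109: capture more refined imagery
            c.image = drone.capture_best_view(c)   # hypothetical helper
        group = passed               # step 108: this is the second group
    # Rounds exhausted: fall back to the highest-scoring remaining candidate.
    return max(group, key=lambda c: matching_index(target_image, c.image))
```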
  • once a target among the target candidates is identified, for example target 303, step 111 will be performed.
  • the target recognition system transmits a second command for navigating the unmanned vehicle 1 to a predetermined location via a first path 31 based on the location information.
  • a positioning module 12 shown in Fig. 8A may provide the location information of the target 303 and navigate the unmanned vehicle 1 from the target 303 to the predetermined location via the first path 31.
  • the positioning module 12 includes a navigation device, such as a Global Positioning System (GPS) receiver or another satellite navigation system.
  • the first path 31 will be suggested by a positioning module 12 disposed on the unmanned vehicle 1, or in combination with suggestions from navigation applications installed on the user terminal 2 or the network data processing system 63 as shown in Fig. 11.
  • the target recognition system may receive image information of the first path 31 by the imaging module 11.
  • the unmanned vehicle 1 may move along the first path 31 and capture multiple images along the first path 31 for the user to decide whether the first path 31 is an ideal path bound for the target 303 considering several factors including road construction or traffic conditions. If necessary, the target recognition system can generate other paths, such as a second path 32, bound for target 303 to provide alternative paths for the user's choice.
  • Fig. 8A is a schematic illustration by way of block diagram of a system for controlling an unmanned vehicle 1.
  • a system for controlling the unmanned vehicle 1 comprises one or more processing units 13 and one or more imaging modules 11 disposed on the unmanned vehicle 1.
  • the imaging module 11 may comprise one or multiple cameras, camcorders, or light sensors.
  • the imaging module 11 may also be able to capture pictures, images, video, or livestream, and transmit these image files to the processing unit 13 for further processing.
  • the one or more processing units 13 comprise one or more processors individually or collectively configured to receive information pertinent to a target zone 301, receive image information of a plurality of target candidates within the target zone 301 captured by the imaging module 11 of the unmanned vehicle 1, identify the first group 302 of the plurality of target candidates by processing the image information of the plurality of target candidates, and receive image information of each of the plurality of target candidates in the first group 302 from at least two perspectives captured by the imaging module 11 of the unmanned vehicle 1.
  • the one or more processors may further receive information pertinent to the target zone 301, generate the coverage of the target zone 301 and determine the moving paths such as the first path 31 and the second path 32 previously discussed.
  • the system may further comprise one or more positioning module 12 configured to generate location information of each of the target candidates in the first group 302 shown in Fig. 5.
  • the positioning module 12 can comprise one or more navigation devices, such as a GPS receiver or another satellite navigation system.
  • the positioning module 12 can be disposed on the unmanned vehicle 1 in order to provide the location of each of the plurality of target candidates in the first group 302.
  • the positioning module 12 can alternatively be a part of the network data processing system 63 as shown in Fig. 11.
  • the network data processing system 63 can receive the image information captured by the unmanned vehicle 1 for location determination.
  • the processing unit 13 can also execute algorithms. Similar to the aforementioned step 104 in Fig. 1, the processing unit 13 can compare the image information of the plurality of target candidates with the target image information and generate a matching index corresponding to the image information of the plurality of target candidates. In some embodiments, the processing unit 13 can generate a first group 302 of the target candidates by confirming whether the matching index of any of the target candidates is greater than a first threshold value. Similarly, the processing unit 13 can generate a second group of the target candidates by confirming whether the matching index of any of the target candidates is greater than a second threshold value.
  • the processing unit 13 can be integrated inside the user terminal 2, disposed on the unmanned vehicle 1, or run on the network data processing system 63 as shown in Fig. 11. The algorithm can be executed individually by one of the aforesaid processors, and the algorithm can also be executed collectively by a combination of the aforesaid processors.
  • a storage module 14 can store executable instructions or information generated by the processing unit 13.
  • the storage module 14 can comprise one or more non-transitory computer readable storage media, such as random access memory, disks, or memory devices.
  • the executable instructions of the one or more processors of the system include receiving an instruction pertinent to the target zone 301, sending a first command to the unmanned vehicle 1 for navigating around the target zone 301, receiving image information of a plurality of target candidates within the target zone 301 from the unmanned vehicle 1, identifying the first group 302 of the plurality of target candidates by processing the image information of the plurality of target candidates, and receiving image information of each of the plurality of target candidates in the first group 302 from at least two perspectives from the unmanned vehicle 1.
  • the storage modules 14 can be integrated on the unmanned vehicle 1 and/or on the user terminal 2.
  • a network data storage system may also serve as an option for storing executable instructions or information generated by the processing unit 13.
  • the instructions stored in one or more of the aforesaid storage devices of the storage module 14 can be executed individually or collectively by the one or more processors.
  • the aforesaid image information can be obtained by using an imaging module 11 of the unmanned vehicle 1, but the present disclosure is not limited thereto.
  • Fig. 8B is a schematic illustration, by way of block diagram, of a system for controlling an unmanned vehicle 1.
  • the system for controlling an unmanned vehicle 1 comprises the unmanned vehicle 1, a user terminal 2, and one or multiple remote servers 15.
  • the unmanned vehicle 1 comprises a processing unit 13', a memory unit 14', and a positioning module 12' coupled with the processing unit 13' and the memory unit 14' and configured to generate location information of each of the target candidates.
  • the remote server 15 comprises an image manage unit 151 to store image data, and an image analysis unit 152.
  • the image analysis unit 152 includes an image comparison and analysis module 1521 and a self-improving module 1522.
  • the user terminal 2 includes a first transceiver 21, a second transceiver 22, and a display module 23.
  • the imaging module 11' may comprise one or multiple cameras, camcorders, or light sensors.
  • the imaging module 11' may also be able to capture pictures, images, video, or livestream, and transmit these image files to the processing unit 13' for further processing.
  • the processing unit 13' comprises one or more processors individually or collectively configured to receive information pertinent to the target zone 301, receive image information of the plurality of target candidates within the target zone 301 captured by the imaging module 11' of the unmanned vehicle 1, and receive image information of each of the plurality of target candidates in the first group 302 from at least two perspectives captured by the imaging module 11' of the unmanned vehicle 1.
  • the one or more processors may further receive information pertinent to the target zone 301, generate the coverage of the target zone 301 and determine the moving paths such as the first path 31 and the second path 32, as previously discussed.
  • the unmanned vehicle 1 may further include the positioning module 12' configured to generate location information of each of the target candidates.
  • the positioning module 12' may comprise one or more navigation devices, such as a GPS receiver or another satellite navigation system.
  • the positioning module 12' is coupled with the processing unit 13' and the memory unit 14' in order to generate location information of each of the plurality of target candidates.
  • the memory unit 14' disposed on the unmanned vehicle 1 can store executable instructions or information generated by the processing unit 13'.
  • the memory unit 14' may comprise one or more non-transitory computer readable storage media, such as random access memory, disks, or memory devices.
  • the transceiver 22 of the user terminal 2 transmits information to and receives information from the transceiver unit 16A of the unmanned vehicle 1.
  • the transceiver 22 may be implemented with wireless communication protocol, for example, Wi-Fi, or the like.
  • the display module 23 may present images, audio files, videos, livestream of the image information captured by the unmanned vehicle 1 to a user. In some embodiments, the display module 23 may also comprise various suitable user interfaces.
  • the unmanned vehicle 1 may transmit image information acquired by the imaging module 11', location information generated by positioning module 12', navigation path generated by positioning module 12', and other data processed and generated by processing unit 13' to the transceiver 22 of the user terminal 2.
  • the transceiver unit 16A of the unmanned vehicle 1 may receive instructions or feedback from the user terminal 2 via the transceiver 22 of the user terminal 2.
  • the remote server 15 is connected to the first transceiver 21 of the user terminal 2.
  • the transceiver 21 may transmit information received from the transceiver 22, or information generated by the user terminal 2, to a transceiver unit 16B of the remote server 15.
  • the remote server 15 comprises the image manage unit 151, configured to store and catalog image data, and the image analysis unit 152.
  • the image analysis unit 152 is configured to compare the image information of the plurality of target candidates with the target image information, and its self-improving module 1522 is configured to optimize the outcome generated by the image comparison and analysis module 1521.
  • the image manage unit 151 may provide access to image information from a network data processing system or the cloud, for example, a picture of a building on the internet, a picture of a person stored in a camera, or a painting of a mountain.
  • the user may select image information from the network data processing system as a target image, and the target image information will serve as the subject of comparison for the target recognition process.
  • the imaging comparison and analysis module 1521 may compare the target image information provided from the image manage unit 151 and target candidate image information provided from the first transceiver 21 of the user terminal 2.
  • generation of the matching index comprises, but is not limited to, dividing the image into multiple segments and assigning a weight coefficient to each of the segments.
  • the method also comprises generating a matching index based on the collection of the weighted segments.
  • a threshold value of the matching index could be designated as the cut-off standard for arbitrating the degree of match. For example, the SSIM method or the perceptual hash algorithm previously described can be used to quantify the similarity between two images.
  • if the matching index of a target candidate derived from the algorithm is greater than the designated threshold value, then the target candidate will be identified; a sketch of the segment-weighting scheme follows below.
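The following minimal sketch, under the assumption of equal-sized grayscale inputs, divides both images into a grid of segments, scores each segment with the SSIM call shown earlier, and combines the per-segment scores with weight coefficients. The grid size and the uniform default weights are illustrative choices, not values from the present disclosure.

```python
import numpy as np
from skimage.metrics import structural_similarity

def segmented_matching_index(target, candidate, grid=(2, 2), weights=None):
    """Split both equal-sized grayscale images into grid segments, score each
    segment with SSIM, and return the weighted combination as the index."""
    rows, cols = grid
    h, w = target.shape[0] // rows, target.shape[1] // cols
    scores = []
    for r in range(rows):
        for c in range(cols):
            seg_t = target[r * h:(r + 1) * h, c * w:(c + 1) * w]
            seg_c = candidate[r * h:(r + 1) * h, c * w:(c + 1) * w]
            scores.append(structural_similarity(seg_t, seg_c, data_range=255))
    if weights is None:
        weights = np.full(len(scores), 1.0 / len(scores))  # uniform weights
    return float(np.dot(weights, scores))  # the matching index
```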
  • the method 100 can also be provided for controlling an unmanned aerial vehicle 6.
  • the method 100 includes selecting a target image (step 101), receiving an instruction pertinent to a target zone (step 102), sending a first command to the unmanned aerial vehicle 6 for navigating around the target zone 71 (step 102'), receiving, from the unmanned aerial vehicle 6, image information of a plurality of target candidates within the target zone 71 (step 103), identifying a first group 72 of the plurality of target candidates by processing the image information of the plurality of target candidates (step 104), and receiving, from the unmanned aerial vehicle 6, image information of each of the plurality of target candidates in the first group from at least two perspectives (step 105).
  • the method 100 can be performed by the target recognition system, wherein the target recognition system may include a user terminal (e.g. a mobile device or a mobile application) or a remote server (e.g. a cloud server).
  • some of the steps in the method 100 can be performed by one or more processors or modules mounted on the unmanned aerial vehicle 6.
  • a target image, or image information derived from the target image, can be selected via a user terminal 2; the target image or the target image information will then serve as the subject of comparison for the target recognition process.
  • the target image can be an image stored or temporarily stored in the user terminal 2
  • image information derived from the target image can be data processed from the target image with various background-noise-reduction and/or subject-enhancement processing.
  • the target recognition system can receive the target image or image information derived from the target image from the user terminal 2 via signal transmission, for example, Wi-Fi, Bluetooth, radio frequency (RF), digital signal transmission, electrical communication, optical communication, or the like.
  • the target recognition system receives an instruction pertinent to the target zone 71, shown in Fig. 9.
  • the coverage of the target recognition system can be derived from the scope of the target zone 71.
  • the target zone 71 can be one designated area or multiple designated areas combined.
  • Target candidates within the target zone 71, illustrated as 71a' through 71d' in Fig. 10A, could be a specific set of buildings.
  • the target recognition system sends a first command to the unmanned aerial vehicle 6, instructing the unmanned aerial vehicle 6 to navigate around the target zone 71 in accordance with the first command.
  • the target recognition system can transmit the first command via signal transmission, for example, Wi-Fi, Bluetooth, radio frequency (RF), digital signal transmission, electrical communication, optical communication, or the like.
  • the receiving of the instruction pertinent to the target zone 71 includes receiving image information related to target candidates (e.g., one or more target objects) and location information related to the target zone 71 from a user terminal.
  • a user of the target recognition system can provide pictures of the target object, and an address or an administrative area name associated with the target object, to the system.
  • the target recognition system receives image information of a plurality of target candidates within the target zone 71 from the unmanned aerial vehicle 6.
  • the imaging module 61 is utilized to receive image information of a plurality of target candidates within the target zone 71.
  • the imaging module 61 may include one or multiple cameras, camcorders, or light sensors. After the imaging module 61 captures image information, the image information will be received by one of the processors in the target recognition system for further processing. For example, the image information of target candidates can be received by the processor of the user terminal 2 and/or the processor of the unmanned aerial vehicle 6 per se.
  • the image information of target candidates can be received by the processor of a network data processing system 63, as illustrated in Fig. 11.
  • the image information of target candidates can be first transmitted to user terminal 2 and then uploaded to the network data processing system 63.
  • the unmanned aerial vehicle 6 can move within the target zone 71 as the imaging module 61 captures image information, such as pictures, images, video, livestream, or any data derived from these forms, of each of the target candidates.
  • the imaging module 61 may also capture an overview including multiple target candidates within the target zone 71, as illustrated in Fig. 10B.
  • the target recognition system processes the image information of the plurality of target candidates acquired in step 103 in order to identify a first group 72, including 71c' and 71d', from all target candidates, illustrated as 71a' through 71d' within the target zone 71 in Fig. 10A.
  • the selection of the first group 72 will be based on the method executed by one or more processors of the target recognition system.
  • the one or more processors can be integrated inside the user terminal 2, mounted on the unmanned aerial vehicle 6, or executed in network data processing system 63.
  • the method can be executed individually by one of the aforesaid processors, and the method can also be executed collectively by a combination of the aforesaid processors.
  • the network data processing system 63 compares the image information of the plurality of target candidates 71a' through 71d' with the target image information and generates the first group 72. The information of the first group is then transmitted to the user terminal 2 for the user's reference. An instruction to capture more image information of the target candidates 71c' and 71d' in the first group, pertinent to different perspectives, is then sent to the unmanned aerial vehicle 6 from the user terminal 2.
  • image information pertinent to different perspectives captured by the unmanned aerial vehicle 6 can be transmitted back to the user terminal 2 and then uploaded to the network data processing system 63 for the next round of comparison between the additional image information of the target candidates 71c' and 71d' in the first group and the target image information.
  • the next round of comparison may generate a second group from the first group of the target candidates 71c' and 71d', as will be subsequently described.
  • the method compares the image information of the plurality of target candidates 71a' through 71d' with the target image information.
  • generation of the matching index includes breaking down the image information into multiple segments and assigning a weight coefficient to each of the segments.
  • the method includes generating a matching index based on the weight coefficient of each of the segments corresponding to the image information of the plurality of target candidates 71a' through 71d'.
  • a threshold value pertinent to a value of the matching index could be designated as the cut-off standard of arbitrating the degree of match. For example, SSIM method or the perceptual hash algorithm previously described can be used to quantify the similarity between two images.
  • the method of first group 72 selection can keep a predetermined number of candidates with the highest values of the matching index.
  • the user terminal 2 may receive image information of each of the target candidates from the unmanned aerial vehicle 6.
  • a user may choose to identify the first group 72 with the aid of the methods mentioned in step 104, that is, identifying the first group 72 via the auto mode.
  • the unmanned aerial vehicle 6 could move along an arbitrary path 7011, as long as the path permits the unmanned aerial vehicle 6 to capture image information of each of the target candidates 71c' and 71d' in the first group 72. The target recognition system then receives the captured image information from the unmanned aerial vehicle 6.
  • the path 7011 can be one continuous path around each of the target candidates in the first group 72 to acquire image information of each of the plurality of target candidates in the first group 72 from at least two perspectives in order to enhance the accuracy of target recognition.
  • obtaining image information from multiple perspectives includes angle adjustment of the imaging module 61 on the unmanned aerial vehicle 6 or path control of the unmanned aerial vehicle 6.
  • the unmanned aerial vehicle 6 may stay at a single position with respect to target candidate 71d', i.e., the building, and capture image information of the target candidate 71d' at different camera tilt angles.
  • the unmanned aerial vehicle 6 may encircle the target candidate 71d' as shown in Fig. 12 and capture image information 72a, 72b, 72c, and 72d of the target candidate 71d' from different locations with respect to the building, so that multiple perspectives of the target candidate 71d' can be obtained.
  • the unmanned aerial vehicle 6 may follow an ascending path, a descending path, a zoom-in path, or a zoom-out path at a side of the target candidate 71d' for multi-perspective image capturing.
  • the target recognition system may further include features, as shown in method 100' of Fig. 2.
  • the target recognition system further processes the multi-perspective image information of the target candidates in the first group 72, captured by the unmanned aerial vehicle 6 moving around the target candidates, and identifies a second group from the target candidates 71c' and 71d' in the first group.
  • the second group includes fewer target candidates than the first group, representing a higher-match selection.
  • the comparison method previously mentioned in step 104 can be used in step 106 with an alternative threshold value of the matching index.
  • a second threshold value of the matching index inputted in step 106 can be stricter than a first threshold value of the matching index inputted in step 104, in order to optimize the matching result.
  • the target may be determined in some instances. Information of the target is then passed to the user terminal 2, on which a navigation application is automatically triggered, and a path bound for the target from the user's location, or from a location designated by the user, is suggested to the user on the user terminal 2.
  • when no target candidate in the first group passes the second threshold value of the matching index used in step 106, the target recognition system recognizes the target candidate in the first group with the highest value of the matching index as the target, and carries on with step 111. In some embodiments, when target candidate 71c' in the first group passes the second threshold value of the matching index used in step 106, the target recognition system recognizes said target candidate 71c' in the first group as the target 73, and carries on with step 111.
  • when more than one target candidate in the first group passes the second threshold value of the matching index, steps 108, 109, and 110 will be activated.
  • the target recognition system will generate a second group having target candidates 71c' and 71d' passing the second threshold value of the matching index from the first group.
  • the target recognition system receives more refined image information of target candidates 71c' and 71d' in the second group from multiple perspectives. Similar to step 105, step 109 provides greater details of the target candidates in the second group by capturing further image information therefrom.
  • in step 110, the target recognition system processes more image information of the target candidates 71c' and 71d' in the second group by using methods similar to step 106. Steps 108, 109, and 110 can be repeated until the target is identified, or until a user suspends the refining process and switches to the manual mode.
  • once the target is identified, step 111 will be activated.
  • the target recognition system transmits a second command for navigating the unmanned aerial vehicle 6 to a predetermined location via a first path 81 based on the location information.
  • a positioning module 12 shown in Fig. 8A mounted on the unmanned aerial vehicle 6 will provide the location information of the target 73 and navigate the unmanned aerial vehicle 6 from the target 73 to a predetermined location 8 via a first path 81.
  • the positioning module 12 is a navigation device, such as a GPS receiver or another satellite navigation system.
  • the first path 81 will be suggested by the positioning module 12, or suggested in combination with navigation applications installed on the user terminal 2 or the network data processing system 63 as shown in Fig. 11.
  • the target recognition system may receive image information of the first path 81 by the imaging module 61.
  • the unmanned aerial vehicle 6 may move along the first path 81 and capture multiple images along the first path 81 for the user to decide whether the first path 81 is an ideal path bound for the target 73 considering several factors including road construction or traffic conditions. If necessary, the target recognition system can generate other paths, such as a second path 82, bound for target 73 to provide alternative paths for the user's choice.

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to a method of controlling an unmanned vehicle to perform target recognition. The method includes receiving an instruction pertinent to a target zone (102), sending a first command to the unmanned vehicle for navigating around the target zone (102'), receiving image information of a plurality of target candidates within the target zone (103), identifying a first group of the plurality of target candidates by processing the image information of the plurality of target candidates (104), and receiving image information of each of the plurality of target candidates in the first group from at least two perspectives (105).
PCT/CN2019/091857 2019-06-19 2019-06-19 Target recognition based on image information and target recognition system based on image information WO2020252688A1

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/091857 WO2020252688A1 2019-06-19 2019-06-19 Target recognition based on image information and target recognition system based on image information

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/091857 WO2020252688A1 2019-06-19 2019-06-19 Target recognition based on image information and target recognition system based on image information

Publications (1)

Publication Number Publication Date
WO2020252688A1

Family

ID=74037601

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/091857 WO2020252688A1 2019-06-19 2019-06-19 Target recognition based on image information and target recognition system based on image information

Country Status (1)

Country Link
WO (1) WO2020252688A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022232591A1 (fr) * 2021-04-29 2022-11-03 Skygrid, Llc Planification et exécution de mission multi-objectif de véhicule aérien sans pilote

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198317A (zh) * 2011-12-14 2013-07-10 Electronics and Telecommunications Research Institute Image processing apparatus and method of processing image
US9164506B1 * 2014-07-30 2015-10-20 SZ DJI Technology Co., Ltd Systems and methods for target tracking
CN105095451A (zh) * 2015-07-27 2015-11-25 Shenzhen Institutes of Advanced Technology Police unmanned aerial vehicle big data acquisition system and crime spatial database construction method
CN108292141A (zh) * 2016-03-01 2018-07-17 SZ DJI Technology Co., Ltd. Method and system for target tracking


Similar Documents

Publication Publication Date Title
CN105391939B (zh) Unmanned aerial vehicle photographing control method and device, unmanned aerial vehicle photographing method, and unmanned aerial vehicle
US9479703B2 (en) Automatic object viewing methods and apparatus
US11079242B2 (en) System and method for determining autonomous vehicle location using incremental image analysis
US11924539B2 (en) Method, control apparatus and control system for remotely controlling an image capture operation of movable device
KR101634878B1 (ko) Apparatus and method for aerial image registration using swarm flight of unmanned aerial vehicles
RU2746090C2 (ru) System and method of protection against unmanned aerial vehicles in the airspace of a populated area
RU2755603C2 (ру) System and method for detecting and countering unmanned aerial vehicles
WO2019061111A1 (fr) Path adjustment method and unmanned aerial vehicle
Domozi et al. Real time object detection for aerial search and rescue missions for missing persons
US11644330B2 (en) Setting destinations in vehicle navigation systems based on image metadata from portable electronic devices and from captured images using zero click navigation
JP2020138681A (ja) Control system for unmanned aerial vehicle
WO2020252688A1 (fr) Target recognition based on image information and target recognition system based on image information
CN112945015A (zh) Unmanned aerial vehicle monitoring system, method and apparatus, and storage medium
CN112422905B (zh) Power equipment image acquisition method, apparatus, device and medium
WO2020225979A1 (fr) Information processing device, information processing method, program, and information processing system
CN109238286B (zh) Intelligent navigation method, apparatus, computer device and storage medium
GB2582988A (en) Object classification
CN112804441B (zh) Unmanned aerial vehicle control method and device
KR20170019108A (ко) Camera search method and apparatus based on recognition capability
CN109120843A (зh) Method for focusing a camera
WO2021115192A1 (fr) Image processing device, image processing method, program, and recording medium
WO2022000211A1 (fr) Photographing system control method, device, movable platform, and storage medium
JPWO2018123013A1 (ja) Control device, mobile body, control method, and program
CN112639405A (zh) State information determination method, apparatus and system, movable platform, and storage medium
US11431910B2 (en) System for controlling the zoom of a set of cameras and method of controlling a set of cameras

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19933284

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19933284

Country of ref document: EP

Kind code of ref document: A1