WO2021203036A1 - System and method for enabling efficient hardware-to-software performance - Google Patents

System and method for enabling efficient hardware-to-software performance

Info

Publication number
WO2021203036A1
Authority
WO
WIPO (PCT)
Prior art keywords
algorithm
node
logic
processing
computational
Application number
PCT/US2021/025632
Other languages
French (fr)
Inventor
Evandro Gurgel Do Amaral VALENTE
Dipam Naginbhai PATEL
Original Assignee
Airgility, Inc.
Application filed by Airgility, Inc.
Publication of WO2021203036A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06N5/045 Explanation of inference; Explainable artificial intelligence [XAI]; Interpretable artificial intelligence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G06N20/10 Machine learning using kernel methods, e.g. support vector machines [SVM]

Definitions

  • the present disclosure relates to a system and method for enabling efficient hardware-to- software performance utilized with self-contained autonomous robotics/systems that are either mobile or fixed positioned.
  • the First Industrial Revolution used water and steam power to mechanize production.
  • the Second Industrial Revolution used electric power to create mass production.
  • the Third Industrial Revolution used electronics and information technology to automate production.
  • a Fourth Industrial Revolution (current industrial revolution) is building on the Third Industrial Revolution, the digital revolution often characterized by a fusion of technologies that is blurring the lines between the physical and digital spheres and is pushing autonomy and/or automation into numerous industries or cross-sectional segments of various neighboring industries to achieve greater efficiencies in their intended processes as compared to the prior or existing process.
  • a commonly used definition of automation is a process performed without human assistance. However, the misnomer in this definition is that automation/automated is more procedure than process.
  • a “procedure” is commonly defined as “a series of actions conducted in a certain order or manner” - which is devoid of outcome metrics or success. As such, automation relates to the performance of mundane procedural actions otherwise applicable to only controlled and predictable circumstances. For example, systems configured to place labels onto soda cans, or to provide cruise control functionality to a vehicle are both procedures in which automation relieves user(s) from those specific segments of tasks.
  • autonomous capability requires satisfactory performance under uncertainties in the environment and ability to compensate for system failures without external (human/supervisor) intervention.
  • autonomous capability relies on satisfactory performance, just as the definition of the word "process" does.
  • a “process” is commonly defined as “a series of actions or steps taken in order to achieve a particular end.”
  • autonomy utilizes the implemented artificial intelligence (AI) as a source of learned experience and makes the appropriate choices within the perceptual limitations and finite computation available.
  • AI artificial intelligence
  • the learned experiences contained in the datasets will be self-evolving via external refresh/update/augmentation of the dataset(s), internal refresh/update/augmentation of the dataset(s), entirely self-generated dataset(s), or all the above.
  • the novelty of this specification is that it provides a process/architecture that allows for scalable (that is also open-source friendly) and modularized algorithm fused autonomous deployment of the system without requiring all-autonomous capability in every instance of the system’s perception of and interaction with an environment.
  • an intelligent system is commonly defined as a computationally capable system that uses techniques derived from artificial intelligence, particularly one in which such techniques are central to the operation or design of the system.
  • the central/core artificial intelligence process can be both computationally intensive and possibly a weak point (vulnerability) of the system, e.g., should it fail to perform its predictions/classifications correctly.
  • the common definition of intelligence for computational machines/hardware is a system that manages data gathering as it obtains and processes data, interprets the data, and provides reasoned judgements/decisions/insights as a basis for actions in response to sensed stimulus/anomaly.
  • U.S. Pat. No. 10586464B2, U.S. Pat. No. 10600295B2, U.S. Pat. No. 6842674B2, WO Pub. No. 2019199967A1, and U.S. Pat. App. Pub. No. 20040030571A1 each describe implementation of artificial intelligence within systems. However, they do not capture the algorithm fusion for system-wide hybridization of autonomous processes and automatic procedures to deliver a system that behaves as if it is a full-autonomy capable system even though it contains various modular blocks of code that do not include or leverage AI-based algorithms.
  • One objective of the present disclosure is to provide an algorithm fusion method and system for efficient decision-making that utilizes low computational hardware performance and requires less or no load on human decision-maker(s).
  • the present disclosure describes a system and method having autonomous and automated capabilities configured to function as a fully autonomous system, and thereby achieve the benefits of full autonomy, without the high computational cost, development cost, and subsequent hardware costs (including power consumption and heat generation) given the complexity of a truly full-autonomy system.
  • the present disclosure is designed to deliver a pseudo full-autonomy system/architecture from the perspective of the on-board processes and its implemented algorithm fusion while the user perceives the execution and receives the benefits provided by the pseudo full-autonomy system in a fashion that is sufficiently equal to or equal to an analogous system that does implement computational processes that are fully aligned with the current definition of autonomy (at its highest degree/level) that is reliant and robustly developed using artificial intelligence algorithms and their underpinning datasets used for perceiving and tracking the mission-specific requirements.
  • the present disclosure describes edge-processed algorithm fusion for robots/systems typically found in industries enabled by unmanned and manned systems dynamically adaptable for travel in aerial, terrestrial, subterranean, indoor, enclosed, irregular, blended, and marine domains, having any constant or dynamic environmental conditions, in a wide range of autonomous or semi-autonomous control regimes, or in human-machine type systems that provide a workload reduction onto the human-in-the-loop (operator) counterpart.
  • the present disclosure also describes a scalable and modular algorithm architecture that creates one or more computationally efficient processes that enable edge-processed decision-making that is essential to mission operability, navigation operability, or both.
  • the present disclosure also describes self-contained robots/systems, specifically airborne vehicles/systems or those whose power distribution cost is primarily consumed during translation, rotation, and/or interaction with the environment, that require a balance in the cost of deployed hardware, the weight of the system, the size of the system, the power consumption of the system, the intelligent capability of the system to make self-contained decisions and/or decisions consistent with the human-in-the-loop preset needs/requirements, and the ability to balance the SWaP (size, weight, and power) requirements with artificial intelligence-based (AI-based) algorithms and traditional (non-AI-based) algorithms so the system’s behavior, perception, and/or interaction is possible without exceeding the computational limits of the computationally limited on-board hardware, such as computational power consumption and/or heat typically generated by the on-board computational hardware.
  • the present disclosure also describes self-contained robots/systems whose on-board or distributed algorithm fusion drives efficiencies in the collaboration of discrete/modular or blended blocks of code responsible for the system’s robotic autonomy and automation of tasks/behavior.
  • Algorithm stack interconnectivity allows for algorithm fusion so flagged items/things/people/anomalies/etc. are selected for further scrutiny by other/subsequent collaborating algorithms.
  • Algorithm stack crosscheck allows for algorithm fusion to crosscheck data/information/commands to mitigate misreports/errors, e.g., a mop propped up against the wall may appear to be a human head with braided hair by one algorithm, but fails further crosschecks and is then disregarded. Additionally, what the algorithm “thinks” is errant during crosscheck may be logged into another database for human/operator validation. Crosscheck also enables handling of special situations, e.g., identical twins or perpetrators wearing the same mask are in view, so the algorithm fusion robustness is achieved against this type of sophisticated attack.
  • mission/use-case migration/enhancement can be implemented locally within one or more algorithm of the stack and/or by modifying the interconnectivity of data/information from one or more algorithm(s) to the next and/or changing the sequence of the algorithms in the stack and/or adding/removing algorithms of the stack.
  • Hardware/robot/drone control: the algorithm fusion, with optional integration with additional sensors and/or fused sensor data/information, may partially or entirely control actuation of the system while making decisions (e.g., navigation-based decisions, mission-based prioritization decisions, etc.).
  • the present system requires less power consumption to run the processing hardware, lowers cost of deployed hardware, increases deployability/compactness (e.g., physically smaller processing hardware), lowers weight of processing hardware in payload sensitive cases, results in less heat being generated by the processing hardware, increases opportunity for processing hardware placement/integration, and reduces (to no) requirement to actively cool (e.g. fan/forced cooling medium) the processing hardware.
  • Camera system sophistication: match hardware with use-case, e.g., day/night vision, zoom, pan/tilt, high-definition image, etc.
  • Forensics: use a gesture/hand tracking algorithm of the POI/suspect to document fingerprints, DNA, and other bio-tracers to be collected at known locations by a forensics team.
  • FIG. 1 illustrates a block diagram showing one embodiment of a simplified algorithm fusion process and architecture driving the interaction between autonomous and automation algorithms
  • FIG. 2 illustrates a block diagram showing one example of a robust algorithm fusion process and architecture driving the interaction between autonomous and automation algorithms having additional AI-based decision-making algorithms that assist in the decision-making block;
  • FIG. 3 illustrates a pictorial visualization of FIG. 1, according to one embodiment, within the context of an algorithm fusion stack comprising discretized and modularized connected blocks as totems in a totem pole;
  • FIG. 4 illustrates a pictorial visualization of FIG. 2, according to one embodiment, within the context of an algorithm fusion stack comprising discretized and modularized connected blocks as totems in a totem pole;
  • FIG. 5 illustrates a pictorial visualization of FIG. 4, according to one embodiment, within the context of an algorithm fusion stack comprising discretized and modularized connected blocks with detailed description of the AI-based and non AI-based algorithms implemented as totems in a totem pole;
  • FIG. 6 illustrates an application according to one embodiment of a process flow of algorithm fusion applied to a person of interest detection, e.g., “who ate my doughnut” example;
  • FIG. 7 illustrates an application of the updated process flow of the algorithm fusion applied to a person of interest detection for the “who ate my doughnut” example, according to one embodiment, with additional response to a triggering event that elevates probable cause, prioritization, and target tracking discrimination for a suspect (who ate the doughnut);
  • FIG. 8 illustrates an implementation of the algorithm fusion process, according to one embodiment, with actual subjects for a mission use-case of finding/tracking and documenting the “who ate my doughnut” example; and
  • FIG. 9 illustrates a schematic view of an exemplary embodiment of a computer system implementing one or more of the above-described embodiments illustrated in FIGS. 1-8.
  • a computationally restrained robotic/system, for example, is a robotic/system limited by one or more processing capabilities of the installed/available processor(s), e.g., CPU, GPU, TPU, etc.
  • the disclosed system utilizes the more computationally intensive process only as/if needed and otherwise passes tracking of desired/targeted events/objects/people to a generalized "blob"/correlation tracker that does not place any type of classification onto the target (i.e., not AI-based).
  • This system may be referred to as an algorithm fusion system.
  • for example, if the system is targeting a coffee mug, once it finds the match, it no longer continues to track the mug as a mug, but instead as a blob, and is therefore no longer allocating computational resources to continuously perceiving the coffee mug as a coffee mug, which it already possesses, unless the algorithm has discretized checking in time and/or space to occasionally verify that the blob it now possesses continues to indeed be a coffee mug.
  • the disclosed algorithm fusion system retains/relieves computational capacity to continue to search for other target mugs that may be or become perceptible.
  • a blob can also be referred to as an object, a portion of an object, a blotch of pixels, a pixel patch, a cluster of pixels, a blot of pixels, a spot of pixels, a mass of pixels, or any other term referring to a group of pixels of an object or portion thereof.
  • a bounding box can be associated with a blob.
  • FIG. 1 and FIG. 2 show block diagrams illustrating embodiments of algorithm fusion processes and architecture driving the interaction between autonomous and automation algorithms.
  • other embodiments may comprise additional or alternative flows and/or architecture, or may omit certain flows or architecture altogether.
  • other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another or otherwise multithreaded.
  • FIG. 1 depicts an illustrative architecture of an Algorithm Fusion System 100 in which aspects of the present disclosure may operate, hereafter also referred to simply as System 100.
  • the dotted line boxes indicate constituent processing logic streams of the System 100 itself, the solid-line boxes indicate sub-processing algorithmic or processing nodes of the System 100, the solid-line diamonds indicate sub-processing status check nodes of the System 100, the solid-line circles indicate input/output communication and command exchange nodes of the System 100, the solid-line top-slanted rectangles indicate user input nodes of the System 100, the solid-line left-pointed to circular-edge shapes indicate graphical user interface and alerting nodes of the System 100, and the arrows indicate communicative couplings between various components, subcomponents, and nodes of the System 100.
  • the Algorithm Fusion System 100 comprises three processing streams: (1) Initialization and Reset 102, (2) algorithm fused Totem Pole Stack 104, and (3) Backend and Loop 106, and comprises two communication (data/information/command) exchange nodes: (1) Input Data/Information/Command Node 101 and (2) Output Data/Information/Command Node 103.
  • the Initialization and Reset 102 processing logic stream of the Algorithm Fusion System 100 is initialized by Input 101, looped/reset by the Backend and Loop 106 processing logic stream, and recalled from the Totem Pole Stack 104 processing computational stream. The Sensing 108 sub-processing node is likewise initiated by Input 101, looped/reset by Backend and Loop 106, and recalled by Totem Pole Stack 104. A series of checks, comprising the Already Blob Tracking 110 status check node, the Database Empty 112 status check node, and the Database Match 114 status check node, is then performed to direct the logic flow to specific algorithm nodes comprised within the processing computational stream Totem Pole Stack 104.
  • when the logic within the processing logic stream Initialization and Reset 102 reaches the Already Blob Tracking 110 status check node, if the answer is "no," the logic is passed to the processing computational stream Totem Pole Stack 104 via the Identify 118 sub-process, which comprises perception algorithms configured to detect, for example, the presence of people (humanoids) and high importance objects (HIO) based on the available dataset. If the answer is "yes," the logic cascades down to the Database Empty 112 status check node.
  • if the Database Match 114 check returns "no," the logic is passed to the processing computational stream Totem Pole Stack 104 via the Record 126 sub-process, which in turn comprises, for example, facial/biometric and/or other feature/pattern algorithms that enable instant POI/suspect detection.
  • the logic handling process to the Record 126 sub-process from the triggered "no" response of the Database Match 114 status check node is important because, even though the person of interest (POI) or HIO is/are already being tracked by the non-AI blob tracker (from a prior flag generated by the Already Blob Tracking 110 node instance), a facial/biometric and/or feature/pattern signature (for tattoos or other prevailing features of the POI/suspect) has not yet been collected and catalogued into the database (e.g., evidence database). As such, should the POI exit the sensing view of the system and then return into view without triggering the Identify 118 sub-process, the individual (and the chain of events) has receded back into anonymity.
  • for example, the System 100 will recognize the subject in possession of the weapon via the Identify 118 and Decide 122 sub-processes and subsequently assign the blob tracker to the downgraded subject (now downgraded to person of interest) even though the Record 126 sub-process may have failed to capture information for the database.
  • since the person of interest has not yet been recorded into the database by cataloguing identifying features such as facial features, if the POI leaves the System’s 100 viewing area and later returns without brandishing the gun, the POI returns to anonymity, with the one caveat that a picture and/or video exists of the POI (whose back was turned to the camera while brandishing the gun) for investigators to review later or for additional post-processing performed by other forensic artificial intelligence tools. However, if the POI remains in view of the System 100, the logic subsequently loops via the processing logic stream Backend and Loop 106, which recalls the Initialization and Reset 102 processing logic stream whereby the Sensing 108 sub-processing node is re-initiated, thus completing the loop.
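The loop described above can be summarized in a short sketch. The following Python is a hypothetical rendering of the FIG. 1 flow, not code from the disclosure: every function and object name is an assumed stand-in, and the handling of the Database Empty 112 branch is an assumption based on the surrounding description.

```python
# Minimal sketch of the FIG. 1 logic flow; node names follow the disclosure,
# all callables are assumed stand-ins. The Database Empty 112 branch behavior
# (catalogue features when nothing is stored yet) is an assumption.
def run_loop(sensor, blob_tracker, database, identify, decide, record,
             remove_privacy_filter):
    while True:
        frame = sensor.read()                     # Sensing 108
        if not blob_tracker.already_tracking():   # Already Blob Tracking 110: "no"
            targets = identify(frame)             # Identify 118 (AI-based perception)
            if targets:
                remove_privacy_filter(targets)    # GUI Alert & Remove Privacy Filter 120
                decide(frame, targets)            # Decide 122
        elif database.is_empty():                 # Database Empty 112: "yes"
            record(frame, database)               # Record 126: catalogue features
        elif not database.match(frame):           # Database Match 114: "no"
            record(frame, database)               # POI features not yet catalogued
        else:
            decide(frame, database.match(frame))  # known match: escalate if warranted
        blob_tracker.update(frame)                # Backend and Loop 106 -> Sensing 108
```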
  • when sufficient probable cause is detected, the logic is passed to the GUI Alert & Remove Privacy Filter 120 sub-processing node and subsequently reaches the Decide 122 sub-processing node.
  • the GUI (graphical user interface) Alert & Remove Privacy Filter 120 node is the first indication to the user/stakeholder that the System 100 has detected sufficient probable cause to unmask the privacy filter, thereby allowing the Decide 122 and Record 126 nodes to gain access to the data feed whereby their subsequent analysis and decisions/actions to record the chain of events and participants take place.
  • Another benefit in splitting the autonomous processes is that the system allows for discretization of significant real-world considerations: (1) moral/ethical implementation of privacy filtering independent of, and ahead of, probable-cause-based scrutiny and analysis, whereby the System 100 maintains meaningful alignment with the legal mooring of innocence until proven guilty (presumption of innocence), resulting in greater chances of positive presentation and evidence applicability in proceedings such as a court of law; (2) continuous operation of the System 100, rather than a stalled logic flow, even if discrete algorithmic processes fail to make an adequate detection; and (3) discrete compartmentalization of AI-based algorithms such that implementations of open-source algorithms or improvements in proprietary algorithms are integrable or swappable, resulting in adaptability and scalability with respect to, but not limited to, code-based improvements, database/trained-data-based improvements, mission-based migration, hardware obsolescence, and performance matching to decreases or increases in the computational hardware running the System 100.
  • after the Decide 122 node executes, the logic returns to the Sense 108 sub-processing node of the processing logic stream Initialization and Reset 102 while bypassing the Apply Privacy Filter 116 node.
  • the return of the logic flow to the Sense 108 node without re-masking the privacy filter is important because the Decide 122 node has more than one trigger: (1) it may be called in step after the GUI Alert & Remove Privacy Filter 120 node or (2) it may be called by the Database Match 114 node.
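As a concrete illustration of the privacy filter gating described above, the following sketch blurs detected face regions until probable cause is established; it assumes OpenCV and a hypothetical list of face bounding boxes, neither of which is mandated by the disclosure.

```python
# Sketch of Apply Privacy Filter 116 / GUI Alert & Remove Privacy Filter 120:
# faces stay blurred until probable cause unmasks the feed. The (x, y, w, h)
# box format and the blur kernel size are illustrative assumptions.
import cv2

def privacy_filter(frame, face_boxes, probable_cause):
    if probable_cause:
        return frame                      # unmasked feed for Decide 122 / Record 126
    out = frame.copy()
    for (x, y, w, h) in face_boxes:
        out[y:y + h, x:x + w] = cv2.GaussianBlur(out[y:y + h, x:x + w], (51, 51), 0)
    return out
```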
  • when the Decide 122 node is triggered by the Database Match 114 node, the logic carries knowledge that the subject is an existing match in the database, for example as elevated from subject to person of interest. However, additional flags/anomalies may be perceived by the Decide 122 node that may further elevate the POI to the suspect level; this process is further disclosed in FIG. 2 and further shown in FIGS. 6, 7, and 8.
  • FIG. 6 illustrates a “who ate my doughnut” example use-case.
  • in FIG. 6, the System detects a person (female subject A) and a threat object (the doughnut).
  • the Decide 122 node perceives that the threat object (the doughnut) is in the possession of the person, records the event, elevates subject A’s status to POI (also shown in FIG. 6), and subsequently records subject A’s unique identifying features (such as facial features) to the POI database.
  • the ability to create degrees of sensitivity to suspicious and illicit activity is dependent not only on the return-arrow logic flow path of either the "no" or "yes" of the Decide 122 node, but also on additional perception algorithms fused/associated with the Decide 122 node, as shown in FIG. 2 by inclusion of, but not limited to, the Anomaly 200 and/or Posture/Pose 202 nodes.
  • the posture/pose of the POI in possession of the doughnut gives the System awareness that the doughnut is being eaten by subject B; therefore, trained datasets and perception algorithms (AI-based or traditional) work in close relationship with the Decide 122 node to set degrees of prioritization, threat level, lethality, or even innocence, for example by way of identification of aggressor versus non-aggressor, and so on.
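One way to picture this fusion is as an escalation rule over the flags the perception nodes emit. The sketch below is an assumption for illustration; the disclosure does not prescribe a scoring scheme.

```python
# Hypothetical escalation rule fusing Decide 122 with Anomaly 200 and
# Posture/Pose 202 outputs; level names mirror the disclosure's terminology.
LEVELS = ["subject", "person_of_interest", "suspect"]

def escalate(status, has_hio, anomaly_flag, pose_flag):
    """Raise a tracked target's status based on fused perception flags."""
    level = LEVELS.index(status)
    if has_hio:                       # e.g., in possession of the doughnut (FIG. 6)
        level = max(level, 1)         # subject -> person of interest
    if anomaly_flag or pose_flag:     # e.g., HIO raised to the mouth (FIG. 7)
        level = max(level, 2)         # person of interest -> suspect
    return LEVELS[level]
```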
  • the (threat/target) prioritization processes and scalability of the System 100 as described in the above paragraph are further applicable to the implementation of modular playbooks (terminology coined by the US military) that define the rules (desired awareness/databases) of interest that the Identify 118 node, the Decide 122 node, and the subsequent prioritization nodes such as the Anomaly 200 and the Posture/Pose 202 use to achieve the automatic and autonomous mission use-case of the robotic system deploying this System’s 100 architecture.
  • the perception, prioritization, and awareness generated by the processing computational stream Totem Pole Stack 104 enable mitigation and counteraction by the System 100, since various GUI updates and alerts, as shown in nodes 120, 124, 132, and 136 and via the Output 103 node, are present when/after new perception/awareness is generated.
  • the Output 103 node may operate/drive/fly a real robotic apparatus hosting or connected to the System 100 by means of issuance of command and control (C2) commands to move towards or away from objects, events, or people.
  • the Identify 118 node perceives two occupants in the presence of a fire (heat source)
  • the Decide 122 associates the fire and occupants as being in danger (for example by proximity or by ambient temperature)
  • the Anomaly 200 and Posture/Pose 202 create awareness that one occupant is lying motionless while the second adult male occupant is seated and waving at a UAV/drone carrying the edge-processing hardware running the System 100.
  • the Record 126 node recognizes that the male who is waving at the UAV/drone is a police officer, given the on-board "friendlies" database.
  • the Comm./Network 140 node and the Output 103 node may be configured to issue commands to the C2 to get a closer inspection, while sending back an urgent support request that contains recorded capture (image/video/audio), occupancy, status/triage of victims, location, environmental condition (such as room temperature), and possibly the presence of the nearest access point from the outside or the staircase.
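The support request could be serialized as a simple structured message; the field names and values below are assumptions drawn from the disclosure's list of contents, not a defined format.

```python
# Illustrative payload for the urgent support request described above.
support_request = {
    "capture": ["frame_0421.jpg", "clip_0421.mp4"],   # recorded image/video/audio
    "occupancy": 2,
    "triage": {"occupant_1": "motionless", "occupant_2": "responsive"},
    "location": "third floor, east wing",
    "environment": {"room_temp_c": 57},
    "nearest_access": "north window / staircase",
}
```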
  • Backend and Loop 106 reassigns and tracks the POI and/or HIO as blobs so computational resources are relieved, and the logic, while passing through the User Input 138 node, can (optionally) receive user inputs such as a command to land and keep monitoring the victims or to carry forward, or changes to the System 100 behavior such as recording continuously, non-continuously, etc.
  • the System 100 proceeds autonomously to carry out the intended mission as originally defined during the initialization and deployment of the System 100 by the Input 101 at the beginning of the operation.
  • the UAV/drone carrying the System 100 may continue to explore and “look” for victims until the battery dies, or autonomously return home at 40% battery discharge, or attempt to exit the structure from the nearest window, and so on as ordained by the existing Input 101 node mission profile.
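A mission profile of this kind reduces to a small fallback policy. The sketch below is illustrative only: the 40% discharge threshold and the behaviors come from the example above, while the function and action names are assumptions.

```python
# Hypothetical mission-fallback policy for the Input 101 mission profile.
def mission_step(discharge_pct, victims_in_view, user_command=None):
    if user_command is not None:      # User Input 138 takes precedence
        return user_command
    if discharge_pct >= 40:           # return home at 40% battery discharge
        return "return_home"
    if victims_in_view:
        return "monitor_victims"      # e.g., land and keep monitoring
    return "explore"                  # continue looking for victims
```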
  • the processing computational stream entitled Totem Pole Stack has a co-dependent and communicative relationship among its major algorithmic processes, annotated as (0) Tracking, (1) Identification, (2) Decision-making, and (4) Recognition, which are discretized and otherwise "sit" on top of one another, whereby at each modular totem block a higher degree of awareness and autonomy is reached by the disclosed algorithm fusion system.
  • FIG. 4 illustrates additional decision-making computational algorithms sitting above the (2) Decision-making totem block, whereby additional awareness is achievable by the Totem Pole Stack.
  • FIGS. 3, 4, and 5 illustrate the underpinning functionality and methods deployed by the embodiments as implemented within the shown totem block algorithmic processes.
  • FIG. 8 illustrates a non-limiting example of real-world deployment of the “who ate my doughnut” example use-case as utilizing the architecture as disclosed in FIG. 1 and FIG. 2.
  • the three images show what the algorithmic processes are performing at the identification, decision-making, and recording nodes.
  • the subject/humanoid is recognized but does not display a threat object or high interest object (HIO) and thereby has his identity concealed/masked such that even if the decision-making or record algorithms attempted to capture information, the privacy filter has masked the facial features.
  • a different subject holding the doughnut is identified and tracked. Since the subject is in possession of the HIO, the decision is also made to save the image and capture the distinguishing facial features of the subject, who is now recorded into the database and upgraded to person of interest (POI). From here forth, should the POI succeed in leaving the viewing frame, the subject will automatically be upgraded to POI upon return even if the HIO is not displayed.
  • the camera, while equipped with pan and tilt actuation, actively tracks the subject having the highest degree of interest, such as a suspect rather than a person of interest.
  • actuation of the system may be influenced, for example, by other perception enabled by the decision-making process such as based on threat profile (e.g., track the POI with a gun rather than the assailant displaying a knife).
  • the POI holds the HIO to his mouth. This change in posture and/or the distance of the HIO to his mouth causes an elevation in suspicion and thereby downgrades his status to suspect while committing his current distinguishing features to the suspect database.
  • the anomaly and pose detection stated above, for example, is equivalent in this embodiment to nodes 200 and 202 described in FIG. 2.
  • numerous different perception algorithms working/fused with the decision-making node may be deployed to raise the proper awareness/alertness that affects probable cause, mitigation/intervention, and documenting of the chain of events significant to the mission use-case.
  • the pan/tilt installed in the system used to collect the real-world events depicted in FIG. 8 could expand to other camera systems.
  • the camera systems may communicate with one another so each follows a unique target. (This example is not limited to two cameras and/or two subjects, but could be applied to multiple cameras and multiple subjects.)
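Such coordination can be as simple as the cameras sharing which target each has claimed. The following greedy assignment is a sketch under assumed data shapes (an "id" and "priority" field per target), not a protocol from the disclosure.

```python
# Hypothetical one-to-one camera/target assignment so each camera follows a
# unique subject; higher-priority targets (e.g., suspects) are claimed first.
def assign_targets(cameras, targets):
    taken, assignment = set(), {}
    for cam in cameras:
        for tgt in sorted(targets, key=lambda t: -t["priority"]):
            if tgt["id"] not in taken:
                assignment[cam] = tgt["id"]
                taken.add(tgt["id"])
                break
    return assignment
```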
  • these vehicles may autonomously locate, track, prioritize, alert, record, mitigate, and choose to follow/track disparate assailants.
  • Another non-limiting example may include a coffee mug having a colorful design, ornamental shape, and functional attributes.
  • when the coffee mug is twisted, in addition to the color movement and apparent texturing, the coffee mug may resemble a closed helical shape that is wider at the base and narrower at the top, and include a handle for holding and a spout positioned for sipping at just the perfect angle as one elevates the coffee mug to one’s mouth. Further, a plain glass of water may be positioned next to the coffee mug.
  • the AI-based algorithms do not have to continuously run and perceive/classify, and in some cases are called into use only if other triggering events do or do not call upon them.
  • only if relevant features of the coffee mug match the description of the desired mug is it handled for further processing, and its processing occurs once or discretely in time as the tracking of the object is passed to the blob/pixel-correlation tracker, which follows the mug but is otherwise no longer knowledgeable that it is a mug; rather, it is just an object (blob) it was told to track by prior/other fused algorithms.
  • FIG. 9 illustrates a schematic view of an exemplary embodiment of a system implementing one or more of the above-described embodiments illustrated in FIGS. 1-8.
  • the system comprises User Interface 300 and Server 500, which communicate over Network 400.
  • Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof.
  • a code segment or machine-executable instructions may represent a sub-program, sub-process, subprocedure, process, logic, logic flow, procedure, a function, a subprogram, a program, a routine, a subroutine, a module, a software package, a class, or any combination of instructions, data structures, or program statements.
  • a code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
  • the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium.
  • the steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a computer-readable or processor-readable storage medium.
  • a non-transitory computer-readable or processor-readable media includes both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another.
  • a non-transitory processor-readable storage media may be any available media that may be accessed by a computer.
  • such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor.
  • Disk and disc include compact disc, laser disc, optical disc, digital versatile disc, floppy disk, and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
  • the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
  • the term database used herein may store a set of instructions, signal data, timestamps, and navigation messages.
  • the database implementations include, but are not limited to, a random access memory (RAM), a read only memory (ROM), a hard disk drive (HDD), a secure digital (SD) card, a magneto-resistive read/write memory, an optical read/write memory, a cache memory, or a magnetic read/write memory.
  • the database may further include one or more instructions that are executable by a processor associated with a server.
  • server used herein may be a computing device comprising a processor and non-transitory machine-readable storage capable of executing various tasks and processes described herein.
  • Non-limiting examples of the computing devices may include workstation computers, laptop computers, server computers, and the like.
  • the system architecture may include any number of server computing devices operating in a distributed computing environment.
  • a server may execute an algorithm to perform the algorithm fusion system and method described herein.
  • the server may train a heuristic learning algorithm model, which is configured to emulate resolution patterns or working patterns of the image or object detected corresponding to processing of the application requests of one or more previously-considered images or objects.
  • the heuristic learning algorithm model may be a machine learning data model, which may include data trees.
  • the server may receive an input of a heuristic learning algorithm dataset.
  • the heuristic learning algorithm dataset may include the profile data associated with the one or more images or objects.
  • the server may use a support vector machine with the heuristic learning algorithm dataset as an input to generate the heuristic learning algorithm model.
  • the support vector machine is a supervised learning model with associated learning algorithms that analyze the profile data used for classification and regression analysis.
  • the support vector machine training algorithm builds the heuristic learning algorithm model that assigns new data to one category or the other, making it a non-probabilistic binary linear classifier.
  • the heuristic learning algorithm model is a representation of the data as points in space.
  • the heuristic learning algorithm model may include a network of decision nodes.
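As a minimal sketch of the support-vector-machine step just described, the snippet below uses scikit-learn (an assumed library choice; the disclosure names none) with toy profile data.

```python
# Train a non-probabilistic binary linear classifier on profile data, then
# assign new data to one category or the other. Data values are toy examples.
from sklearn import svm

profile_data = [[0.1, 0.8], [0.9, 0.2], [0.2, 0.7], [0.8, 0.3]]  # feature vectors
labels = [0, 1, 0, 1]                                            # two categories

model = svm.SVC(kernel="linear")   # linear support vector machine
model.fit(profile_data, labels)    # build the heuristic learning model

print(model.predict([[0.15, 0.75]]))  # classify a new image/object profile
```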

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Automation & Control Theory (AREA)
  • Alarm Systems (AREA)

Abstract

The present disclosure provides a process/architecture that allows for scalable and modularized algorithm fused autonomous deployment of a system without requiring all-autonomous capability in every instance of the system's perception of and interaction with an environment. This results in efficient decision-making that utilizes low computational hardware performance and requires less or no load on human decision-maker(s).

Description

SYSTEM AND METHOD FOR ENABLING EFFICIENT HARDWARE-TO-SOFTWARE PERFORMANCE
REFERENCE TO RELATED PATENT APPLICATION
The present application claims the benefit of U.S. Provisional Application No. 63/004,495, filed on April 2, 2020.
TECHNICAL FIELD
The present disclosure relates to a system and method for enabling efficient hardware-to- software performance utilized with self-contained autonomous robotics/systems that are either mobile or fixed positioned.
BACKGROUND
The First Industrial Revolution used water and steam power to mechanize production. The Second Industrial Revolution used electric power to create mass production. The Third Industrial Revolution used electronics and information technology to automate production. Now a Fourth Industrial Revolution (current industrial revolution) is building on the Third Industrial Revolution, the digital revolution often characterized by a fusion of technologies that is blurring the lines between the physical and digital spheres and is pushing autonomy and/or automation into numerous industries or cross-sectional segments of various neighboring industries to achieve greater efficiencies in their intended processes as compared to the prior or existing process.
The introduction of these technological advancements, including the novel systems and processes described in this specification, continues to generate increasing demand to consume/process large quantities of data (big data) while reacting to computed insights typically at speeds and cognitive abilities that far exceed human-based ability to perceive, correlate, and interpret events in a dynamic and/or complex environment. One value driver of the current industrial revolution resides in the proper definitions of “automation” and “autonomy” such that these systems are implemented correctly. Automated and autonomous systems can be viewed as a continuum in capability and thereby added value. However, this continuum is sensitive to the law of diminishing returns, whereby complexity in autonomy to drive intelligent capabilities can become detrimental to progress and the desired return on investment.
A commonly used definition of automation is a process performed without human assistance. However, the misnomer in this definition is that automation/automated is more procedure than process. A “procedure” is commonly defined as “a series of actions conducted in a certain order or manner” - which is devoid of outcome metrics or success. As such, automation relates to the performance of mundane procedural actions otherwise applicable to only controlled and predictable circumstances. For example, systems configured to place labels onto soda cans, or to provide cruise control functionality to a vehicle are both procedures in which automation relieves user(s) from those specific segments of tasks.
By contrast, autonomy (autonomous capability) requires satisfactory performance under uncertainties in the environment and the ability to compensate for system failures without external (human/supervisor) intervention. The definition of autonomous capability thus relies on satisfactory performance, just as the definition of the word “process” does. A “process” is commonly defined as “a series of actions or steps taken in order to achieve a particular end.” As such, autonomy utilizes the implemented artificial intelligence (AI) as a source of learned experience and makes the appropriate choices within the perceptual limitations and finite computation available. Additionally, during the life of this specification and beyond, the learned experiences contained in the datasets will be self-evolving via external refresh/update/augmentation of the dataset(s), internal refresh/update/augmentation of the dataset(s), entirely self-generated dataset(s), or all of the above.
As described in later sections of this specification, there are six degrees/levels (starting from zero to five) of autonomy accepted as current industry standard (such as by the Society of Automotive Engineers (SAE), the U.S. Department of Transportation’s National Highway Traffic Safety Administration (NHTSA) and others). In the highest (sixth) degree/application of autonomy, the fully autonomous system is completely non-reliant on a human operator (external intervention). This is, for example, a desired goal for driverless vehicle development that promises to deliver vehicles that go as far as not including human driver input devices such as the steering wheel or the pedals for acceleration or braking, and so on. The novelty of this specification is that it provides a process/architecture that allows for scalable (that is also open-source friendly) and modularized algorithm fused autonomous deployment of the system without requiring all-autonomous capability in every instance of the system’s perception of and interaction with an environment.
The above stated novelty can also be specified using the current industry definition for intelligent systems or (machine) intelligence. Whereas intelligent system is commonly defined as a computationally capable system that uses techniques derived from artificial intelligence, particularly one in which such techniques are central to the operation or design of the system. As a result, the central/core artificial intelligence process can be both computationally intensive and possibly a weak point (vulnerability) of the system, e.g., should it fail to perform its predictions/classifications correctly. Further, the common definition of intelligence (for computational machines/hardware) is a system that manages data gathering as it obtains and processes data, interprets the data, and provides reasoned judgements/decisions/insights as a basis for actions in response to sensed stimulus/anomaly.
While taking into consideration the prior mentioned definitions for intelligent systems and machine intelligence, the novelty of this specification, in terms of these definitions, is the recognition that just because one can design an all-intelligent system, it should not always be designed in this manner, since these computational processes require additional hardware burden/cost and/or have an increased chance of a system-wide failure. The algorithm fusion system and method presented in this specification describes a novel blend of AI-based and non-AI-based algorithms to achieve intelligence without requiring an “all-intelligent” system.
U.S. Pat. No. 10586464B2, U.S. Pat. No. 10600295B2, U.S. Pat. No. 6842674B2, WO Pub. No. 2019199967A1, and U.S. Pat. App. Pub. No. 20040030571A1 each describe implementation of artificial intelligence within systems. However, they do not capture the algorithm fusion for system-wide hybridization of autonomous processes and automatic procedures to deliver a system that behaves as if it is a full-autonomy capable system even though it contains various modular blocks of code that do not include or leverage AI-based algorithms.
SUMMARY
One objective of the present disclosure is to provide an algorithm fusion method and system for efficient decision-making that utilizes low computational hardware performance and requires less or no load on human decision-maker(s).
To achieve this objective, the present disclosure describes a system and method having autonomous and automated capabilities configured to function as a fully autonomous system, and thereby achieve the benefits of full autonomy, without the high computational cost, development cost, and subsequent hardware costs (including power consumption and heat generation) given the complexity of a truly full-autonomy system. For example, the present disclosure is designed to deliver a pseudo full-autonomy system/architecture from the perspective of the on-board processes and its implemented algorithm fusion while the user perceives the execution and receives the benefits provided by the pseudo full-autonomy system in a fashion that is sufficiently equal to or equal to an analogous system that does implement computational processes that are fully aligned with the current definition of autonomy (at its highest degree/level) that is reliant and robustly developed using artificial intelligence algorithms and their underpinning datasets used for perceiving and tracking the mission-specific requirements.
In addition, the present disclosure describes edge-processed algorithm fusion for robots/systems typically found in industries enabled by unmanned and manned systems dynamically adaptable for travel in aerial, terrestrial, subterranean, indoor, enclosed, irregular, blended, and marine domains, having any constant or dynamic environmental conditions, in a wide range of autonomous or semi-autonomous control regimes, or in human-machine type systems that provide a workload reduction onto the human-in-the-loop (operator) counterpart.
The present disclosure also describes a scalable and modular algorithm architecture that creates one or more computationally efficient processes that enable edge-processed decision-making that is essential to mission operability, navigation operability, or both.
The present disclosure also describes self-contained robots/systems, specifically airborne vehicles/systems or those whose power distribution cost is primarily consumed during translation, rotation, and/or interaction with the environment, that require a balance in the cost of deployed hardware, the weight of the system, the size of the system, the power consumption of the system, the intelligent capability of the system to make self-contained decisions and/or decisions consistent with the human-in-the-loop preset needs/requirements, and the ability to balance the SWaP (size, weight, and power) requirements with artificial intelligence-based (AI-based) algorithms and traditional (non-AI-based) algorithms so the system’s behavior, perception, and/or interaction is possible without exceeding the computational limits of the computationally limited on-board hardware, such as computational power consumption and/or heat typically generated by the on-board computational hardware.
The present disclosure also describes self-contained robots/systems whose on-board or distributed algorithm fusion drives efficiencies in the collaboration of discrete/modular or blended blocks of code responsible for the system’s robotic autonomy and automation of tasks/behavior.
Hardware advantages obtained through the present system and method include the following:
1) On-demand calling of algorithm/code: algorithms (AI/non-AI/mix) requiring various levels of computational resources or frame rates are summoned as needed.
2) Algorithm stack interconnectivity: algorithm interconnectivity allows for algorithm fusion so flagged items/things/people/anomalies/etc. are selected for further scrutiny by other/subsequent collaborating algorithms.
3) Algorithm stack crosscheck: allows for algorithm fusion to crosscheck data/information/commands to mitigate misreports/errors, e.g., a mop propped up against the wall may appear to be a human head with braided hair to one algorithm, but fails further crosschecks and is then disregarded. Additionally, what the algorithm “thinks” is errant during crosscheck may be logged into another database for human/operator validation. Crosscheck also enables handling of special situations, e.g., identical twins or perpetrators wearing the same mask are in view, so algorithm fusion robustness is achieved against this type of sophisticated attack. (A sketch of this crosscheck appears below.)
4) Local algorithm modification/update: mission/use-case migration/enhancement can be implemented locally within one or more algorithms of the stack and/or by modifying the interconnectivity of data/information from one or more algorithm(s) to the next and/or changing the sequence of the algorithms in the stack and/or adding/removing algorithms of the stack.
5) Hardware/robot/drone control: the algorithm fusion, with optional integration with additional sensors and/or fused sensor data/information, may partially or entirely control actuation of the system while making decisions (e.g., navigation-based decisions, mission-based prioritization decisions, etc.).
As a result, the present system requires less power consumption to run the processing hardware, lowers cost of deployed hardware, increases deployability/compactness (e.g., physically smaller processing hardware), lowers weight of processing hardware in payload sensitive cases, results in less heat being generated by the processing hardware, increases opportunity for processing hardware placement/integration, and reduces (to no) requirement to actively cool (e.g. fan/forced cooling medium) the processing hardware.
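The crosscheck behavior from item 3 of the list above can be sketched as a simple agreement gate; all names here are assumptions for illustration.

```python
# Accept a detection only if an independent second algorithm agrees; otherwise
# log it for human/operator validation, as in the mop-vs-head example above.
def crosscheck(detection, secondary_check, review_log):
    if secondary_check(detection):    # e.g., a second classifier concurs
        return True                   # detection survives the crosscheck
    review_log.append(detection)      # possibly errant: queue for operator review
    return False                      # e.g., the "braided-hair" mop is disregarded
```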
Further advantages may include:
1) Privacy and Probable Cause: the algorithms’ deployment is configurable to follow an escalation of suspicious activity, e.g., only record once flag(s) are raised and thereby does not intrude on lawful activity, e.g., adjust process flow to satisfy privacy laws.
2) Investigation: the algorithms’ deployment is configurable to capture supporting threat actors or unwilling co-conspirators.
3) Legal: The algorithms’ deployment is configurable to not characterize anybody as a person of interest or suspect, and to provide evidence management tools, e.g., seamless upload to server.
Technology level advantages may include:
1) Layered tracking: Difficult to confuse multiple algorithms keeping track of the same POI/suspect.
2) Use-case pivot: Changes to flag raising events/objects creates product/system pivot to other use cases (e.g., commercial/non-military).
3) Camera system sophistication: match hardware with use-case, e.g., day/night vision, zoom, pan/tilt, high-definition image, etc.
4) Deployment: stand-alone, battery/plug-in, fixed/mobile, man-portable/robot (drone) carried; customer decides.
High-level advantages may include:
1) Forensics: use a gesture/hand tracking algorithm of the POI/suspect to document fingerprints, DNA, and other bio-tracers to be collected at known locations by a forensics team.
2) Prioritization: Use anomaly algorithm to aid in threat level quantification.
3) Damage: Use pose/posture to aid in damage triage.
Details of the present disclosure, including examples of various aspects of the disclosure, are described below and are used for illustrative purposes only.
BRIEF DESCRIPTION OF THE DRAWINGS
The following drawings illustrate examples of various components of embodiments of the present disclosure and are for illustrative purposes only. Embodiments of the present disclosure are illustrated by way of example and not limitation in the figures of the accompanying drawings, and in which:
FIG. 1 illustrates a block diagram showing one embodiment of a simplified algorithm fusion process and architecture driving the interaction between autonomous and automation algorithms;
FIG. 2 illustrates a block diagram showing one example of a robust algorithm fusion process and architecture driving the interaction between autonomous and automation algorithms having additional AI-based decision-making algorithms that assist in the decision-making block;
FIG. 3 illustrates a pictorial visualization of FIG. 1, according to one embodiment, within the context of an algorithm fusion stack comprising discretized and modularized connected blocks as totems in a totem pole;
FIG. 4 illustrates a pictorial visualization of FIG. 2, according to one embodiment, within the context of an algorithm fusion stack comprising discretized and modularized connected blocks as totems in a totem pole;
FIG. 5 illustrates a pictorial visualization of FIG. 4, according to one embodiment, within the context of an algorithm fusion stack comprising discretized and modularized connected blocks with detailed description of the AI-based and non AI-based algorithms implemented as totems in a totem pole;
FIG. 6 illustrates an application according to one embodiment of a process flow of algorithm fusion applied to a person of interest detection, e.g., “who ate my doughnut” example;
FIG. 7 illustrates an application of the updated process flow of the algorithm fusion applied to a person of interest detection for the “who ate my doughnut” example, according to one embodiment, with additional response to a triggering event that elevates probable cause, prioritization, and target tracking discrimination for a suspect (who ate the doughnut);
FIG. 8 illustrates an implementation of the algorithm fusion process, according to one embodiment, with actual subjects for a mission use-case of finding/tracking and documenting the “who ate my doughnut” example; and
FIG. 9 illustrates a schematic view of an exemplary embodiment of a computer system implementing one or more of the above-described embodiments illustrated in FIGS. 1-8.
DETAILED DESCRIPTION
Reference will now be made to the embodiments illustrated in the drawings, and specific language will be used here to describe the same. It will nevertheless be understood that no limitation of the scope of the claims or this disclosure is thereby intended. Alterations and further modifications of the inventive features illustrated herein, and additional applications of the principles of the subject matter illustrated herein, which would occur to one ordinarily skilled in the relevant art and having possession of this disclosure, are to be considered within the scope of the subject matter disclosed herein. The present disclosure is here described in detail with reference to embodiments in the drawings, which form a part here. Other embodiments may be used and/or other changes may be made without departing from the spirit or scope of the present disclosure. The embodiments described in the detailed description are not meant to be limiting of the subject matter presented here.
The six levels (degrees) of autonomy typically accepted for robotic systems, starting from zero to five, are:
(0) - None (no autonomy)
(1) - Low (some automated assistance)
(2) - Partial Sense and Alert
(3) - Conditional Sense and Avoid
(4) - High Sense and Navigate
(5) - Full Navigation & Prioritization

For a robotic system to achieve level 5, the system must deploy perceptive capabilities that are matched by adaptive capabilities to operate in a broad range of environments, along with having the ability for error-tolerant decision-making. As such, computationally restrained robotics, for example robotics that are typically self-contained in communication (wireless) and in power availability (untethered), or networked dispersed/distributed/federated (nodal) robotics, are not able to perform the complexity of multithreaded tasks without moderate, severe, or complete loss in essential functionality, typically observed as reduced operating/computational speeds, frame rate reduction, refresh rate reduction, decision-making and/or response lag, degradation of the confidence factor/ratio in decision-making, and so on. A computationally restrained robotic/system, for example, is a robotic/system limited by one or more processing capabilities of the installed/available processor(s), e.g., CPU, GPU, TPU, etc.
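For illustration only, the six levels may be expressed as a simple taxonomy in code. The following is a minimal sketch; the enum names are paraphrases of the level descriptions above, not terms defined by this disclosure:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative encoding of the six commonly accepted autonomy levels."""
    NONE = 0                                # no autonomy
    LOW = 1                                 # some automated assistance
    PARTIAL_SENSE_AND_ALERT = 2
    CONDITIONAL_SENSE_AND_AVOID = 3
    HIGH_SENSE_AND_NAVIGATE = 4
    FULL_NAVIGATION_AND_PRIORITIZATION = 5  # requires error-tolerant decisions

def requires_error_tolerant_decisions(level: AutonomyLevel) -> bool:
    # Level 5 demands perception matched by adaptation across environments.
    return level == AutonomyLevel.FULL_NAVIGATION_AND_PRIORITIZATION
```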
To accommodate the need for complex decision-making in computationally restrained/limited system(s) or networks of system(s), the disclosed system utilizes the more computationally intensive processes only as/if needed and otherwise passes tracking of desired/targeted events/objects/people to a generalized "blob"/correlation tracker that does not place any type of classification onto the target (i.e., not AI-based). This system may be referred to as an algorithm fusion system.
For example, if the system is targeting a coffee mug, once it finds the match, it no longer continues to track the mug as a mug, but instead as a blob; the system is therefore no longer allocating computational resources to continuously perceiving the coffee mug as a coffee mug, a perception it already possesses, unless the algorithm has discretized checking in time and/or space to occasionally verify that the blob it now possesses continues to indeed be a coffee mug. By doing this, the disclosed algorithm fusion system retains/relieves computational capacity to continue to search for other target mugs that may be or become perceptible. A blob can also be referred to as an object, a portion of an object, a blotch of pixels, a pixel patch, a cluster of pixels, a blot of pixels, a spot of pixels, a mass of pixels, or any other term referring to a group of pixels of an object or portion thereof. In some examples, a bounding box can be associated with a blob.
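A minimal sketch of this detect-then-hand-off pattern follows, assuming OpenCV's CSRT correlation tracker as one possible non-AI blob tracker; detect_mugs() is a hypothetical stand-in for the AI-based perception stage, and the disclosed system is not limited to these libraries or names:

```python
import cv2

VERIFY_EVERY_N_FRAMES = 120  # discretized check-in: occasionally re-verify the blob

def detect_mugs(frame):
    """Hypothetical AI-based detector; returns a list of (x, y, w, h) boxes."""
    raise NotImplementedError  # e.g., a trained object-detection network

def track_first_mug(video_source=0):
    cap = cv2.VideoCapture(video_source)
    tracker, frames_since_check = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if tracker is None:
            boxes = detect_mugs(frame)               # costly AI perception runs...
            if boxes:
                # may be cv2.legacy.TrackerCSRT_create() in some OpenCV 4.x builds
                tracker = cv2.TrackerCSRT_create()
                tracker.init(frame, tuple(boxes[0]))  # ...once, then hands off
            continue
        found, box = tracker.update(frame)           # cheap non-AI blob tracking
        frames_since_check += 1
        if not found or frames_since_check >= VERIFY_EVERY_N_FRAMES:
            tracker, frames_since_check = None, 0    # drop back to re-verification
    cap.release()
```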
The disclosed algorithm fusion system is best understood in the context in which it is employed. FIG. 1 and FIG. 2 show block diagrams illustrating embodiments of algorithm fusion processes and architecture driving the interaction between autonomous and automation algorithms. However, it should be appreciated that other embodiments may comprise additional or alternative flows and/or architecture, or may omit certain flows or architecture altogether. It should also be appreciated that other embodiments may perform certain execution steps in a different order; steps may also be performed simultaneously or near-simultaneously with one another or otherwise multithreaded.
FIG. 1, for example, depicts an illustrative architecture of an Algorithm Fusion System 100 in which aspects of the present disclosure may operate, hereafter also referred to simply as System 100. The dotted-line boxes indicate constituent processing logic streams of the System 100 itself, the solid-line boxes indicate sub-processing algorithmic or processing nodes of the System 100, the solid-line diamonds indicate sub-processing status check nodes of the System 100, the solid-line circles indicate input/output communication and command exchange nodes of the System 100, the solid-line top-slanted rectangles indicate user input nodes of the System 100, the solid-line left-pointed circular-edge shapes indicate graphical user interface and alerting nodes of the System 100, and the arrows indicate communicative couplings between various components, subcomponents, and nodes of the System 100.
The Algorithm Fusion System 100 comprises three processing streams: (1) Initialization and Reset 102, (2) the algorithm fused Totem Pole Stack 104, and (3) Backend and Loop 106, and comprises two communication (data/information/command) exchange nodes: (1) the Input Data/Information/Command Node 101 and (2) the Output Data/Information/Command Node 103.
The Initialization and Reset 102 processing logic stream of the Algorithm Fusion System 100 is initialized by Input 101, is looped/reset by the Backend and Loop 106 processing logic stream, and is recalled from the Totem Pole Stack 104 processing computational stream. The Sensing 108 sub-processing node is initiated by the aforementioned Input 101, looped/reset by the processing logic stream Backend and Loop 106, and recalled by the processing computational stream Totem Pole Stack 104 such that a series of checks, comprising the Already Blob Tracking 110 status check node, the Empty Database 112 status check node, and the Database Match 114 status check node, is performed to direct the logic flow to specific algorithm nodes comprised within the processing computational stream Totem Pole Stack 104. The ability to control/triage the logic flow from the processing logic stream Initialization and Reset 102 onto specific algorithm nodes within the processing computational stream Totem Pole Stack 104 enables the System 100 to achieve greater computational/processing efficiency, since certain AI-based algorithms are bypassed depending on existing/returning flags contained in the logic that is set along the path of the overall logic flow of the System 100.
Once the logic within the processing logic stream Initialization and Reset 102 reaches the sub-processing node Already Blob Tracking 110, if the answer is "no," then the logic is passed to the processing computational stream Totem Pole Stack 104 via the Identify 118 sub-process, which comprises perception algorithms configured to detect, for example, the presence of people (humanoids) and high importance objects (HIO) based on the available dataset. If the answer is "yes," then the logic cascades down to the sub-processing node Database Empty 112. Once/if the logic within the processing logic stream Initialization and Reset 102 reaches the sub-processing node Database Empty 112, if the answer is "yes," then the logic is passed to the processing computational stream Totem Pole Stack 104 via the Identify 118 sub-process, which in turn comprises perception algorithms that detect, for example, the presence of people (humanoids) and high importance objects (HIO) based on the available dataset; however, if the answer is "no," then the logic cascades down to the sub-processing node Database Match 114.
Once/if the logic within the processing logic stream Initialization and Reset 102 reaches the sub-processing node Database Match 114, if the answer is "yes," then the logic is passed to the processing computational stream Totem Pole Stack 104 via the Decide 122 sub-process, which in turn comprises decision-making logic and perception algorithms that may further elevate the level of suspicion (and probable cause), as explained later in the detailed description of FIG. 2 and further exemplified in FIGS. 6, 7, and 8.
However, if the answer is "no" to the status check of the sub-processing node Database Match 114, then the logic is passed to the processing computational stream Totem Pole Stack 104 via the Record 126 sub-process, which in turn comprises, for example, facial/biometric and/or other feature/pattern algorithms that enable instant POI/suspect detection. The logic handling to the Record 126 sub-process from the triggered "no" response of the Database Match 114 sub-process node is important because, even though the person of interest (POI) or HIO is/are already being tracked by the non-AI blob tracker (from a prior flag generated by the Already Blob Tracking 110 node instance), a facial/biometric and/or feature/pattern record (for tattoos or other prevailing features of the POI/suspect) has not yet been collected and catalogued into the database (e.g., evidence database). As such, should the POI exit the sensing view of the system and then return into view without triggering the Identify 118 sub-process, the individual (and the chain of events) has receded back into anonymity.
For example, consider a situation in which a subject is not facing the camera, but flashes a gun. The System 100 will recognize the subject in possession of the weapon via the Identify 118 and Decide 122 sub-processes and subsequently assign the blob tracker to the subject (now elevated to person of interest) even though the Record 126 sub-process (may have) failed to capture information for the database. Since the person of interest has not yet been recorded into the database by cataloguing identifying features such as facial features, if the POI leaves the System's 100 viewing area and later returns without brandishing the gun, the POI returns to anonymity, with the one caveat that a picture and/or video exists of the POI, whose back was turned to the camera while brandishing the gun, for investigators to review later or for additional post-processing performed by other forensic artificial intelligence tools. However, if the POI remains in view of the System 100, the logic subsequently loops via the processing logic stream Backend and Loop 106, which recalls the Initialization and Reset 102 processing logic stream whereby the Sensing 108 sub-processing node is re-initiated, thus completing the loop.
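Summarizing the status-check triage of the preceding paragraphs, a condensed sketch follows; identify(), decide(), record(), and database_match() are hypothetical stand-ins for the Identify 118, Decide 122, Record 126, and Database Match 114 nodes, whose actual implementations are as disclosed in FIG. 1:

```python
def identify(frame): ...                   # Identify 118: AI perception (people, HIO)
def decide(frame, match): ...              # Decide 122: may elevate suspicion
def record(frame): ...                     # Record 126: facial/biometric cataloguing
def database_match(frame, database): ...   # Database Match 114 status check

def initialization_and_reset(frame, blob_trackers, database):
    """Direct the logic flow to a Totem Pole Stack 104 node (sketch only)."""
    if not blob_trackers:                  # Already Blob Tracking 110: "no"
        return identify(frame)
    if not database:                       # Database Empty 112: "yes"
        return identify(frame)
    match = database_match(frame, database)
    if match:                              # Database Match 114: "yes"
        return decide(frame, match)
    return record(frame)                   # "no": catalogue features now
```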
Once/if the logic within the processing computational stream Totem Pole Stack 104 reaches the sub-processing node Identify 118, if the answer is "no," then the logic is passed to the Apply Privacy Filter 116 sub-processing node, whereby privacy measures are placed to protect the rights and freedoms of the viewable individuals and their subsequent lawful affairs, such that logic is returned to the Sense 108 starting node of the processing logic stream Initialization and Reset 102. The logic flow described in this paragraph is important as it allows for a computationally efficient path of analyzing the viewable sight picture without any use of further classification, recognition, or recording of any kind. For example, if five people are in view of the System 100 and none are displaying an HIO or raising any other object-of-interest-related alerts in the Identify 118 node, then the entire looping logic path resides in the top right corner of the logic path in FIG. 1 and in FIG. 2.
However, if the answer is "yes" to the sub-processing node Identify 118, then the logic is passed to the GUI Alert & Remove Privacy Filter 120 sub-processing node and subsequently reaches the Decide 122 sub-processing node. The GUI (graphical user interface) Alert & Remove Privacy Filter 120 node is the first indication to the user/stakeholder that the System 100 has detected sufficient probable cause to unmask the privacy filter, therefore allowing the Decide 122 and Record 126 nodes to gain access to the data feed whereby their subsequent analysis and decisions/actions to record the chain of events and participants take place. Another benefit in splitting the autonomous processes is that the system allows for discretization of significant real-world considerations: (1) moral/ethical implementation of privacy filtering independent of, and ahead of, probable-cause-based scrutiny and analysis, whereby the System 100 contains meaningful alignment to the legal mooring of innocence until proven guilty (presumption of innocence), resulting in greater chances of positive presentation and evidence applicability towards proceedings such as in a court of law; (2) continuous operation of the System 100 even if discrete algorithmic processes fail to make an adequate detection, rather than stalling the logic flow; and (3) discrete compartmentalization of AI-based algorithms such that implementations of open-source algorithms or improvements in proprietary algorithms are integrable or swappable, resulting in adaptability and scalability with respect to, but not limited to, code-based improvements, database/trained-data-based improvements, mission-based migration, hardware obsolescence, and performance matching to decreases or increases in the computational hardware running the System 100.
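A hedged sketch of this privacy-filter split follows, using OpenCV's bundled Haar face detector as one possible masking mechanism; hio_detected() is a hypothetical stand-in for the Identify 118 perception, not the disclosed algorithm:

```python
import cv2

_face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def hio_detected(frame) -> bool:
    """Hypothetical: True when a high importance object raises probable cause."""
    return False  # stand-in for the Identify 118 node

def apply_privacy_filter(frame):
    """Mask all faces so downstream nodes cannot capture identities."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in _face_cascade.detectMultiScale(gray, 1.1, 5):
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)
    return frame

def process(frame):
    if not hio_detected(frame):
        # No probable cause: stay in the efficient masked loop (top right of FIG. 1).
        return apply_privacy_filter(frame)
    return frame  # GUI Alert & Remove Privacy Filter 120, then Decide 122
```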
Once/if the logic within the processing computational stream Totem Pole Stack 104 reaches the sub-processing node Decide 122, if the answer is "no," then the logic is returned to the Sense 108 sub-processing node of the processing logic stream Initialization and Reset 102 while bypassing the Apply Privacy Filter 116 node. The return of the logic flow to the Sense 108 node without re-masking the privacy filter is important because the Decide 122 node has more than one trigger: (1) it may be called in step after the GUI Alert & Remove Privacy Filter 120 node, or (2) it may be called by the Database Match 114 node. As described earlier, when the Decide 122 node is triggered by the Database Match 114 node, the logic carries knowledge that the subject is an existing match in the database, for example as elevated from subject to person of interest. However, additional flags/anomalies may be perceived by the Decide 122 node that may further elevate the POI to the suspect level, whereby this process is further disclosed in FIG. 2 and further shown in FIGS. 6, 7, and 8.
FIG. 6 illustrates a "who ate my doughnut" example use-case. In this example, a person (female subject A) is in view of the System 100 and displays the threat object (the doughnut), whereby the Identify 118 node perceives the presence of the person and the threat object. In this example, the Decide 122 node perceives that the threat object (the doughnut) is in the possession of the person, both records the event and elevates subject A's status to POI, as also shown in FIG. 6, and subsequently records subject A's unique identifying features (such as facial features) to the POI database. However, possession of the doughnut alone is not a crime, e.g., it does not indicate clear intent to eat the doughnut (the crime) or prove that subject A is the person who ate the doughnut (committed the crime). Instead, if subject A gives the doughnut to a fellow police officer (male subject B), as illustrated in FIG. 7, then subject B is added to the POI database and documented by the Decide 122 node. But, in this case, subject B eats the doughnut and is allocated/elevated to the suspect database while having documentation of the event automatically recorded by the System 100, as also illustrated in FIG. 7. However, the ability to create degrees of sensitivity to suspicious and illicit activity depends not only on the return arrow logic flow path of either the "no" or the "yes" of the Decide 122 node, but also on additional perception algorithms fused/associated with the Decide 122 node, as shown in FIG. 2 by inclusion of, but not limited to, the Anomaly 200 and/or Posture/Pose 202 nodes. For the System 100 to sense the additional actions which elevate the POI to suspect in the use-case illustrated in FIGS. 6, 7, and 8, the posture/pose of the POI in possession of the doughnut gives the System awareness that the doughnut is being eaten by subject B; therefore, a trained dataset and perception algorithms (AI-based or traditional) operate in close relationship to the Decide 122 node, which sets degrees of prioritization, threat level, lethality, or even innocence, for example by way of identification of aggressor versus non-aggressor, and so on.
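For illustration, the elevation ladder of this use-case may be sketched as follows; the rules shown (possession raises a subject to POI, possession plus an eating pose raises the POI to suspect) are a simplified assumption standing in for the interplay of the Decide 122, Anomaly 200, and Posture/Pose 202 nodes:

```python
from enum import IntEnum

class Status(IntEnum):
    SUBJECT = 0
    PERSON_OF_INTEREST = 1
    SUSPECT = 2

def elevate(status: Status, has_hio: bool, eating_pose: bool) -> Status:
    """Sketch of the Decide 122 elevation rules for the doughnut example."""
    if has_hio and eating_pose:
        return Status.SUSPECT                          # commit to suspect database
    if has_hio:
        return max(status, Status.PERSON_OF_INTEREST)  # record to POI database
    return status                                      # possession alone is no crime
```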
The (threat/target) prioritization processes and scalability of the System 100 as described above are further applicable to the implementation of modular playbooks (terminology coined by the US Military) that define the rules (desired awareness/databases) of interest that the Identify 118 node, the Decide 122 node, and the subsequent prioritization nodes, such as the Anomaly 200 and the Posture/Pose 202, use to achieve the automatic and autonomous mission use-case of the robotic system deploying this System's 100 architecture. In addition, the perception, prioritization, and awareness generated by the processing computational stream Totem Pole Stack 104 enable mitigation and counteraction by the System 100, since various GUI updates and alerts, as shown in nodes 120, 124, 132, and 136, and via the Output 103 node, are presented when/after new perception/awareness is generated.
Further, the Output 103 node may operate/drive/fly a real robotic apparatus hosting or connected to the System 100 by means of issuance of command and control (C2) commands to move towards or away from objects, events, or people. For example, the Identify 118 node perceives two occupants in the presence of a fire (heat source); the Decide 122 node associates the fire and occupants as being in danger (for example by proximity or by ambient temperature); and the Anomaly 200 and Posture/Pose 202 nodes create awareness that one occupant is lying motionless while the second, an adult male occupant, is seated and waving at a UAV/drone carrying the edge-processing hardware running the System 100. Additionally, the Record 126 node recognizes that the male waving to the UAV/drone is identified as a police officer given the on-board "friendlies" database. In this scenario, the Comm./Network 140 node and the Output 103 node may be configured to issue a command to the C2 to get a closer inspection, while sending back an urgent support request that contains the recorded capture (image/video/audio), occupancy, status/triage of victims, location, environmental condition (such as room temperature), and possibly the presence of the nearest access point from the outside or the staircase. Furthermore, the logic, as it flows into the processing logic stream Backend and Loop 106, reassigns and tracks the POI and/or HIO as blobs so computational resources are relieved, and the logic, while passing through the User Input 138 node, can (optionally) receive user inputs such as the command to land and remain monitoring the victims or to carry forward, or changes to the System 100 behavior to record continuously, non-continuously, etc. In contrast, if no User Input 138 is provided, then the System 100 proceeds autonomously to carry out the intended mission as originally defined during the initialization and deployment of the System 100 by the Input 101 at the beginning of the operation. In this example, the UAV/drone carrying the System 100 may continue to explore and "look" for victims until the battery dies, or autonomously return home at 40% battery discharge, or attempt to exit the structure from the nearest window, and so on, as ordained by the existing Input 101 node mission profile.
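A minimal sketch of this user-override-else-mission behavior follows; the command strings and the 40% threshold are illustrative assumptions drawn from the example above, not a fixed interface of the System 100:

```python
RETURN_HOME_AT = 0.40  # fraction of battery remaining, per the example above

def next_command(user_input, battery_fraction, victims_in_view):
    """Sketch of Output 103 C2 issuance with optional User Input 138 override."""
    if user_input is not None:
        return user_input                  # e.g., "land_and_monitor"
    if battery_fraction <= RETURN_HOME_AT:
        return "return_home"               # autonomous fallback per mission profile
    if victims_in_view:
        return "close_inspection"          # move toward, capture, send support request
    return "continue_search"               # carry out the mission defined at Input 101
```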
As shown in FIG. 3, while the looping and interconnected structure are removed, the contextualization and intent of the disclosure of the processing computational stream entitled Totem Pole Stack retain a co-dependent and communicative relationship, as the major algorithmic processes annotated as (0) Tracking, (1) Identification, (2) Decision-making, and (3) Recognition are discretized and otherwise "sit" on top of one another, whereby at each modular totem block a higher degree of awareness and autonomy is reached by the disclosed algorithm fusion system.
FIG. 4 illustrates additional decision-making computational algorithms shown sitting above the (2) Decision-making totem block, whereby additional awareness is achievable by the Totem Pole Stack.
FIGS. 3, 4, and 5 illustrate the underpinning functionality and methods deployed by the embodiments as implemented within the shown totem block algorithmic processes.
FIG. 8 illustrates a non-limiting example of a real-world deployment of the "who ate my doughnut" example use-case utilizing the architecture disclosed in FIG. 1 and FIG. 2. The three rows of images show what the algorithmic processes are performing at the identification, decision-making, and recording nodes.
In the top row of images, for example, the subject/humanoid is recognized but does not display a threat object or high importance object (HIO) and thereby has his identity concealed/masked such that, even if the decision-making or record algorithms attempted to capture information, the privacy filter has masked the facial features.
In the middle row of images, for example, a different subject holding the doughnut (HIO) is identified and tracked. Since the subject is in possession of the HIO, the decision is also made to save the image and capture the distinguishing facial features of the subject, who is now recorded into the database and upgraded to person of interest (POI). From here forth, should the POI succeed in leaving the viewing frame, the subject will automatically be re-identified as a POI upon return even if the HIO is not displayed. Moreover, the camera, while equipped with pan and tilt actuation, actively tracks the subject having the highest degree of interest, such as a suspect rather than a person of interest. Likewise, actuation of the system may be influenced, for example, by other perception enabled by the decision-making process, such as based on threat profile (e.g., track the POI with a gun rather than the assailant displaying a knife).
In the bottom row of images, for example, the POI holds the HIO to his mouth. This change in posture and/or distance of the HIO to his mouth causes an elevation in suspicion and thereby elevates his status to suspect while committing his current distinguishing features to the suspect database. The anomaly and pose detection stated above, for example, are equivalent in this embodiment to nodes 200 and 202 described in FIG. 2. However, numerous different perception algorithms working/fused with the decision-making node may be deployed to raise the proper awareness/alertness that affects probable cause, mitigation/intervention, and documentation of the chain of events significant to the mission use-case. Further, the pan/tilt system used to collect the real-world events depicted in FIG. 8 could expand to other camera systems. For example, if two cameras were deployed and two individuals were present, while one is a POI and the other is a suspect, the camera systems may communicate with one another so that each follows a unique target, as sketched below. (This example is not limited to two cameras and/or two subjects, but could be applied to multiple cameras and multiple subjects.) By extension, if the same were to happen where drones having the disclosed architecture are deployed, these vehicles may autonomously locate, track, prioritize, alert, record, mitigate, and choose to follow/track disparate assailants autonomously.
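The referenced sketch below illustrates one way such camera-to-camera deconfliction could work, assuming each target carries a numeric priority (e.g., suspect above POI); the data shapes are assumptions, not a disclosed protocol:

```python
def assign_targets(cameras, targets):
    """Each camera claims the highest-priority unclaimed target.

    cameras: list of camera identifiers
    targets: list of (target_id, priority); higher priority = higher interest
    """
    ranked = sorted(targets, key=lambda t: t[1], reverse=True)
    return {cam: tid for cam, (tid, _) in zip(cameras, ranked)}

# Illustrative usage: the suspect (priority 2) outranks the POI (priority 1).
# assign_targets(["cam_A", "cam_B"], [("p1", 1), ("p2", 2)])
# -> {"cam_A": "p2", "cam_B": "p1"}
```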
Another non-limiting example may include a coffee mug having a colorful design, ornamental shape, and functional attributes. When the coffee mug is twisted, in addition to the color movement and apparent texturing, the coffee mug may resemble a closed helical shape that is wider at the base and narrower at the top, and may include a handle for holding and a spout positioned for sipping at just the perfect angle as one elevates the coffee mug to one's mouth. Further, a plain glass of water may be positioned next to the coffee mug. Physically/mentally healthy humans enjoy fully autonomous capability (level 5) and have vast libraries/memories (cognitive abilities) with which to near-instantly correlate objects, actions, and places; therefore, the mug and the plain glass of water are easily distinguishable, but at a computational cost to our natural cognitive systems, e.g., the colors, texture, shape, functionality, and the classifications are continuously running in our brain. While this is not commonly believed to be necessarily wasteful for the human brain to continuously process, as it (the brain) enjoys the benefits of its electro-chemical functions/messages, it has limitations nonetheless, since sensory overflow may prohibit more intelligent thought processes.
In this example, if the coffee mug and the glass of water are sensed with a fully autonomous system, then various internal databases are needed to continuously sense, track, store, and perceive both objects along with all of their individual attributes, which comes at a monetary and computational cost (e.g., what benefit does the machine continuously receive or offer as an output once it has already understood and documented the relevant features in the first place, and at what computational cost?).
Further, if the human subject is presented with the same coffee mug (the ornate version described above) and other versions of the coffee mug having fewer and fewer features, then each other version would require less and less brain power. Ultimately, the perception of a plain mug is that of just a mug, rather than one having the additional artisanal craft, and therefore requires less brain power. An analogous experiment applied to an algorithm fused system, as described in this specification, uses AI-based algorithms only in "needed sectors" of perception as coded by the programmer, and additionally passes the tracking of the object to a completely non-AI-based tracker that has no contextual understanding of what it is tracking, since it tracks only a generic shape ("blob-ness") of things. Thus, the AI-based algorithms do not have to continuously run and perceive/classify, and in some cases are called into use only if other triggering events do or do not call upon them. In this example, only if relevant features of the coffee mug match the description of the desired mug is it handled for further processing, and its processing occurs once or discretely in time as the tracking of the object is passed to the blob/pixel-correlation tracker, which follows the mug but is otherwise no longer knowledgeable that it is a mug, rather just an object (blob) it was told to track by prior/other fused algorithms.
To achieve lower brain power use in the human brain, some sort of sensory deprivation is required since, for example, just the recognition of colors sparks additional synapses. To achieve a similar reduction in computational load in machines, for example, the tracking of objects in time and space needs to be similarly "sensory deprived" of its classifications, since the machine needs only to make initial or discrete detections to momentarily achieve the complex perceptive decisions, unlike the perceptive continuum afforded to the human brain during both conscious activity and dormancy.
FIG. 9 illustrates a schematic view of an exemplary embodiment of a system implementing one or more of the above-described embodiments illustrated in FIGS. 1-8. In this embodiment, the system comprises User Interface 300 and Server 500, which communicate over Network 400.
The various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Embodiments implemented in computer software may be implemented in software, firmware, middleware, microcode, hardware description languages, or any combination thereof. A code segment or machine-executable instructions may represent a sub-program, sub-process, sub-procedure, process, logic, logic flow, procedure, function, subprogram, program, routine, subroutine, module, software package, class, or any combination of instructions, data structures, or program statements. A code segment may be coupled to another code segment or a hardware circuit by passing and/or receiving information, data, arguments, parameters, or memory contents. Information, arguments, parameters, data, etc. may be passed, forwarded, or transmitted via any suitable means including memory sharing, message passing, token passing, network transmission, etc.
The actual software code or specialized control hardware used to implement these systems and methods is not limiting of the methods and embodiments described herein. Thus, the operation and behavior of the systems and methods are described without reference to the specific software code, it being understood that software and control hardware can be designed to implement the systems and methods based on the description herein. When implemented in software, the functions may be stored as one or more instructions or code on a non-transitory computer-readable or processor-readable storage medium. The steps of a method or algorithm disclosed herein may be embodied in a processor-executable software module, which may reside on a computer-readable or processor-readable storage medium. Non-transitory computer-readable or processor-readable media include both computer storage media and tangible storage media that facilitate transfer of a computer program from one place to another. Non-transitory processor-readable storage media may be any available media that may be accessed by a computer. By way of example, and not limitation, such non-transitory processor-readable media may comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other tangible storage medium that may be used to store desired program code in the form of instructions or data structures and that may be accessed by a computer or processor. Disk and disc, as used herein, include compact disc, laser disc, optical disc, digital versatile disc, floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media. Additionally, the operations of a method or algorithm may reside as one or any combination or set of codes and/or instructions on a non-transitory processor-readable medium and/or computer-readable medium, which may be incorporated into a computer program product.
It is understood that the term database used herein may store a set of instructions, signal data, timestamps, and navigation messages. The database implementations include, but are not limited to, a random access memory (RAM), a read only memory (ROM), a hard disk drive (HDD), a secure digital (SD) card, a magneto-resistive read/write memory, an optical read/write memory, a cache memory, or a magnetic read/write memory. The database may further include one or more instructions that are executable by a processor associated with a server.
It is understood that the term server used herein may be a computing device comprising a processor and non-transitory machine-readable storage capable of executing the various tasks and processes described herein. Non-limiting examples of such computing devices include workstation computers, laptop computers, server computers, and the like. The system architecture may include any number of server computing devices operating in a distributed computing environment.
In accordance with the foregoing, a server may execute an algorithm to perform the algorithm fusion system and method described herein. The server may train a heuristic learning algorithm model, which is configured to emulate resolution patterns or working patterns of the image or object detected corresponding to the processing of the application requests of one or more previously-considered images or objects. The heuristic learning algorithm model may be a machine learning data model, which may include data trees. During the training process of the heuristic learning algorithm model, the server may receive an input of a heuristic learning algorithm dataset. The heuristic learning algorithm dataset may include the profile data associated with the one or more images or objects. The server may use a support vector machine with the heuristic learning algorithm dataset as an input to generate the heuristic learning algorithm model. The support vector machine is a supervised learning model with associated learning algorithms that analyze the profile data used for classification and regression analysis. The support vector machine training algorithm builds the heuristic learning algorithm model that assigns new data to one category or the other, making it a non-probabilistic binary linear classifier; a minimal sketch of this step appears below. The heuristic learning algorithm model is a representation of the data as points in space. The heuristic learning algorithm model may include a network of decision nodes.

The preceding description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present subject matter. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the subject matter. Thus, the present subject matter is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the following claims and the principles and novel features disclosed herein.
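As referenced above, the following is a minimal sketch of the support-vector-machine training step, using scikit-learn's SVC as one concrete stand-in; the feature layout of the "profile data" is an assumption, as the disclosure does not mandate a particular library or representation:

```python
from sklearn.svm import SVC

def train_heuristic_model(profile_vectors, labels):
    """Non-probabilistic binary linear classifier over profile data points."""
    model = SVC(kernel="linear")  # assigns new data to one of two categories
    model.fit(profile_vectors, labels)
    return model

# Illustrative usage with made-up two-dimensional profile vectors:
# model = train_heuristic_model([[0.1, 0.9], [0.8, 0.2]], [0, 1])
# model.predict([[0.7, 0.3]])  # -> array([1])
```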
While various aspects and embodiments have been disclosed, other aspects and embodiments are contemplated. The various aspects and embodiments disclosed are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.

CLAIMS

We claim:
Claim 1. An algorithm fused system, comprising:

one or more processors configured to perform artificial intelligence computational processes designed to collaboratively enable autonomy and automation functionality to a robotic system utilizing the algorithm fused system,

wherein said system employs artificial intelligence based and traditional based algorithms that comprise adaptive functions within the identification, decision-making, prioritization, mitigation, and recording processes of the algorithm fused system, and

said system reclaims computational resources of the one or more processors by switching from an artificial intelligence-based tracking algorithm to a non-artificial intelligence tracking algorithm.
PCT/US2021/025632 2020-04-02 2021-04-02 System and method for enabling efficient hardware-to-software performance WO2021203036A1 (en)

Applications Claiming Priority (2)

Application Number | Priority Date | Filing Date | Title
US202063004495P | 2020-04-02 | 2020-04-02 |
US63/004,495 | 2020-04-02 | |

Publications (1)

Publication Number | Publication Date
WO2021203036A1 |

Family

ID=77929678

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
PCT/US2021/025632 (WO2021203036A1) | System and method for enabling efficient hardware-to-software performance | 2020-04-02 | 2021-04-02

Country Status (1)

Country | Link
WO (1) | WO2021203036A1 (en)

Patent Citations (4)

(* Cited by examiner, † Cited by third party)

Publication Number | Priority Date | Publication Date | Assignee | Title
US20050183569A1 * | 2002-04-22 | 2005-08-25 | Neal Solomon | System, methods and apparatus for managing a weapon system
US20180028811A1 * | 2016-08-01 | 2018-02-01 | Peter Bart Jos Van Gerwen | Intelligent modularization
US20190360717A1 * | 2019-07-04 | 2019-11-28 | LG Electronics Inc. | Artificial intelligence device capable of automatically checking ventilation situation and method of operating the same
US20200005116A1 * | 2018-06-27 | 2020-01-02 | Sony Corporation | Artificial intelligence-enabled device for network connectivity independent delivery of consumable information



Legal Events

Code | Title | Description
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21780083; Country of ref document: EP; Kind code of ref document: A1
NENP | Non-entry into the national phase | Ref country code: DE
122 | EP: PCT application non-entry in European phase | Ref document number: 21780083; Country of ref document: EP; Kind code of ref document: A1