WO2023144780A1 - Crowdsourcing techniques to deploy artificial intelligence systems - Google Patents

Crowdsourcing techniques to deploy artificial intelligence systems

Info

Publication number
WO2023144780A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
models
objects
interest
network
Prior art date
Application number
PCT/IB2023/050753
Other languages
French (fr)
Inventor
Jayant RATTI
Abhishek TANDON
Anubhav Rohatgi
Tushar Maurya
Sanchit SAMUEL
Yash Shah
Anuj MIDDHA
Himanshu Gupta
Original Assignee
Ratti Jayant
Priority date
Filing date
Publication date
Application filed by Ratti Jayant filed Critical Ratti Jayant
Publication of WO2023144780A1 publication Critical patent/WO2023144780A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0495Quantised networks; Sparse networks; Compressed networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/096Transfer learning

Definitions

  • the invention relates to a system and method of edge-computing for AI deployment and Back-Propagation (BP) or Last in First Out (LIFO) technique for improving AI accuracy, in a crowdsourced AI stewardship network.
  • BP Back-Propagation
  • LIFO Last in First Out
  • United States Patent US9390086B2, filed by Palantir Technologies Inc., describes a technique for a classification system with methodology for enhanced verification. In one approach, a classification computer trains a classifier based on a set of training documents.
  • After training is complete, the classification computer iterates over a collection of unlabeled documents and uses the trained classifier to predict a label for each unlabeled document.
  • a verification computer retrieves one of the documents assigned a label by the classification computer. The verification computer then generates a user interface that displays select information from the document and provides an option to verify the label predicted by the classification computer or provide an alternative label. The document and the verified label are then fed back into the set of training documents and are used to retrain the classifier to improve subsequent classifications.
  • the document is indexed by a query computer based on the verified label and made available for search and display.
  • United States Patent Publication US10382300B2, filed by Evolv Technologies Inc., describes a platform for gathering real-time analysis, wherein data is received characterizing a request for agent computation of sensor data. The request includes required confidence and required latency for completion of the agent computation. Agents for the query are determined based on the required confidence. Data is transmitted to query the determined agents to provide analysis of the sensor data. Related apparatus, systems, techniques, and articles are also described.
  • United States Patent application US20190171950A1, filed by Ai Company Inc., describes a method and a system for auto-learning, artificial intelligence (AI) applications development, and execution.
  • AI artificial intelligence
  • Various applications or operations may be associated with training environment-agnostic AI models, automated AI application performance monitoring, fault, quality and performance remediation through prediction of failures or suboptimal performance, privacy-preserving and secure AI training and inference mechanisms for data and AI model sharing between untrusted parties, and building auto-learning applications that can automatically learn and improve.
  • United States patent application US20210342745A1, filed by Clarifai Inc., describes a service platform that may be provided to facilitate artificial intelligence model and data collection. Input/output information derived from machine learning models may be obtained via the service platform.
  • the input/output information may indicate (i) first items provided as input to at least one model of the machine learning models, (ii) first prediction outputs derived from the at least one model's processing of the first items, (iii) second items provided as input to at least another model of the machine learning models, (iv) second prediction outputs derived from the at least one other model's processing of the second items, and (v) other inputs and outputs.
  • the input/output information may be provided via the service platform to update a first machine learning model.
  • the first machine learning model may be updated based on the input/output information being provided as input to the first machine learning model.
  • United States Patent application US20190080461A9, filed by A9.com Inc., describes improvements to audio/video (A/V) recording and communication devices, including improved approaches to using a neighborhood alert mode for triggering multi-device recording, to a multi-camera motion tracking process, and to a multi-camera event stitching process to create a series of “storyboard” images for the activity taking place across the fields of view of multiple cameras, within a predetermined time period, for the A/V recording and communication devices.
  • United States patent publication US9183580B2 filed by Digimarc Corp, describes methods and arrangements involving portable devices. One arrangement enables a content creator to select software with which that content should be rendered—assuring continuity between artistic intention and delivery.
  • Another arrangement utilizes the camera of a smartphone to identify nearby subjects, and take actions based thereon.
  • Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice).
  • RFID near field chip
  • Some of the detailed technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines and blackboard systems. Yet other technologies concern use of linked data in portable devices, some of which exploit GPU capabilities. Still other technologies concern computational photography. A great variety of other features and arrangements are also detailed.
  • United States Patent publication US10559202B2, filed by Intel Corp, describes an apparatus comprising a memory and a processor.
  • the memory is to store sensor data captured by one or more sensors associated with a first device.
  • the processor comprises circuitry to: access the sensor data captured by the one or more sensors associated with the first device; determine that an incident occurred within a vicinity of the first device; identify a first collection of sensor data associated with the incident, wherein the first collection of sensor data is identified from the sensor data captured by the one or more sensors; preserve, on the memory, the first collection of sensor data associated with the incident; and notify one or more second devices of the incident, wherein the one or more second devices are located within the vicinity of the first device.
  • An object and/or event of interest is the information relevant to the problem vertical, wherein the problem verticals address infrastructure damage such as road conditions, traffic moving violations, brand knowledge assessment, availability of utilities such as gas and EV stations, and more; this information is utilized by the end customers.
  • Society has been relying on manual observation in the field to capture these dynamic events and/or objects of interest, but addressing the massive scale of modern global cities is an enormous challenge.
  • Artificial Intelligence software systems have been expanding into this domain by automating the detection of dynamic events and/or objects of interest and reducing the huge expense associated with a completely manual process.
  • the detection of dynamic events and/or objects of interest can be performed using analysis by one or more AI models in different parallel and sequential combinations, where, at each step, a part of the target information is localized and sent to the next step for further processing. For example, to detect whether a driver is wearing a seatbelt, the problem can be broken down by first detecting vehicles, then persons inside those vehicles, and finally determining whether each person is wearing a seatbelt. This way of localizing helps achieve the task and determine the dynamic events and/or objects of interest.
  • However, AI models are not fully accurate, with false positives and false negatives creeping into the AI detections.
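  • By way of illustration only, the sequential localization described above may be sketched in Python as follows; the step functions (e.g., detect_vehicles, detect_persons, classify_seatbelt) are hypothetical stand-ins for the AI models in a workflow/pipeline:

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Detection:
    label: str       # e.g. "vehicle", "person", "seatbelt"
    crop: Any        # the localized image region forwarded to the next step

def run_pipeline(image: Any,
                 steps: List[Callable[[Any], List[Detection]]]) -> List[Detection]:
    """Run each work step on the regions localized by the previous step."""
    regions = [Detection(label="frame", crop=image)]
    for step in steps:
        next_regions: List[Detection] = []
        for region in regions:
            # each work step localizes part of the target information and
            # forwards only the relevant crops to the next work step
            next_regions.extend(step(region.crop))
        if not next_regions:
            return []        # nothing relevant found; stop early
        regions = next_regions
    return regions

# usage sketch: detections = run_pipeline(frame,
#     [detect_vehicles, detect_persons, classify_seatbelt])
```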
  • Machine Learning and AI methods for performing tasks such as object detection, object classification and segmentation aren't always perfectly accurate. In a production scenario, cases might arise where the data fed to the system for inference follows a distribution which doesn't produce perfect output. In such cases it is not affordable to give out wrong information, which can be critical.
  • Production scenarios can include generating vehicle violation information, monitoring infrastructure condition such as potholes on roads and relaying notifications, traffic monitoring and ground control, etc., which are extremely critical and can face plausible errors in different settings. Hence it is extremely important that an evaluation network is present to make sure that the data flowing out of the system is not erroneous.
  • the objective of the present invention is to provide an efficient, effective and reliable autonomous crowdsourced on-demand AI-powered stewardship network that incorporates the capabilities of LIFO or back-propagation in the AI networks for correcting inaccuracies in the neural networks' initial set of results, based on which further annotations are performed.
  • the present disclosure relates to artificial intelligence (AI) and neural networks. More specifically, it relates to a system and method of LIFO or backward propagation (back-propagation) in an autonomous hybrid crowdsourced on-demand AI-powered stewardship network.
  • An aspect of the present disclosure relates to artificial intelligence (AI).
  • the present disclosure pertains to an artificial intelligence based system and method for detecting relevant dynamic events and/or objects of interest, and for providing a workflow/pipeline to the specialists and auditors to provide annotations and train the neural network based on such annotations accordingly.
  • an electronic device configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms.
  • the system includes a non-transitory storage device having embodied therein one or more routines, and one or more processors coupled with the non-transitory storage device and operable to execute the one or more modules.
  • a scalable network system configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms.
  • the system includes one or more processors coupled to a storage device and operable to execute the one or more modules.
  • the one or more routines include an obtaining module which, when executed by the one or more processors, obtains, based on the one or more workflows/pipelines for problem verticals (wherein a workflow/pipeline comprises worksteps, a workstep being the smallest step using an AI and/or ML algorithm), at least a combination of dynamic events and/or objects of interest from the one or more cameras as explained above, wherein said combination of dynamic events and/or objects of interest comprises at least an image having one or more identifiable objects; a transmit module which, when executed by the one or more processors, transmits the obtained dynamic events and/or objects of interest to the one or more servers; and a receiving module, which receives the continuously updated dynamic events and/or objects of interest from the electronic mobile device remotely.
  • An electronic mobile device which displays the data received from the network and allows specialists or auditors to review the data and correct any wrong data, wherein the electronic mobile device comprises one or more tools to perform one or more operations using AI to annotate, draw, label and classify the received one or more dynamic events and/or objects of interest, and to transmit the annotated, drawn, labeled and/or classified objects, areas, dynamic events and/or objects of interest back to the network system for deep learning, re-training and AI inference improvement operations.
  • the annotation data from AI models are stored in a memory as an intermediate output, wherein the intermediate output is retrieved and sent for verification to specialists or auditors and saved back to the memory.
  • output from AI models closest to the event identification/faults/problems is sent first for verification, using bottom-up propagation in a backward paradigm in accordance with the requirements of the system.
  • the AI models are neural network models used for tasks involving segmentation, classification, character recognition, object detection, and identification. The improvement of the AI models including segmentation-based AI model is performed by annotation by a specialist or auditor on every incorrect result by the AI model, thereby allowing retraining of the AI models.
  • a technique of LIFO/backpropagation is developed to efficiently collect data for training underlying AI models in the workflows/pipelines while ensuring minimization of costs incurred using human intelligence to validate the AI responses, enabling improvement of AI systems in the shortest time, without the need to replace the AI model with a new AI model.
  • the validation of AI detections happens from the last step of the workflow/pipeline back to the first.
  • the data is sent in a forward cycle through the workflow/pipeline and then later on in a backward cycle.
  • the valid AI detections from the last step result in dynamic events and/or objects of interest and enable using AI systems in real-world scenarios.
  • the LIFO/backpropagation technique uses crowdsourced techniques to vastly improve the accuracy of AI system.
  • the backward validation cycle through the specialist workforce ensures that the step producing the incorrect detections is identified.
  • the labeling of the data point through the crowdsourced annotation network then provides the data point with the correct label for future training of that model.
  • This technique is thus similar to the method of training neural network models where the gradient of the network's response with respect to the ground truth is propagated from the last layer to the first layer. In this way, the LIFO/backpropagation technique can scale the training to not only one neural network but a host of models comprising the entire pipeline/workflow, and to multiple pipelines/workflows running simultaneously.
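  • The backward validation cycle may be outlined as the following minimal Python sketch; validate_with_specialist, annotate_with_specialist and training_db are hypothetical stand-ins for the crowdsourced annotation network and the label store, not the system's actual interfaces:

```python
def lifo_validate(work_steps, detections_per_step, training_db,
                  validate_with_specialist, annotate_with_specialist):
    """Validate AI detections from the last work step back to the first.

    detections_per_step[i] holds the AI outputs produced by work_steps[i]
    during the forward cycle; validation then runs in the backward cycle.
    """
    for i in reversed(range(len(work_steps))):
        for det in detections_per_step[i]:
            if validate_with_specialist(det):
                continue             # true positive: nothing to correct here
            # the work step that produced the incorrect detection is
            # identified, and the annotation network supplies the correct
            # label for future training of that step's model
            corrected_label = annotate_with_specialist(det)
            training_db.save(model_id=work_steps[i].model_id,
                             sample=det.data, label=corrected_label)
```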
  • the objective of this disclosure is to provide a method in a distributed network comprising one or more artificial intelligence (AI) algorithms, the network comprising a scalable infrastructure to execute one or more problem verticals by executing workflows/pipelines and worksteps that are included in the problem verticals.
  • AI artificial intelligence
  • One or more problem verticals define a problem to be solved or can be based on an industry type.
  • the worksteps define the sub-problems in the problem vertical.
  • the method incorporates the capabilities of neural networks as well as scalability for its operation.
  • a server system configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms, the server system comprising scalable infrastructure, which includes a non-transitory storage device having embodied therein one or more routines; and one or more processors coupled to the non-transitory storage device and operable to execute the one or more modules, wherein the one or more modules include: a data storage module, which stores the video or series of one or more images with the desired objects and dynamic events of interest in the form of raw data or verified data; a processing module for breaking down an incoming stream of data packets, which is stored as raw data, by processing it into smaller packets and marking data points; a transmitter module which, when executed by the one or more processors, transmits the smaller data packets to a network of users for verification of the objects or the obtained dynamic events and/or objects of interest from one or more servers; and a receiver module, which receives the verified dynamic events and/or objects of interest from electronic mobile devices remotely, from a user or the network of users, and stores them in the data storage module.
  • AI artificial intelligence
  • one or more servers are configured to perform a series of AI based inferences on the obtained combination of objects or the obtained dynamic events of interest and categorize them into data points.
  • Figure 1A illustrates the flow of data through a general AI workflow/pipeline containing one or more AI work steps working in sequential order to produce dynamic events and/or objects of interest, according to an embodiment of the present invention.
  • Figure 1B illustrates the flow of data through the lane change workflow/pipeline, according to an embodiment of the present invention.
  • Figure 2A illustrates the use of AI models on mobile electronic devices to assist specialists involved in the annotation process to obtain the labels for data, according to an embodiment of the present invention.
  • Figure 2B illustrates the use of AI models on mobile electronic devices to assist specialists involved in the annotation process to obtain the crowdsourced labels for data, according to an embodiment of the present invention.
  • Figure 3 illustrates the use of AI on mobile electronic devices to identify dynamic events and/or objects of interest and automate the process of data collection, according to an embodiment of the present invention.
  • Figure 4 illustrates the training system setup, wherein, the training manager service starts training models when triggered, according to an embodiment of the present invention.
  • Figure 5 illustrates the hybrid system of combining human and the machine intelligence, according to an embodiment of the present invention.
  • Figure 6 illustrates the use of Keda to auto-scale the AI workflow/pipeline processing Kubernetes pods to handle the amount of data coming into the server, according to an embodiment of the present invention.
  • Figure 7 illustrates the storage plan for OSB - One Step Button AI Retraining system, according to an embodiment of the present invention.
  • Figure 8A illustrates the flow of data in the system, according to an embodiment of the present invention.
  • Figure 8B illustrates the flow of data in the LIFO/backpropagation technique, according to an embodiment of the present invention.
  • Figure 8C illustrates the flow of data in the annotation network, according to an embodiment of the present invention.
  • Figure 9 illustrates the use of AI processing directly on mobile electronic devices to detect dynamic events and/or objects of interest and enable crowdsourced data collection, according to an embodiment of the present invention.
  • Figure 10 illustrates the flow of data through AI work steps to produce dynamic events and/or objects of interest and the validation and annotation processing on false AI detections resulting in training data for the AI models.
  • Figure 11 illustrates the AI Retraining Engine used for model training and retraining, according to an embodiment of the present invention.
  • Figure 12 illustrates the inference engine for AI processing, wherein the models generated from the training process are used, according to an embodiment of the present invention.
  • Figure 13 illustrates the weighted backpropagation technique, which is used to provide data importance weights to data samples, according to an embodiment of the present invention.
  • Figure 14 illustrates the flow of the Smart Data Collection, Annotation Network, LIFO/Backpropagation, and AI Training combined together, according to an embodiment of the present invention.
  • Figure 15 illustrates the use of AI processing on mobile electronic devices for smart data collection, according to an embodiment of the present invention.
  • Figure 16A illustrates the True Positive Detections by AI models, according to an embodiment of the present invention.
  • Figure 16B illustrates the False Positive Detections by AI models, according to an embodiment of the present invention.
  • Figure 16C illustrates the False Negative Detection by AI models, according to an embodiment of the present invention.
  • Figure 17A illustrates the flow of data through the workflow/pipeline for detecting seatbelt not worn violation events, showing the work steps filtering out irrelevant data at each step, to produce true events, according to an embodiment of the present invention.
  • Figure 17B illustrates the forward cycle through the workflows/pipelines comprising of work steps to detect dynamic events and/or objects of interest, according to an embodiment of the present invention.
  • Figure 17C illustrates the backward cycle through the workflows/pipelines comprising of work steps and event validation to collect the corrected data labels in database, according to an embodiment of the present invention.
  • Figure 18 illustrates the use of LIFO/backpropagation technique to gather corrected data labels in the database and automatically trigger training of AI models, according to an embodiment of the present invention.
  • Figure 19A illustrates the use of optical flow to not collect data when no movement is detected in the scenes, in the smart data collection process, according to an embodiment of the present invention.
  • Figure 19B illustrates the use of optical flow to collect data when movement is detected in the scenes for the smart data collection process, according to an embodiment of the present invention.
  • Figure 20 illustrates the use of different AI models downloaded on mobile electronic devices over different zones or geographical regions, according to an embodiment of the present invention.
  • Figure 21 illustrates the sampling process of AI models on mobile electronic devices for smart data collection, according to an embodiment of the present invention.
  • Figure 22 illustrates the use of interpolation techniques to determine object locations over video frames, according to an embodiment of the present invention.
  • Figure 23 illustrates the AI Retraining process, which produces models for different production environments in one go, according to an embodiment of the present invention.

DETAILED DESCRIPTION OF THE DRAWINGS

  • The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure.
  • the present disclosure relates to artificial intelligence (AI) and neural networks. More specifically, it relates to a system and method of LIFO/backpropagation in an autonomous hybrid crowdsourced on-demand AI-powered stewardship network.
  • AI Artificial Intelligence
  • An electronic network configured to verify, retrain, and improve the accuracy of the one or more Artificial Intelligence (AI) models in a pipeline in the shortest time frame, using evaluation of the outputs of the AI models starting with the last AI model output in the pipeline and going backwards towards the output of the first AI model in the pipeline, wherein the said electronic network comprises: a non-transitory storage having embodied therein one or more routines; one or more processors coupled to the non-transitory storage and operable to execute the one or more modules, wherein the one or more modules include: a transmitter module which, when executed by the one or more processors, transmits the obtained combination of objects or the obtained dynamic events of interest to one or more electronic mobile devices; a receiving module, which receives the continuously updated objects or dynamic events of interest from the electronic mobile device remotely; an electronic mobile device, which displays the data received from the network and allows specialists or auditors to review the data and correct any wrong data, wherein the electronic mobile device comprises one or more tools to perform one or more operations using AI
  • a specialized graphical processing unit to execute one or more AI models in a pipeline and one or more pipelines to detect dynamic events and/or objects of interest for one or more problem verticals.
  • the annotation data from AI models are stored in a memory as an intermediate output, wherein the intermediate output is retrieved and sent for verification to specialists or auditors and saved back to the memory.
  • the output from AI models closest to the event identification/faults/problems is sent first for verification using bottom-up propagation in accordance with the requirements of the system.
  • the dynamic events of interest and/or objects of interest are related to detection of relevant information or object in a vertical of a workflow/pipeline.
  • the AI models are neural network models used for tasks involving segmentation, classification, character recognition, object detection, and identification
  • the improvement of the segmentation based AI model is performed by annotation by a specialist or auditor on every incorrect result by the AI model, thereby allowing retraining of the AI models.
  • the trained model outputs are tested with previously trained AI models in A/B testing.
  • the retraining of the models is performed using the teacher-student training technique, automated hyperparameter tuning, and data-sampling weighting, resulting in faster and more accurate models.
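  • The teacher-student retraining mentioned above may be illustrated with a standard knowledge-distillation loss; the following PyTorch sketch shows the generic formulation, not the disclosure's specific implementation:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Blend soft teacher targets with hard ground-truth targets."""
    # soft targets transfer the large teacher model's learned knowledge
    soft = F.kl_div(F.log_softmax(student_logits / temperature, dim=1),
                    F.softmax(teacher_logits / temperature, dim=1),
                    reduction="batchmean") * (temperature ** 2)
    # hard targets keep the student anchored to the annotated labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```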
  • a method to verify, retrain, and improve the accuracy of artificial neural network (AI) based systems for solving various problem verticals in a workflow/pipeline in the shortest time frame using LIFO/backward propagation, comprising: optimization of the retraining of AI models based on proximity to end results, wherein the end results are detection of faults or problems in the closest/last vertical, composed of a plurality of AI models; a crowdsourcing network wherein specialists or auditors and data collectors work in coherence to achieve dynamic events of interest and retraining of AI models, wherein the dynamic events of interest are a priority problem/problem vertical defined by the design requirements of the system; wherein the detections incorrectly marked by artificial intelligence engines are sent for annotation by specialists and auditors for AI retraining and deployment.
  • AI artificial neural network
  • An electronic mobile device to verify, retrain, and improve the accuracy of an artificial neural network (AI) based system comprises: one or more storage devices having embodied therein one or more routines performing AI processes; one or more processors coupled to the one or more storage devices and operable to execute the one or more modules, wherein the one or more modules include: a data capture module to identify and automatically capture the images, videos or regions of interest related to a problem vertical according to pre-defined rules; a receiving module, which receives the continuously updated objects and/or the dynamic events of interest on the electronic mobile device from the capture module; a selection module to select the received images or videos on the electronic mobile device related to the problem vertical; an annotation module to let a user annotate and label the selected images or videos on the electronic mobile device; a data storage module, which stores the annotated images or video or series of one or more images or videos with the desired objects and/or the dynamic events of interest; a transmitter module which, when executed by the one or more processors, transmits the annotated or
  • the electronic mobile device utilizes a camera and other available sensors to trigger data gathering after identifying dynamic events and/or objects of interest as required by the problem verticals.
  • the electronic mobile device uses one or more AI models in different parallel and sequential combinations to determine dynamic events and/or objects of interest.
  • the electronic mobile device uses Optical Flow to determine if the scene is in motion before the scene is processed using AI processing, optimizing the use of resources and AI inferencing for gathering the needed information for the problem verticals of detecting dynamic events and/or objects of interest.
  • the electronic mobile device uses different AI models that are downloaded in different geographical locations and identifies different dynamic events and/or objects of interest for problem verticals in these geographical zones
  • AI processing in electronic mobile device is used to enhance the annotations of the specialists and auditors by snap-fitting the drawn bounding boxes to the dynamic events and/or objects of interest, reducing the manual effort for the specialists and auditors in the annotation process.
  • AI processing in electronic mobile device is used as a first pass annotator and the human annotation network is used to correct the AI annotations, reducing the manual effort for the specialists and auditors in the annotation process.
  • the data collected using AI processing in electronic mobile device feeds to the workflows/pipelines in the system for the problem verticals, and the annotations enable AI model retraining.
  • the data collectors and specialists or auditors are divided into classes depending upon the quality and aptness of the data and annotations they provide, providing a measure of importance for the data to be processed and enabling optimization of resources for processing of AI model inference in the one or more pipelines.
  • the electronic mobile device utilizes one or more image sensors which may be front and back camera sensors, present in the electronic mobile devices to gather data after identifying relevant information of interest.
  • the temperature of electronic mobile devices is monitored to control the sampling rate for AI processing, thus optimizing the performance over the electronic mobile device resources.
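  • A minimal sketch of such temperature-based throttling follows; the read_device_temperature helper and the threshold values are illustrative assumptions:

```python
def ai_sampling_interval(read_device_temperature,
                         base_interval_s: float = 0.5) -> float:
    """Lengthen the interval between AI inferences as the device heats up."""
    temp_c = read_device_temperature()   # platform-specific sensor read
    if temp_c < 35.0:
        return base_interval_s           # normal sampling rate
    if temp_c < 42.0:
        return base_interval_s * 2       # throttle moderately
    return base_interval_s * 8           # near thermal limit: sample rarely
```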
  • An aspect of the present disclosure relates to artificial intelligence (AI).
  • AI artificial intelligence
  • the present disclosure pertains to an artificial intelligence based system and method for detecting relevant dynamic events and/or objects of interest, and for providing a workflow/pipeline to the specialists and auditors to provide annotations and train the neural network based on such annotations accordingly.
  • AI artificial intelligence
  • ML machine learning
  • the system includes one or more processors coupled to the non-transitory storage device and operable to execute the one or more modules.
  • the objective of this disclosure is to provide a method in a distributed network comprising one or more artificial intelligence (AI) algorithms, the network comprising a scalable infrastructure to execute one or more problem verticals by executing workflows/pipelines and worksteps that are included in the problem verticals.
  • AI artificial intelligence
  • One or more problem verticals define a problem to be solved or can be based on an industry type.
  • the worksteps define the sub-problems in the problem vertical.
  • the method incorporates the capabilities of neural networks as well as scalability for its operation.
  • AI artificial intelligence
  • ML machine learning
  • the system includes one or more processors coupled to a storage device and operable to execute the one or more modules.
  • the one or more routines include an obtaining/data collecting module which, when executed by the one or more processors, collects, based on the one or more AI and/or ML algorithms, at least a combination of objects and/or dynamic events of interest from the one or more cameras as explained above, wherein said combination of objects or dynamic events of interest comprises at least an image having one or more identifiable objects, and a transmit module which, when executed by the one or more processors, transmits the obtained combination of objects or the obtained dynamic events of interest to the one or more servers.
  • the obtained images are associated with problem verticals, pipelines/workflows and worksteps and assigned to the network of specialists and workers for verification and annotation.
  • a server system configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms, the server system comprising scalable infrastructure, which includes a non-transitory storage device having embodied therein one or more routines; and one or more processors coupled to the non-transitory storage device and operable to execute the one or more modules, wherein the one or more modules include: a data storage module, which stores the video or series of one or more images with the desired objects and dynamic events of interest in the form of raw data or verified data; a processing module for breaking down an incoming stream of data packets, which is stored as raw data, by processing it into smaller packets and marking data points; a transmitter module which, when executed by the one or more processors, transmits the smaller data packets to a network of users for verification of the objects or the obtained dynamic events of interest from one or more servers; and a receiver module, which receives the verified objects or dynamic events of interest from electronic mobile devices remotely, from a user or the network of users, and stores them in the data storage module.
  • AI artificial intelligence
  • ML machine learning
  • optical flow or optic flow is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and a scene.
  • Optical flow can also be defined as the distribution of apparent velocities of movement of brightness pattern in an image.
  • the field of optical flow has made significant progress by focusing on improving numerical accuracy on standard benchmarks. Flow is seen as a source of input for tracking, video segmentation, depth estimation, frame interpolation, and many other problems.
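  • As one illustrative realization, dense optical flow (here OpenCV's Farneback method, a common choice) can gate processing on scene motion; the motion threshold below is an assumed value:

```python
import cv2
import numpy as np

def scene_is_moving(prev_gray: np.ndarray, curr_gray: np.ndarray,
                    motion_threshold: float = 0.5) -> bool:
    """Return True when the average per-pixel motion exceeds the threshold."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5,
                                        poly_sigma=1.2, flags=0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel motion magnitude
    return float(magnitude.mean()) > motion_threshold
```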
  • the use of AI in this process mimics the human intelligence of deciding when to capture the data, thus automating the whole process.
  • the intelligence in such cases can be of the form of multiple AI models trying to identify information of relevance such as a person in a vehicle.
  • This detection process is represented by workflows/pipelines comprising steps that are downloaded on one or more mobile devices from the backend servers.
  • This approach allows for data collection in case of highly specific constraints on the dynamic events and/or objects of interest, such as vehicles that should be of a specific brand and more.
  • the intelligence in such cases also extends to using Optical Flow for detecting movement in scenes, allowing the capture of relevant data.
  • the annotation network covers various annotation modes, such as a 'Multi-Crop' mode allowing specialists to draw bounding boxes over objects in a frame, a 'Classify' mode allowing them to input a class for an image, and a 'Paint' mode allowing them to draw segmentation masks for objects of interest. These modes provide labels in the correct format to train AI models.
  • the deployment of an annotation network in a mobile application on readily available smartphones allows for labeling data with ease and highly scaling the same. Crowdsourcing the labeling procedure also enables verification of labels by vetting them through multiple specialists or workers, ensuring the quality of labels.
  • However, human capital is expensive, and asking specialists to label each object on every image would be cost-intensive.
  • an annotation module is provided in the scalable crowdsourced network system to perform one or more of operations on the images using AI.
  • the annotation tool comprises capability to annotate, draw, label and classify the received one or more dynamic events and/or objects of interest from the remote server.
  • the present disclosure further relates to artificial intelligence (AI) and neural networks for detecting relevant dynamic events and/or objects of interest for different problem verticals.
  • AI artificial intelligence
  • the tasks of detecting motion violations such as in traffic management and other relevant dynamic events like sign detection, crop monitoring, infrastructure monitoring and more can be achieved by mimicking human intelligence through a host of AI models used in a parallel and sequential combinations in workflows/pipelines.
  • LIFO/Backpropagation is a technique developed to efficiently collect data for training underlying AI models in the workflows/pipelines while ensuring minimization of costs incurred using human intelligence to validate the AI responses.
  • An aspect of the invention is the validation of AI detections from the last step of the workflow/pipeline back to the first.
  • the data is sent in a forward cycle through the workflow/pipeline and then later on in a backward cycle.
  • the backward validation cycle through the specialist workforce ensures that the step producing the incorrect detections is identified.
  • the labeling of the data point through the crowd-sourced annotation network then provides the data point with the correct label for future training of that model.
  • This technique is thus similar to the method of training neural network models where the gradient of the network's response with respect to the ground truth is propagated from the last layer to the first layer.
  • the LIFO/backpropagation technique developed can scale the training to not only one neural network but a host of models comprising the entire workflow/pipeline and multiple workflows/pipelines running simultaneously.
  • the LIFO/backpropagation technique prioritizes the validation of the frames/data over which AI has produced annotations, to trim out the false positives first, as compared to the data where AI didn't detect dynamic events and/or objects of interest, i.e., possible cases of false negatives.
  • the system of OSB - One Step Button trains the models using the data from their respective work steps. This training procedure can be triggered manually on demand. Training routines can also be scheduled using the data collected from the LIFO/backpropagation technique, such as training every week or when the collected data meets threshold restrictions, thus also allowing for automated retraining of the AI models.
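  • Such an automated trigger may be sketched as follows; the training_db handle and the weekly/1000-sample thresholds are illustrative assumptions rather than values from the disclosure:

```python
from datetime import datetime, timedelta

def should_trigger_training(training_db, model_id: str,
                            min_new_samples: int = 1000,
                            max_age: timedelta = timedelta(days=7)) -> bool:
    """Retrain when enough corrected labels accumulate or a week has passed."""
    new_samples = training_db.count_samples_since_last_training(model_id)
    last_trained = training_db.last_training_time(model_id)
    return (new_samples >= min_new_samples
            or datetime.utcnow() - last_trained >= max_age)
```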
  • the training of each model uses an overall Gradient Descent optimization enforcing the AI models to correct their erroneous predictions by matching with the labels gathered through the annotation network.
  • triggering the training of an AI model starts the process by splitting the collected data into training and validation splits, wherein the performance of the model on the validation split is taken as a measure of the model's training progress.
  • training with different hyperparameters such as model depth, activation functions, model architecture, number of epochs, batch size and more allow for experimenting with the model and finding the settings which reach a minimum in the loss function curve.
  • K-fold validation splits the data into K sets, where the model is trained on all sets but one, using the held-out set for validation, and this procedure is repeated K times, selecting a different set each time.
  • Hyperparameter tuning allows experimenting with model parameters which help find the model having the lowest minima in the loss curve on the data and effectively train the AI model.
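  • Combining K-fold validation with a hyperparameter search may look like the following scikit-learn sketch, where build_and_evaluate is a hypothetical stand-in for one training-and-scoring run and the grid values are assumptions:

```python
from itertools import product
import numpy as np
from sklearn.model_selection import KFold

def tune(X, y, build_and_evaluate, k: int = 5):
    """Grid-search hyperparameters, scoring each setting by K-fold validation."""
    grid = {"batch_size": [16, 32, 64], "epochs": [10, 20]}
    best_score, best_params = -np.inf, None
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        scores = []
        for train_idx, val_idx in KFold(n_splits=k, shuffle=True).split(X):
            # train on K-1 folds, validate on the held-out fold
            scores.append(build_and_evaluate(X[train_idx], y[train_idx],
                                             X[val_idx], y[val_idx], **params))
        mean = float(np.mean(scores))
        if mean > best_score:
            best_score, best_params = mean, params
    return best_params, best_score
```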
  • the training routine triggers and hyperparameter tuning provide an automated way of training AI models. These AI models are then uploaded to the server for management and deployment in production systems.
  • the LIFO/backpropagation technique coupled with the crowdsourced data capturing and annotation network provides the training data for the AI models, and the automated training triggers then result in trained AI models. An important decision then becomes distinguishing between two training versions. For this analysis, a stand-out test set is developed and the performance of the models is analyzed on this test set.
  • the model which has the highest performance is then deployed to the production system.
  • the A/B testing module is then triggered on-demand and once after every training request is completed to identify the model with the highest accuracy. Completing this process of training and A/B testing thus ensures the continuous delivery of an improved model in the production system. This comes as a CI/CD pipeline for the AI models, reducing the dependency on experts.
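  • The A/B decision reduces to comparing the candidate and current model versions on the stand-out test set, as in this minimal sketch; evaluate is a hypothetical accuracy function:

```python
def select_production_model(candidate_model, current_model, test_set, evaluate):
    """Deploy whichever model version scores higher on the held-out test set."""
    candidate_score = evaluate(candidate_model, test_set)
    current_score = evaluate(current_model, test_set)
    return candidate_model if candidate_score > current_score else current_model
```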
  • the automated setup thus reduces the cost of developing AI workflows/pipelines for various problem verticals as taken in this disclosure.
  • a marketplace of data is available to end users to subscribe for different dynamic events and/or objects of interest for varied problem verticals, and wherein the data collectors can trigger data collection for these dynamic events and/or objects of interest on demand using the scout mode in the electronic mobile device.
  • the scout mode shows various models/events of interest for the users to select from. The selected events are then automatically triggered upon detection by the AI models on the mobile device or are manually triggered by users / data collectors.
  • the benefit of the scout mode is to allow a human to select the data points / events of interest that they are more comfortable in capturing or events of interest that the particular users are finding more readily in their environments.
  • the events of interest are marked with a label / meta tag on the video / series of images for faster distribution to various AI models through the network or the mobile devices.
  • models with many layers or very large depth greatly outperform smaller models. While the greater accuracy would help produce correct dynamic events and/or objects of interest, reducing the manual verification workload, the huge size of these models increases the latency in producing AI detections. This reduces the throughput greatly, thus not meeting the objectives of speed. In contrast, the smaller models have lower accuracy but a much higher throughput. Thus there is a need to improve the accuracy of smaller models to provide both high speed and high accuracy.
  • this allows having a turbo mode on the production system when the load is high to optimize for speedy AI detections.
  • utilizing the weights of a previously trained model acts as a good initialization point for the network, allowing the AI to utilize and transfer the previously trained knowledge and update it with new information. This overcomes the need to store all the data from the first training cycle to the present one, reducing the costs associated with the training system and achieving AI training at minimized cost.
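  • Warm-starting a training cycle from previously trained weights may be sketched as follows in PyTorch; build_model and the checkpoint path are illustrative assumptions:

```python
import torch

def warm_start(build_model, checkpoint_path: str = "weights/previous_cycle.pt"):
    """Initialize a new training cycle from the previous cycle's weights."""
    model = build_model()
    state = torch.load(checkpoint_path, map_location="cpu")
    # strict=False reuses every layer whose shape still matches, so the
    # previously trained knowledge is transferred and updated with new data
    model.load_state_dict(state, strict=False)
    return model
```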
  • This weighted training technique thus allows the model to tune itself for the incoming data and get the highest improvements from the same.
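  • One common way to realize such importance-weighted training is weighted sampling, as in this PyTorch sketch; dataset and importance_weights (one weight per sample) are assumed inputs:

```python
from torch.utils.data import DataLoader, WeightedRandomSampler

def weighted_loader(dataset, importance_weights, batch_size: int = 32):
    """Draw high-importance samples more often during training."""
    sampler = WeightedRandomSampler(weights=importance_weights,
                                    num_samples=len(importance_weights),
                                    replacement=True)
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler)
```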
  • these techniques work together, resulting in an automated manner of training AI models, boosting model performance across training cycles and delivering the models to production systems, greatly reducing the manual workload and expert intervention.
  • the One-Step-Button system trains models for varied production environments in one go, including quantized models for electronic mobile devices, small student models for edge devices having limited compute, and complex, highly accurate models for server systems.
  • the figure 1A illustrates the flow of data through a general AI workflow/pipeline (1000) containing one or more AI work steps (1001, 1002, 1004, and 1006), working in a sequential order to produce relevant dynamic events and/or objects of interest (1008).
  • Each work step (1001, 1002, 1004, and 1006) communicates with the C3 workflow (1000) i.e. it receives data for processing from the C3 workflow (1000) and sends back the AI annotations (1003, 1005, and 1007).
  • the last work step (1006) produces an event (1008) upon event validation (1009).
  • the AI detections (1003, 1005, and 1007) are validated (1011, 1013, and 1015) from the last step (1006) to the first (1002). If the event is deemed valid according to a consensus between specialists in the event validation (1009) step, it is produced to an event map (1010) displaying the details. If the specialists deem that the AI was incorrect, then the specialists are asked to correct (1012, 1014, and 1016) the AI annotations (1003, 1005, and 1007).
  • the LIFO/Backpropagation (1017) technique thus allows the AI workflow/pipeline (1000) to be used in the production environment to detect the dynamic events and/or objects of interest (1008).
  • the figure 1B illustrates the flow of data through the lane change workflow/pipeline (1202).
  • the goal of the workflow/pipeline (1202) is to detect the violation event (1207) when a vehicle has missed giving an indicator while changing the driving lane.
  • the first step uses a Vehicle Detector and Tracker (1201) model to find out distinct vehicles in the data.
  • the records created by the workflow/pipeline (1202) for these vehicles are then sent in a forward cycle to the Lane Detector (1203) step. This step generates the result (1204) on whether the vehicle has changed driving lanes or not, by mapping the position of vehicles with respect to the lane.
  • the next step of Indicator Detection and Classification (1205) first detects the taillights in the vehicles and then classifies their state, producing indicator sequence result GIF (1206). Upon detecting the violation, the AI workflow/pipeline generates an event (1207) that is sent for validation (1208). If the event (1207) is deemed valid after the validation step (1208), it is produced to event map (1209). Using the LIFO/backpropagation (1200) technique, the AI detections (1210, 1212) are validated (1211, 1213) and corrected, in case of wrong AI detections.
  • the figure 2A illustrates the use of AI models (2103) to assist specialists involved in the annotation process on the electronic mobile devices (2102) to obtain the labels for data.
  • the AI model (2103) is able to snap-fit to the edges (2104). This allows the specialist to draw a rough bounding box (2101) over the object (2100), without having to painstakingly mark a tight bounding box (2104). This process reduces the manual effort involved in the labeling procedure.
  • the figure 2B illustrates the use of AI models (2202) on mobile electronic devices (2201) to assist specialists involved in the annotation process to obtain the crowdsourced labels for data.
  • the AI models are downloaded on the devices (2201) and run directly on the device (2201) to produce annotations (2204, 2205) on the images (2200).
  • These AI annotations (2204, 2205) are shown on the output image (2203) to specialists or auditors to verify and correct. This reduces the effort of human capital required and speeds up the annotation process.
  • the figure 3 illustrates the use of AI workflows/pipelines, such as the vehicle detector workflow (3100), on mobile electronic devices to identify dynamic events and/or objects of interest, in this case, vehicles and trucks (3107, 3112), and to automate the process of data collection (3113).
  • the said figure shows the example of AI detecting vehicles, where specifically the AI is looking for vehicles (3107) and trucks (3112) with visible license plates.
  • These models (3101, 3104, and 3109) are downloaded directly from the server on the mobile device.
  • the backend server also provides some rules (3102, 3105, and 3110) on the detections about the height, width, and AI detection confidence (3103, 3108, 3106, and 3111). These rules enable filtering of data and pass only valid needed data.
  • a vehicle is detected and post-processing rules (3102) are applied to it.
  • the valid results are passed to the license plate detector model (3104, 3109).
  • when the detections pass the rules (3105, 3110) for the second model, a trigger happens, allowing the mobile device to record data (3113).
  • AI models (3101, 3104, and 3109) are able to identify relevant dynamic events and/or objects, in this case, vehicles and trucks (3107, 3112) and automate the process of data collection (3113).
  • Scaling the application yields crowdsourced relevant data for the AI workflows/pipelines for various problem verticals.
  • the figure 4 illustrates the training system setup, wherein, the training manager service (4102) starts training models when triggered.
  • the Manager Service (4102) is responsible for spawning the training containers/pods (4104, 4105) and communicating to the workflows on the server (4100) about the training status. Using Kubernetes it manages the hardware and schedules training pods (4104, 4105).
  • a data worker pod (4103) runs which downloads the data from the DB (4101) collected through the LIFO/backpropagation technique disclosed in this document.
  • the data worker pod (4103) receives the annotations from the server in JSON format and is responsible for transforming them into the correct format required for training the models, such as PNG masks for segmentation-based models, or text files containing the bounding box coordinates for detection models.
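  • As an illustrative sketch of this conversion for detection models, the following assumes a hypothetical annotation JSON layout with normalized bounding boxes; the system's actual format is not specified here:

```python
import json
from pathlib import Path

def json_to_detection_labels(annotation_file: str, out_dir: str) -> None:
    """Convert one assumed-format annotation JSON into a detection label file.

    Assumed layout: {"image": "...", "boxes": [{"class_id": 0,
    "x": ..., "y": ..., "w": ..., "h": ...}, ...]} with normalized coordinates.
    """
    record = json.loads(Path(annotation_file).read_text())
    lines = [f'{b["class_id"]} {b["x"]} {b["y"]} {b["w"]} {b["h"]}'
             for b in record["boxes"]]
    stem = Path(record["image"]).stem
    # one text file of bounding-box coordinates per image, as consumed by
    # common detection trainers
    Path(out_dir, f"{stem}.txt").write_text("\n".join(lines))
```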
  • the Manager Service (4102) then spawns the training container/pod (4104, 4105) based on the model type, such as a segmentation trainer or a detection trainer.
  • the trainer module uses K-fold splits of data to train, test and tune the models, to determine parameters for high performing models.
  • the training container/pod (4104, 4105) uses hyperparameter tuning and K-fold validation to result in the best performing model and communicates back to the manager service (4102).
  • the Manager Service (4102) is able to provide real-time updates to the main workflow server (4100) and log the training status.
  • the trained model is then saved in persistent storage (4107) and this information is propagated back to the main workflow server (4100).
  • after A/B testing (4106), the final model is deployed in production to be used again in the workflows/pipelines.
  • the figure 5 illustrates the hybrid system of combining human and machine intelligence. Detection of relevant dynamic events and/or objects of interest are performed using AI workflows/pipelines (5100), where a workflow includes one or more AI models working in different parallel and sequential combinations, localizing information from one step to another.
  • the detected events are first validated (5102) by specialists (5101) to filter out erroneous AI events from coming to live event maps (5103).
  • the validated events are sent out to live event maps (5103) for utilization by end customers.
  • This process automates the use of a crowdsourced annotation network to produce dynamic events and/or objects of interest.
  • the figure 6 illustrates the use of Keda to auto-scale the AI workflow/pipeline (6103) processing Kubernetes pods to handle the amount of data coming into the server.
  • the server obtains the data from various sources (6100). These sources (6100) are a part of the crowdsourced data collection system.
  • the server as denoted by the C3 system (6101), then sends the data for processing to AI workflows/pipelines using Kafka topics (6102).
  • the containers/work step pods (6104, 6106, and 6108) are kept to a minimum replication of 1.
  • the Keda tool, which monitors the Kafka queues (6102, 6105, and 6107), scales the AI work step pods (6109, 6110, and 6111) to have more replicas. This enables the system to keep processing incoming data and producing dynamic events and/or objects of interest.
  • when the load subsides, Keda auto-scales the pods (6109, 6110, 6111) back to their original setting of 1 replica (6104, 6106, 6108). This procedure thus automates the handling of different load scenarios, allowing the system to meet the scale of the crowdsourced data coming in at different times through the data sources (6100).
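  • The scaling decision that Keda automates by monitoring Kafka consumer lag can be summarized by a simple rule; this Python sketch with illustrative numbers is a conceptual outline only:

```python
def desired_replicas(kafka_lag: int, messages_per_pod: int = 1000,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Scale work-step pods with queue depth, back down to 1 when idle."""
    needed = max(min_replicas, -(-kafka_lag // messages_per_pod))  # ceiling division
    return min(needed, max_replicas)
```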
  • the figure 7 illustrates the storage plan for OSB - One Step Button AI Retraining system.
  • the storage space (7100) is divided into permanent storage (7114, 7119) and temporary storage space (7101).
  • the temporary storage space (7101) is linked with the training request.
  • once the linked training request completes, the temporary storage (7101) is purged.
  • After receiving a training request, the data worker first downloads the annotations (7117, 7118). These data points are divided into train data (7104, 7109) and test data (7105, 7110) based on train-test ratios (7102, 7103) in the temporary storage (7101).
  • the data worker also splits the annotations for them (7107, 7108, 7112, and 7113).
  • the training sample points or images (7116) are downloaded (7115) in the permanent storage (7114) which follows a cycling policy. Any files which are prepared while training such as interim model weights (7106, 7111), training logs are kept in temporary storage (7101). Once the training process completes, the final model weights (7124, 7126) and training summary (7125, 7127) are put in permanent storage space (7119).
  • the Model folder (7120) stores the model weights for all workflows (7122, 7123).
  • the model config (7121) stores the model parameters (7128). This eases the manageability of models and enables easy deployment of the final trained model weights (7124, 7126).
  • the figure 8A illustrates the flow of data in the system.
  • data records (8100) are created and sent to workflows/pipelines for annotation. When AI processing is enabled, these data records (8100) enter a pending state (8101) to be worked upon by AI models. The data records (8100) are moved to the 'AI annotation in progress' state (8103) when the AI system starts working on them. If, after a step, the human-enabled flag is set, then these detections are sent for validation (8105) to the crowdsourced annotation network. Otherwise, the AI detections from the step are sent to annotation pass-through (8104), from which the server sends the data records (8100) to the next step.
  • the data records (8100) are directly sent to the annotation network for processing with the state of the data records (8100) set to annotation pending (8102).
  • the crowd-sourced annotation network is used in conjunction with AI processing in a hybrid manner to produce dynamic events and/or objects of interest.
  • the figure 8B illustrates the flow of data records (8200) in the LIFO/backpropagation technique.
  • specialists validate (8202) the AI model annotations (8201). This step is necessary to filter out the erroneous AI results.
  • the correct results are used to produce dynamic events and/or objects of interest (8205), whereas incorrect AI results are sent to specialists to annotate (8203) and provide correct labels for the data records.
  • the figure 8C illustrates the flow of data records in the annotation network.
  • the annotation network is divided amongst the specialists and auditors.
  • auditors are human annotators who have undergone training for annotation review and have previously demonstrated their work as specialists.
  • a data record that has annotation pending (8300) is first given to specialists for the annotation (8301). This record is then validated (8302) for which specialists provide their judgment (8303).
  • the records processed by specialists (8304) go for a manager review (8305) and/or validation.
  • the records on which manager validation is pending (8308) are then sent for manager judgment (8309). If the consensus of the specialists is rejected by the auditors, the record is marked inconclusive (8306).
  • if the manager approves, the record shifts to a manager-approved state (8307) and is used as training data in the system.
  • the records for which the specialist consensus is rejected by auditors are first given the option of adding more specialists (8310); if that option is not taken, the records are annotated by the auditors themselves (8311).
  • the data records which are pending to be annotated by auditors (8312) are then assigned to auditors for annotation (8313) and are validated by other auditors (8314).
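  • For illustration, the record states and the manager/auditor routing of figures 8A to 8C can be sketched as a small state machine; the state names and transition function below are hypothetical, not the system's actual schema.

```python
from enum import Enum, auto

class RecordState(Enum):
    ANNOTATION_PENDING = auto()       # record awaits specialists (8300)
    SPECIALIST_ANNOTATED = auto()     # specialist annotation done (8301/8304)
    MANAGER_REVIEW = auto()           # manager review/validation (8305/8308)
    MANAGER_APPROVED = auto()         # usable as training data (8307)
    INCONCLUSIVE = auto()             # consensus rejected, no resolution (8306)
    AUDITOR_PENDING = auto()          # escalated to auditors (8312)

def after_manager_review(consensus_accepted: bool, add_specialists: bool) -> RecordState:
    """Route a record once the specialist consensus has been judged."""
    if consensus_accepted:
        return RecordState.MANAGER_APPROVED    # record becomes training data
    if add_specialists:
        return RecordState.ANNOTATION_PENDING  # widen the specialist pool
    return RecordState.AUDITOR_PENDING         # auditors annotate it themselves
```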
  • the figure 9 illustrates the use of AI processing directly on mobile electronic devices (9101, 9104) to detect objects of interest, in this example billboards (9100, 9103), and enable crowdsourced data collection by data collectors (9102, 9105) in varied locations. In this manner, the application can scale to capture relevant data for AI processing.
  • the figure 10 illustrates the flow of data in the AI general workflows/pipelines and the use of the LIFO/backpropagation technique.
  • AI workflows/pipelines comprise one or more AI work steps (10100, 10101, 10102), wherein, the input data is passed to the first step (10100).
  • the positive AI detections (10115, 10116, and 10117) are propagated from one step to the next in the forward pass.
  • the last step (10102) of the workflow/pipeline produces AI detected events (10103).
  • These AI detected events (10103) are then passed to an event validation step (10104) conducted by a hybrid system. This step filters out the erroneous detections, i.e., the False Positive AI Detections (10121, 10122, and 10123).
  • the True Positive AI Detections as validated by the specialists or auditors are then released as dynamic events and/or objects of interest (10105).
  • the False Positives AI Detections (10121, 10122, and 10123) are then passed to the crowdsourced annotation network for validation (10106, 10107, and 10108) and annotation (10109, 10110, and 10111) of AI labels using the LIFO/backpropagation technique.
  • the data now flows from the last step (10102) to the first step (10100) ensuring correction of AI detections at the earliest level.
  • the corrected labels from the crowdsourced annotations are then used for training the models and are stored as the training data (10112, 10113, and 10114).
  • the False Positives (10121, 10122, and 10123) from each step are given a higher priority and are sent for correction first to the annotation network.
  • the Negative Detections (10118, 10119, 10120) here are the cases where the AI didn't detect any objects in the data record. This is thus the sum of False Negatives and True Negatives.
  • these negative detections (10118, 10119, and 10120) are passed to the crowdsourced annotation network for correction.
  • the corrections supplement the training data (10112, 10113, and 10114) for the models.
  • the LIFO/backpropagation technique is used to collect training data (10112, 10113, and 10114) while also generating dynamic events and/or objects of interest (10105) in the workflows/pipelines.
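  • As an illustrative sketch (not the system's actual code), the forward and backward cycles described above can be expressed as follows; the work step, validation, and annotation callables are assumed interfaces.

```python
def run_pipeline(records, steps, validate, annotate):
    """Forward cycle through AI work steps, then LIFO-ordered correction.

    `steps`: ordered list of AI work step functions, each returning
    (positive_detections, negative_detections). `validate(d)` stands for
    the specialist/auditor judgment; `annotate(d)` returns a corrected label.
    """
    step_outputs, data = [], records
    for step in steps:                        # forward cycle: first -> last
        positives, negatives = step(data)
        step_outputs.append((positives, negatives))
        data = positives                      # only positives flow onward

    events = step_outputs[-1][0]              # last step emits prospective events
    true_events = [e for e in events if validate(e)]

    training_data = []                        # backward cycle: last -> first
    for positives, negatives in reversed(step_outputs):
        false_positives = [p for p in positives if not validate(p)]
        # false positives are corrected first, then the negatives
        training_data += [annotate(d) for d in false_positives + negatives]
    return true_events, training_data
```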
  • the figure 11 illustrates the AI Retraining Engine used for model training in cycles. This represents the advanced techniques used to train the models, specifically the use of Knowledge Distillation (11102) and data sampling through weighted backpropagation (11105). By mimicking the teacher model (11101, 11107), the student model (11103, 11109) learns the important features, resulting in a performance boost. This enables the use of a faster, smaller model (11103, 11109) to provide speedy AI detections at high accuracy.
  • the model utilizes the past weights as the initialization and experiments with different parameters, such as batch size and training schedulers, to determine the settings leading to the best performance. This permits saving only the data for the current training cycle and reduces the training time and costs associated with it. This process automates the model experimenting and training process, avoiding the need for AI expert intervention.
  • the data weight sampling is performed using weighted backpropagation (11105), where weights are given to each data point separately. This allows the training process to give more weight to the data points that have the biggest impact on the model's performance.
  • the first cycle's data (11100) is used to train the teacher model (11101) and then the student model (11103) through Knowledge Distillation (11102).
  • the models in the second cycle use transfer learning and hyperparameter tuning (11104, 11110) on the models from cycle 1, using them as the base for learning in cycle 2. This process improves the models' performance in this next cycle.
  • the AI Retraining Engine is thus able to improve model performance over training cycles.
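  • The distillation objective referenced at (11102) is commonly realized as a blend of hard-label and soft-target losses; a standard PyTorch sketch follows, with the temperature and mixing weight as illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """Blend hard-label cross-entropy with soft-target KL to the teacher.

    The temperature**2 factor keeps soft-target gradients on the same
    scale as the hard-label term (Hinton-style distillation).
    """
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1.0 - alpha) * soft
```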
  • the figure 12 illustrates the inference engine for AI processing, wherein the models generated from the training process are used.
  • the use of Knowledge Distillation in the training process results in a smaller, faster student model (12101).
  • This student model is used to balance the load via the load switch (12100).
  • under high demand, a turbo mode enables the use of the faster student model (12101), producing AI detections rapidly to the cloud server (12103) and meeting the load.
  • once the demand subsides, the processing framework can automatically switch back to the highly accurate teacher model (12102) for producing inferences to the cloud server (12103).
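  • One possible realization of such a load switch is sketched below; the backlog signal and threshold are hypothetical stand-ins for whatever load measure the deployment exposes.

```python
def pick_serving_model(pending_requests, turbo_threshold, student_model, teacher_model):
    """Load switch (12100): serve with the fast student model in turbo mode
    under high demand, and fall back to the more accurate teacher model
    once the backlog clears."""
    if pending_requests > turbo_threshold:
        return student_model   # turbo mode (12101)
    return teacher_model       # accurate mode (12102)
```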
  • the Figure 13 illustrates the weighted backpropagation technique, which is used to provide data importance weights to data samples. In this case, the model from the previous cycle (13100) is first evaluated on the data for the present cycle (13101).
  • the samples on which the AI model from the previous cycle (13100) failed or performed worse (13103) are given more weight so that the AI models can learn from them better.
  • the samples on which the AI model from the previous cycle (13100) performed better (13102) are given less weight but are included to help the AI learn better features.
  • These sample weights are adjustable as training parameters, and the experimenting and training process thus results in an improved model (13104) in the present cycle.
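  • One way to realize this data-importance weighting is through sampling, sketched below with PyTorch's WeightedRandomSampler; the concrete weight values are hypothetical training parameters.

```python
import torch
from torch.utils.data import WeightedRandomSampler

def build_sampler(prev_model_correct: torch.Tensor,
                  hard_weight: float = 3.0, easy_weight: float = 1.0):
    """Give failed samples (13103) more weight than solved ones (13102).

    `prev_model_correct` is a bool tensor: True where the previous cycle's
    model (13100) was right on the present cycle's data (13101).
    """
    weights = torch.where(
        prev_model_correct,
        torch.full_like(prev_model_correct, easy_weight, dtype=torch.float),
        torch.full_like(prev_model_correct, hard_weight, dtype=torch.float),
    )
    return WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)
```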
  • the figure 14 illustrates the flow of the Smart Data Collection (14111), Annotation Network (14114), LIFO/Backpropagation (14113), and AI Training (14115) combined together.
  • the Smart Data Collection (14111) framework enables crowdsourcing data gathering by using AI on mobile electronic devices (14100).
  • the AI models identify data containing relevant information and start capturing data which is then passed to the server.
  • the crowd-sourced data obtained from the Smart Data Collection (14111) framework is passed to AI workflows/pipelines (14112) for processing. These workflows/pipelines (14101, 14102, 14103) process the data using multiple AI models and produce AI detected events (14106).
  • the AI detected events and/or objects of interest (14106) are validated (14104) following the LIFO/Backpropagation (14113) framework.
  • the AI detections are passed to Annotation Network (14114) wherein specialists or auditors (14105) validate and provide corrected labels for the data.
  • the valid AI detections are passed as dynamic events and/or objects of interest (14106) and are available for the end customers to use.
  • the false AI detections with their corrected labels are used as training data (14107).
  • the training process is triggered based upon the number of data points collected by the LIFO/Backpropagation (14113) framework.
  • the training process (14108) is completed through the One-Step-Button framework (14115). After the training process (14108) is completed, the models undergo A/B testing (14109) to determine the best-performing model over the previous training cycles. This procedure happens on unseen test data which is used to quantify the model results.
  • the final model is then deployed (14116) back into the system.
  • the figure 15 illustrates the use of AI processing on mobile electronic devices for smart data collection. Firstly, the user lands on the driver screen (15100) on the mobile electronic device. The detection of the user's location (15101) starts a location validator task (15102) and sends the user's location to the server (15103) so it can be tracked.
  • the backend server (15104) returns a workflow/pipeline.
  • the application on the mobile electronic device starts the camera service (15105) and streams frames.
  • the device temperature (15109) is checked. If the temperature is above the threshold, the frame is dumped (15106). If the temperature is below the threshold, RAM/compute availability (15110) is checked on the mobile electronic device. If compute is low, the current frame is dumped (15108). If compute is available, the frame is prepared for the AI model, which involves resizing and preprocessing it (15111). The AI model is then executed (15112) on the frame. If no detections are produced (15113), the frame is considered junk and is dumped (15108).
  • if detections are produced (15114), they are validated against the rules (15115). If the detections don't match the rules, the object is considered invalid (15116) and the frame is dumped (15108). If the detections are valid, the workflow is checked for any further AI models (15117). In the case of child models, the flow crops out the detected objects and preprocesses them for the next model (15118). Once the last model step has been reached and valid AI detections are produced, frame parsing is stopped (15121) and data collection is triggered (15120). A ten-second video is then recorded and saved (15119). This process of using AI on mobile devices thus allows smart data collection without having specialist users collect the data manually. This automatic process is thus able to scale massively and enable huge crowdsourced data collection.
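  • The device-health gating described above can be sketched as follows; the thresholds and helper names are hypothetical, and a real application would wire these checks into its camera service.

```python
def gate_frame(frame, device_temp_c, free_ram_mb,
               temp_limit_c=44.0, ram_floor_mb=300.0):
    """Decide whether a camera frame is worth spending AI compute on.

    Mirrors the health checks of figure 15: overheating or low compute
    causes the frame to be dumped; otherwise it is resized and
    preprocessed for the on-device AI model.
    """
    if device_temp_c > temp_limit_c:      # too hot: dump the frame (15106)
        return None
    if free_ram_mb < ram_floor_mb:        # low compute: dump the frame (15108)
        return None
    return preprocess_for_model(frame)    # resize and preprocess (15111)

def preprocess_for_model(frame):
    """Placeholder for the real resize/normalization step (hypothetical)."""
    return frame
```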
  • the figure 16A illustrates the True Positive Detections by AI models.
  • the AI model is able to detect “Car” objects (16101, 16102) from the input image (16100) and provide the correct labels for them in the output frame (16103).
  • the AI model is confident in its detected results and since the results are correct, this is a true positive AI detection.
  • the figure 16B illustrates the False Positive Detections by AI models.
  • the AI model wrongly identifies “Bus” (16201, 16202) instead of “Car” from the input image (16200).
  • the AI model is confident in its detected results on the output frame (16203), but since the results are incorrect, this is a false positive AI detection.
  • This figure 16C illustrates the False Negative Detection by AI models.
  • the AI model misses detecting “Cars” from the input frame (16300) and results in the output frame (16301) without any detections. This is a false negative AI detection since in this example the objects are indeed present but the AI result of no objects is incorrect.
  • the detections of AI models are used to create an event. Since sending erroneous AI detections out as dynamic events and/or objects of interest is extremely expensive, it becomes imperative to prioritize validating the positive AI detections first. This validation process starts from the last step in the LIFO/backward propagation technique and is thus able to trim out the false positives.
  • the figure 17A illustrates the flow of data (17100) collected through the workflow/pipeline of detecting seatbelt not worn violation events (17105).
  • the workflow/pipeline is composed of multiple AI models used in work steps (17101, 17102, and 17103) where each work step localizes information and filters out irrelevant data.
  • the data (17100) is first processed through the first work step (17101).
  • This work step (17101), the Vehicle Detector, receives all collected videos and filters out videos that don't have any vehicles present.
  • the detected vehicles are then processed through the second work step (17102) of Person Detection, wherein the model only passes the vehicle data which have persons inside.
  • This data is then processed through the third work step (17103) of seatbelt segmentation.
  • the third work step (17103) detects whether a person is wearing a seatbelt or not and produces events in case the person is not wearing a seatbelt. These detected events are passed through an event validation step (17104).
  • the event validation step (17104) filters out the erroneous AI detections and is thus able to produce true events (17105). Since the prospective AI detected events need to be validated (17104) before being sent out as true events (17105), it is crucial to reduce the data volume at this step. Breaking the workflow/pipeline into work steps (17101, 17102, and 17103) allows this data reduction. This ensures that the specialists or auditors involved in the event validation step (17104) need to do the least amount of manual work to produce true events (17105).
  • the figure 17B illustrates the forward cycle in a workflow/pipeline wherein data from an input data source (17200) flows in the workflow comprising of multiple work steps (17201, 17202, 17203), wherein the work steps are the smallest unit of processing using an AI model.
  • the data goes to the first work step (17201), then to the second work step (17202), and then to the third work step (17203).
  • the last work step (17203) produces prospective events which undergo an event validation step (17204). These valid AI detections are then produced as true events (17205) from the workflow/pipeline for the problem vertical.
  • the figure 17C illustrates the backward cycle in a workflow/pipeline for a problem vertical. The data from the forward cycle is validated at the event validation step (17300).
  • the false AI detections are then backpropagated from the last AI work step (17301) to the second AI work step (17303) and then to the first AI work step (17304) in the workflow/pipeline.
  • the annotation network provides the correct data labels for the false AI detections, which are stored in the DB (17302, 17304, 17306) for the respective work steps.
  • the figure 18 illustrates the use of LIFO/backward propagation technique to gather corrected data labels through the annotation network and automatically trigger training of AI models.
  • the workflow/pipeline for the problem vertical is divided into AI work steps (18100, 18101, and 18102), where these work steps (18100, 18101, and 18102) use AI models to process data and produce prospective events.
  • the figure 19A illustrates the use of optical flow in the smart data collection process. In this case, the optical flow vector remains constant over frames (19100, 19101, and 19102) as no movement is detected, and hence the data gathering process is not triggered (19103).
  • the figure 19B illustrates the use of optical flow for the smart data collection process. In this example, the vehicles are moving, as is the camera source, over the frames (19200, 19201, 19202), which leads to an increasing optical flow vector. This increasing optical flow vector triggers the data gathering process (19203). Optical flow thus allows data capture to be triggered automatically when motion is detected in the scene.
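  • A minimal sketch of such a motion trigger, using OpenCV's Farneback optical flow, is shown below; the magnitude threshold is a hypothetical tuning value.

```python
import cv2
import numpy as np

def motion_detected(prev_frame, next_frame, threshold: float = 2.0) -> bool:
    """Trigger data capture only when the mean optical-flow magnitude
    between consecutive frames exceeds a (hypothetical) threshold."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)   # per-pixel flow vector length
    return float(magnitude.mean()) > threshold
```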
  • the figure 20 illustrates the use of different AI models downloaded on mobile electronic devices over different zones or geographical regions. Allowing devices to download AI models from server systems based on geographical area allows the capture of relevant data from the interest areas.
  • zone 1 (20100) is used for detecting potholes
  • zone 2 (20101) is used for detecting garbage and petrol pumps
  • zone 3 (20102) is used for detecting seatbelt violations
  • zone 4 (20103) is used for detecting billboards.
  • This smart data collection process thus allows further control on the data collected from different geographical areas, providing refined relevant data from the areas, in an automated manner.
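  • As an illustration, the zone-to-model assignment above could be represented as a simple registry that devices query by zone; the identifiers and model names below are hypothetical.

```python
# Hypothetical zone-to-model registry mirroring figure 20; a device asks the
# server which AI model bundles to download for its current geographical zone.
ZONE_MODELS = {
    "zone1": ["pothole_detector"],
    "zone2": ["garbage_detector", "petrol_pump_detector"],
    "zone3": ["seatbelt_violation_detector"],
    "zone4": ["billboard_detector"],
}

def models_for_zone(zone_id: str) -> list:
    """Return the model bundles a device in this zone should download."""
    return ZONE_MODELS.get(zone_id, [])
```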
  • the figure 21 illustrates the sampling process of AI models on mobile electronic devices for smart data collection.
  • First, the workflows/pipelines are downloaded (21100).
  • the camera service is started (21101) on the mobile electronic device and the temperature is checked on the device (21102).
  • the temperature is checked with different thresholds, including below 42 degrees Celsius (21103), at 42 degrees Celsius (21104), at 43 degrees Celsius (21105), at 44 degrees Celsius (21106), and in between 44 and 50 degrees Celsius (21107).
  • These temperature checks then set the sampling rate of the AI models, specifically one second (21113), two seconds (21112), three seconds (21111), six seconds (21110), and ten seconds (21109) respectively for the previously defined temperature settings.
  • These sampling rates control the execution of AI models (21114).
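  • Reading the thresholds above as temperature bands (an interpretation, since the figure lists point values), the mapping can be sketched as follows.

```python
from typing import Optional

def sampling_interval_seconds(temp_c: float) -> Optional[float]:
    """Map device temperature to the AI model sampling interval of figure 21."""
    if temp_c < 42:       # below 42 C (21103): run every second (21113)
        return 1.0
    if temp_c < 43:       # around 42 C (21104): every two seconds (21112)
        return 2.0
    if temp_c < 44:       # around 43 C (21105): every three seconds (21111)
        return 3.0
    if temp_c <= 44:      # at 44 C (21106): every six seconds (21110)
        return 6.0
    if temp_c <= 50:      # between 44 and 50 C (21107): every ten seconds (21109)
        return 10.0
    return None           # beyond 50 C: assumed to suspend AI execution
```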
  • the figure 22 illustrates the use of interpolation techniques in an annotation network to determine object locations over video frames. Interpolation uses the detections (22101, 22102, 22111, and 22110) on a start frame (22100) and an end frame (22109) made by specialists or auditors (22112) and is able to determine the location of the objects (22104, 22105, 22107, and 22108) on all the frames (22103, 22106) in between.
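  • The interpolation described above is, in the simplest case, linear between the two annotated keyframes; a sketch follows, assuming boxes given as corner coordinates.

```python
def interpolate_box(start_box, end_box, frame_idx, start_idx, end_idx):
    """Linearly interpolate a bounding box between two annotated keyframes.

    Boxes are (x1, y1, x2, y2); specialists label only the start (22100)
    and end (22109) frames, and intermediate frames are filled in.
    """
    t = (frame_idx - start_idx) / (end_idx - start_idx)
    return tuple(s + t * (e - s) for s, e in zip(start_box, end_box))

# Example: box halfway between frame 0 and frame 10.
print(interpolate_box((0, 0, 100, 100), (20, 10, 120, 110), 5, 0, 10))
```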
  • the figure 23 illustrates the AI Retraining process (23101) which produces models for different production environments in one go.
  • the training data (23100) is used to train models (23105), wherein quantized models (23106) are deployed on mobile electronic devices (23102), small student models (23107) are deployed in edge devices (23103) where compute is low, and complex, highly accurate models (23108) are deployed on large computer servers (23104) for processing of data in workflows/pipelines. This automates the delivery of models for the different scenarios in one go.
  • the combination of these techniques results in an automated way of training models in a highly cost- and resource-efficient, rapid manner, and results in smaller, faster models.
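  • One way to produce the three deployment artifacts of figure 23 in a single pass is sketched below, using PyTorch dynamic quantization for the mobile target as an assumed choice; the teacher/student pair is taken from the distillation step described earlier.

```python
import torch

def export_targets(teacher: torch.nn.Module, student: torch.nn.Module) -> dict:
    """Produce the three deployment artifacts of figure 23 in one pass:
    a quantized model for mobile devices (23102), the small student for
    edge devices (23103), and the full teacher for server workflows (23104).
    """
    mobile_model = torch.quantization.quantize_dynamic(
        student, {torch.nn.Linear}, dtype=torch.qint8)  # shrink weights to int8
    return {"mobile": mobile_model, "edge": student, "server": teacher}
```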
  • the AI and human consensus based verification network has been architected in such a way as to allow any device with an interactive display to connect to it easily using an app and begin the verification task.
  • a central server hosts all the data churned out by the AI system, which is sent to the individual devices connected to the network.
  • All these operations, comprising data validation and annotation, are performed on a mobile phone. Since mobile phones are low-compute devices and nearly everyone is equipped with one, it becomes feasible to distribute the data amongst people who have a mobile phone.
  • the Android application can also be used to monitor the entire process, from data being fetched from the server to being sent back to the servers for the AI system to train on it and deploy it automatically.
  • the crowdsourced data annotation method described above consists of three stages. The first two stages involve the validation of AI model predictions on the consensus application and the re-labeling or classification on the canvas application. The third stage is the majority voting or implicit consensus logic that is applied to all data reviewed on these mobile applications.
  • the major advantage of having this distributed system on the mobile platform is that the annotation platform is readily available, inexpensive and easily deployable.
  • a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions.
  • the various servers, systems, databases, or interfaces can exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchanging methods.
  • Data exchanges can be conducted over a packet-switched network, a circuit-switched network, the Internet, LAN, WAN, VPN, or other type of network.
  • Data transmission or data communication is the transfer of data over a point-to-point or point-to-multipoint communication channel.
  • Examples of such channels are copper wires, optical fibers, wireless communication channels, storage media and computer buses.
  • the data are represented as an electromagnetic signal, such as an electrical voltage, radio wave, microwave, or infrared signal.
  • the terms "configured to” and “programmed to” in the context of a processor refer to being programmed by a set of software instructions to perform a function or set of functions.
  • the following discussion provides many example embodiments. Although each embodiment represents a single combination of components, this disclosure contemplates combinations of the disclosed components. Thus, for example, if one embodiment comprises components A, B, and C, and a second embodiment comprises components B and D, then the other remaining combinations of A, B, C, or D are included in this disclosure, even if not explicitly disclosed.
  • DSP Digital Signal Processor
  • ASIC Application Specific Integrated Circuit
  • FPGA Field Programmable Gate Array
  • a general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine.
  • a processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the steps of a method or algorithm described in connection with the exemplary embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two.
  • a software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
  • An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium.
  • the storage medium may be integral to the processor.
  • the processor and the storage medium may reside in an ASIC.
  • the ASIC may reside in a user terminal.
  • the processor and the storage medium may reside as discrete components in a user terminal.

Abstract

The present document describes a system that uses crowdsourcing techniques to improve Artificial Intelligence (AI) systems to perform tasks including traffic violation detection, infrastructure monitoring, brand analytics capturing, and more. The system includes a network of electronic mobile devices with AI capabilities allowing for smart data capturing; AI workflows/pipelines/verticals involving AI models working in series/parallel combinations to perform the aforementioned tasks; an annotation network involving artificial or human specialists to validate and correct AI annotations to provide data for training AI models; the Back-Propagation (BP), or Last-In-First-Out (LIFO), technique to collect the corrected annotations while generating dynamic events and/or objects of interest and enable building the system in the shortest time frame; and a One-Step-Button system to train, retrain and test models for production deployments. The document describes the interplay of these techniques to ensure rapid, continuous, and automated improvement of the AI systems.

Description

CROWDSOURCING TECHNIQUES TO DEPLOY AND IMPROVE ARTIFICIAL INTELLIGENCE SYSTEMS CROSS-REFERENCE TO RELATED APPLICATION [0001] This application claims the benefit of the following co-pending patent application which is incorporated herein by reference in its entirety: US application number 17/347562 titled “On-demand Crowdsourcing techniques” filed on June 14, 2021, now pending, which is continuous in part application of US Application No. 16/699408 titled “Crowdsourced on- demand AI data annotation, collection and processing” Filed on November 29, 2019 now patent US11037024B1, issued on June 15, 2021 which is continuous in part application of US Application No. 16/416225 “crowdsourced on-demand artificial intelligence roadway stewardship system” filed on May 19, 2019, now abandoned, which is the continuous-in-part application of US Application No.15/919033 “On-Demand Artificial Intelligence and Roadway Stewardship System” filed on March 12, 2018, now patent US 10296794B2 issued on 21 May 2019 , which is the continuous-in-part application of US application number 15/689350 entitled “On-demand Roadway Stewardship System” filed on August 29, 2017, patented US 9916755 issued on 13 March 2018, and, which claims priority from provisional patent application serial number 62/437007, filed on Dec 20, 2016. FIELD OF DISCLOSURE [0002] The present disclosure relates to artificial intelligence (AI) and neural networks. More specifically the invention relates to a system and method of edge-computing for AI deployment and Back-Propagation (BP) or Last in First Out (LIFO) technique for improving AI accuracy, in a crowdsourced AI stewardship network. BACKGROUND [0003] The background description includes information that may be useful in understanding the present invention. It is not an admission that any of the information provided herein is prior art or relevant to the presently claimed invention, or that any publication specifically or implicitly referenced is prior art. [0004] United States Patent Publication US9390086B2 filed by Palantir Technologies Inc., describes a technique for a classification system with methodology for enhanced verification. In one approach, a classification computer trains a classifier based on a set of training documents. After training is complete, the classification computer iterates over a collection of unlabeled documents uses the trained classifier to predict a label for each unlabeled document. A verification computer retrieves one of the documents assigned a label by the classification computer. The verification computer then generates a user interface that displays select information from the document and provides an option to verify the label predicted by the classification computer or provide an alternative label. The document and the verified label are then fed back into the set of training documents and are used to retrain the classifier to improve subsequent classifications. In addition, the document is indexed by a query computer based on the verified label and made available for search and display. [0005] United States Patent Publication US10382300B2 filed by Evolv Technologies Inc., describes a platform for gathering real-time analysis, wherein the Data is received characterizing a request for agent computation of sensor data. The request includes required confidence and required latency for completion of the agent computation. Agents to the query are determined based on the required confidence. 
Data is transmitted to query the determined agents to provide analysis of the sensor data. Related apparatus, systems, techniques, and articles are also described. [0006] United States Patent application US20190171950A1 filed by Ai Company Inc., describes a method and a method and a system for auto learning, artificial intelligence (AI) applications development, and execution. Various applications or operations may be associated with training environment-agnostic AI models, automated AI app application performance monitoring, fault, quality and performance remediation through prediction of failures or suboptimal performance, privacy and secure AI training and inference mechanism for data and AI model sharing between untrusted parties, and building auto learning applications that can automatically learn and improve. [0007] United States patent application US20210342745A1 filed by Clarifai Inc., describes a service platform that facilitates artificial intelligence model and data collection and collection may be provided. Input/output information derived from machine learning models may be obtained via the service platform. The input/output information may indicate (i) first items provided as input to at least one model of the machine learning models, (ii) first prediction outputs derived from the at least one model's processing of the first items, (iii) second items provided as input to at least another model of the machine learning models, (iv) second prediction outputs derived from the at least one other model's processing of the second items, and (v) other inputs and outputs. The input/output information may be provided via the service platform to update a first machine learning model. The first machine learning model may be updated based on the input/output information being provided as input to the first machine learning model. [0008] United States Patent application US20190080461A9, filed by A9 com Inc., describes improvements to audio/video (A/V) recording and communication devices, including improved approaches to using a neighborhood alert mode for triggering multi-device recording, to a multi- camera motion tracking process, and to a multi-camera event stitching process to create a series of “storyboard” images for the activity taking place across the fields of view of multiple cameras, within a predetermined time period, for the A/V recording and communication devices. [0009] United States patent publication US9183580B2, filed by Digimarc Corp, describes methods and arrangements involving portable devices. One arrangement enables a content creator to select software with which that content should be rendered—assuring continuity between artistic intention and delivery. Another arrangement utilizes the camera of a smartphone to identify nearby subjects, and take actions based thereon. Others rely on near field chip (RFID) identification of objects, or on identification of audio streams (e.g., music, voice). Some of the detailed technologies concern improvements to the user interfaces associated with such devices. Others involve use of these devices in connection with shopping, text entry, sign language interpretation, and vision-based discovery. Still other improvements are architectural in nature, e.g., relating to evidence-based state machines, and blackboard systems. Yet other technologies concern use of linked data in portable devices—some of which exploit GPU capabilities. Still other technologies concern computational photography. 
A great variety of other features and arrangements are also detailed. [0010] United States Patent publication US10559202B2, filed by Intel Corp, describes an apparatus comprises a memory and a processor. The memory is to store sensor data captured by one or more sensors associated with a first device. Further, the processor comprises circuitry to: access the sensor data captured by the one or more sensors associated with the first device; determine that an incident occurred within a vicinity of the first device; identify a first collection of sensor data associated with the incident, wherein the first collection of sensor data is identified from the sensor data captured by the one or more sensors; preserve, on the memory, the first collection of sensor data associated with the incident; and notify one or more second devices of the incident, wherein the one or more second devices are located within the vicinity of the first device. [0011] An object and/or event of interest is the data information relevant to the problem vertical, wherein the problem vertical address infrastructure damage such as road conditions, traffic moving violations, brand knowledge assessment, availability of utilities such as gas and EV stations, and more, where this data information is utilized by the end customers. [0012] Society has been using manual observation on the field to capture these dynamic events and/or objects of interest, but addressing the massive scale of modern global cities is an enormous challenge. Artificial Intelligence software systems have been expanding into this domain by automating the detection of dynamic events and/or objects of interest and reducing the huge expense associated with a completely manual process. [0013] The detection of dynamic events and/or objects of interest(s) can be performed by using analysis by one or more AI models in different parallel and sequential combinations, where, at each step, a part of the target information is localized and is sent to the next step for further processing. For example, for detecting whether a driver is wearing a seatbelt or not, we can break down the problem by first detecting vehicles, then persons inside those vehicles, and then finally understanding whether the person is wearing a seatbelt or not. This way of localizing helps achieve the task and determine the dynamic events and/or objects of interest. [0014] However, AI models are not fully accurate, with false positive and false negative, creeping into the AI detections. Even on breaking the detection problem into a workflow/pipeline for the problem vertical, comprising of different work steps, and using good quality models for each step, the overall accuracy of the workflow/pipeline is much lesser than the perfect performing system required for real-world deployment. For example, a workflow/pipeline comprising four worksteps using four AI models with 90% accuracy has just a 66% overall accuracy. (overall accuracy = (0.9 ^ 4) = 0.6561 ~ 66%). In fact a 90% accuracy would require each model to be at 97.5% accuracy each. The situation gets more complicated and difficult when the numbers of AI models are increased even more. [0015] Though AI can help reduce costs as compared to manual inspection, using such models can produce many false detections which are extremely critical and can face plausible errors in different settings or situations. 
Hence it is extremely important that an evaluation network is present to make sure that the data flowing out of the system is not erroneous; and the individual models can be improved in the fastest and most efficient manner possible. [0016] An approach involving humans in the loop (specialists or auditors) is employed to verify the annotations produced by AI. This hybrid approach of using humans and AI together ensures the correctness of the dynamic events and/or objects of interest detected. Yet validating every AI annotation at each work step would lead to a high turnaround time in producing these detections. The expensive human capital employed in the process would lead to the same issues as seen in manual detection when it comes to scaling the system. Thus it is highly imperative to improve the accuracy of such AI stewardship systems in the smallest time frame. [0017] Accurate AI performance depends upon the quality and the scale of data used for training the system. This process involves gathering the data in large amounts by humans and then using domain experts to manually annotate the data for training. Manual methods of annotating visual data require a lot of manpower and time. This process leads to a high turnaround time for AI training and performance improvement. [0018] Thus, it may be appreciated from the above that there exists a need for a system and method of crowdsourced data labeling and to provide improved techniques on how a distributed network can streamline the process for training and retraining of AI models to continuously deliver improved performance in a rapid, efficient and automated manner. [0019] Traditional methods of data annotation include experts in the field of data mining going over large collections of data to identify good quality data which can be used as the ideal set of data for training a neural network. Since a neural network requires a large dataset to give out high confidence scores during training, we would need quality data to keep the performance of the neural network consistent. Feeding bad or incorrect data at any point to the neural network would hamper the accuracy of the model. [0020] The prior art does not provide any techniques or suggestions on how to automate the entire neural network training, testing, retraining, and re-deployment loop. It is a tedious task to monitor the accuracy of a particular model in production and then replace it with a new model for improved performance. Automating this task requires a streamlined flow of data from the present active classifier to the data reviewers, back to the neural network for retraining, accuracy testing, and deployment for production. Traditional methods of model redeployment do not include a crowdsourced consensus for the task of data collection and annotation. Traditional methods rely heavily on data experts for data collection and model parameter monitoring. [0021] Machine Learning and AI methods for performing tasks such as object detection, object classification and segmentation aren't always perfectly accurate. In a production scenario certain cases might arise when the data fed to the system for inference might follow a distribution which doesn't provide perfect output. In such cases it is not affordable to give out wrong information which can be critical.
Production scenarios can include generating vehicle violation information, monitoring infrastructure conditions such as potholes on roads and relaying notifications, traffic monitoring and ground control, etc., which are extremely critical and can face plausible errors in different settings. Hence it is extremely important that an evaluation network is present in order to make sure that the data flowing out of the system is not erroneous. [0022] With the advent of Artificial Intelligence in the modern technology era, classifying and detecting new objects visually in a scene is only limited by the amount of data available for training a particular neural network. Manual methods of annotating visual data require a lot of manpower and time. [0023] Whereas there is certainly nothing wrong with the existing system, nonetheless, there still exists a need to provide an efficient, effective and reliable autonomous crowdsourced on-demand AI powered stewardship network that incorporates the capabilities of LIFO/backpropagation in the Artificial Intelligence systems for correcting the inaccuracies based on an underlying initial set of results, based on which further annotations are performed. SUMMARY [0024] This summary is provided to introduce a selection of concepts in a simplified form to be further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. [0025] In order to overcome the above problems and to fulfill the expectations of the customers, the objective of the present invention is to provide an efficient, effective and reliable autonomous crowdsourced on-demand AI powered stewardship network that incorporates the capabilities of LIFO or back-propagation in the AI networks for correcting the inaccuracies of the neural networks based on an underlying initial set of results, based on which further annotations are performed. [0026] The present disclosure relates to artificial intelligence (AI) and neural networks. More specifically, it relates to a system and method of LIFO or backward propagation (back-propagation) in an autonomous hybrid crowdsourced on-demand AI powered stewardship network. [0027] An aspect of the present disclosure relates to artificial intelligence (AI). In particular, the present disclosure pertains to an artificial intelligence based system and method for detecting relevant dynamic events and/or objects of interest, providing a workflow/pipeline to the specialists and auditors to provide annotations, and training the neural network based on such annotations accordingly. [0028] An aspect of the present disclosure relates to an electronic device configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms. The system includes a non-transitory storage having embodied therein one or more routines and one or more processors coupled with the non-transitory storage device and operable to execute the one or more modules. [0029] Another aspect of the present disclosure relates to a scalable network system configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms. The system includes one or more processors coupled to a storage device and operable to execute the one or more modules.
The one or more routines include an obtaining module, which when executed by the one or more processors, obtains, based on the one or more workflows/pipelines for problem verticals, wherein a workflow/pipeline comprises worksteps, where a work step is the smallest step using an AI and/or ML algorithm, at least a combination of dynamic events and/or objects of interest from the one or more cameras as explained above, wherein the said combination of dynamic events and/or objects of interest comprises at least an image having one or more identifiable objects; a transmit module, which when executed by the one or more processors, transmits the obtained dynamic events and/or objects of interest to the one or more servers; and a receiving module, which receives the continuously updated dynamic events and/or objects of interest from the electronic mobile device remotely. The obtained images are associated with problem verticals, workflows/pipelines, and worksteps and assigned to the network of specialists and workers for verification and annotation. [0030] An electronic mobile device, which displays the data received from the network and allows specialists or auditors to review the data and correct any wrong data, wherein the electronic mobile device comprises one or more tools to perform one or more operations using AI to annotate, draw, label and classify the received one or more dynamic events and/or objects of interest and transmit the annotated, drawn, labeled and/or classified objects, areas, dynamic events and/or objects of interest back to the network system for deep learning, re-training and AI inference improvement operations. [0031] The annotation data from AI models are stored in a memory as an intermediate output, wherein the intermediate output is retrieved and sent for verification to specialists or auditors and saved back to the memory. [0032] In another embodiment, output from the AI models closest to the event identification/faults/problem is sent first for verification using bottom-up propagation in a backward paradigm in accordance with the requirements of the system. [0033] The AI models are neural network models used for tasks involving segmentation, classification, character recognition, object detection, and identification. The improvement of the AI models, including segmentation-based AI models, is performed by annotation by a specialist or auditor on every incorrect result by the AI model, thereby allowing retraining of the AI models. [0034] In an aspect of the present disclosure, a technique of LIFO/backpropagation is developed to efficiently collect data for training underlying AI models in the workflows/pipelines while ensuring minimization of the costs incurred using human intelligence to validate the AI responses and enable improving AI systems in the shortest time, without the need to replace the AI model with a new AI model. [0035] In the LIFO/backpropagation method, the validation of AI detections happens from the last step of the workflow/pipeline back to the first. Thus at first, the data is sent in a forward cycle through the workflow/pipeline and then later on in a backward cycle. The valid AI detections from the last step result in dynamic events and/or objects of interest and enable using AI systems in real-world scenarios. The negative AI detections are thus filtered out. Thus, in this manner, the LIFO/backpropagation technique uses crowdsourced techniques to vastly improve the accuracy of the AI system.
[0036] The backward validation cycle through the specialist workforce ensures that the step producing the incorrect detections is identified. The labeling of the data point through the crowdsourced annotation network then provides the data point with the correct label for future training of that model. [0037] This technique is thus similar to the method of training neural network models where the gradient of the network's response with respect to the ground truth is propagated from the last layer to the first layer. In this way, the LIFO/backpropagation technique developed can scale the training to not only one neural network but a host of models comprising the entire pipeline/workflow and multiple pipelines/workflows running simultaneously. [0038] In some embodiments, the objective of this disclosure is to provide a method in a distributed network comprising one or more artificial intelligence (AI) algorithms, the network comprising a scalable infrastructure to execute one or more problem verticals by executing workflows/pipelines and worksteps that are included in the problem verticals. One or more problem verticals define a problem to be solved or can be based on an industry type. The worksteps define the sub-problems in the problem vertical. The method incorporates the capabilities of neural networks as well as scalability for its operation. [0039] A server system configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms, the server system comprising scalable infrastructure, which includes a non-transitory storage device having embodied therein one or more routines; and one or more processors coupled to the non-transitory storage device and operable to execute the one or more modules, wherein the one or more modules include a data storage module, which stores the video or series of one or more images with the desired objects, dynamic events of interest in the form of raw data or verified data; a processing module for breaking down an incoming stream of data packets, which is stored as raw data, by processing into smaller packets and marking data points; a transmitter module, which when executed by the one or more processors, transmits the smaller data packets to a network of users for verification of objects or the obtained dynamic events and/or objects of interest from one or more servers; and a receiver module, which receives the verified dynamic events and/or objects of interest from electronic mobile devices remotely from a user or the network of users and stores them in the data storage module. [0040] In an aspect, one or more servers are configured to perform a series of AI based inferences on the obtained combination of objects or the obtained dynamic events of interest and categorize them into data points. [0041] Various objects, features, aspects, and advantages of the present disclosure will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which numerals represent like features. [0042] The terms vertical, Pipeline, and workflow have the same meaning and are used interchangeably in this document. BRIEF DESCRIPTION OF DRAWINGS [0043] The accompanying drawings are included to provide a further understanding of the present disclosure and are incorporated in and constitute a part of this specification.
The drawings illustrate exemplary embodiments of the present disclosure and, together with the description, serve to explain the principles of the present disclosure. [0044] In the figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label. [0045] The diagrams are for illustration only, which thus is not a limitation of the present disclosure, and wherein: [0046] Figure 1 illustrates the flow of data through a general AI workflow/pipeline containing one or more AI work steps working in sequential order to produce dynamic events and/or objects of interest, according to an embodiment of the present invention. [0047] Figure 1B illustrates the flow of data through the lane change workflow/pipeline, according to an embodiment of the present invention. [0048] Figure 2A illustrates the use of AI models on mobile electronic devices to assist specialists involved in the annotation process to obtain the labels for data, according to an embodiment of the present invention. [0049] Figure 2B illustrates the use of AI models on mobile electronic devices to assist specialists involved in the annotation process to obtain the crowdsourced labels for data, according to an embodiment of the present invention. [0050] Figure 3 illustrates the use of AI on mobile electronic devices to identify dynamic events and/or objects of interest and automate the process of data collection, according to an embodiment of the present invention. [0051] Figure 4 illustrates the training system setup, wherein, the training manager service starts training models when triggered, according to an embodiment of the present invention. [0052] Figure 5 illustrates the hybrid system of combining human and the machine intelligence, according to an embodiment of the present invention. [0053] Figure 6 illustrates the use of Keda to auto-scale the AI workflow/pipeline processing Kubernetes pods to handle the amount of data coming into the server, according to an embodiment of the present invention. [0054] Figure 7 illustrates the storage plan for OSB - One Step Button AI Retraining system, according to an embodiment of the present invention. [0055] Figure 8A illustrates the flow of data in the system, according to an embodiment of the present invention. [0056] Figure 8B illustrates the flow of data in the LIFO/backpropagation technique, according to an embodiment of the present invention. [0057] Figure 8C illustrates the flow of data in the annotation network, according to an embodiment of the present invention. [0058] Figure 9 illustrates the use of AI processing directly on mobile electronic devices to detect dynamic events and/or objects of interest and enable crowdsourced data collection, according to an embodiment of the present invention. [0059] Figure 10 illustrates the flow of data through AI work steps to produce dynamic events and/or objects of interest and the validation and annotation processing on false AI detections resulting in training data for the AI models. [0060] Figure 11 illustrates the AI Retraining Engine used for model training and retraining, according to an embodiment of the present invention. 
[0061] Figure 12 illustrates the inference engine for AI processing, wherein the models generated from the training process are used, according to an embodiment of the present invention. [0062] Figure 13 illustrates the weighted backpropagation technique, which is used to provide data importance weights to data samples, according to an embodiment of the present invention. [0063] Figure 14 illustrates the flow of the Smart Data Collection, Annotation Network, LIFO/Backpropagation, and AI Training combined together, according to an embodiment of the present invention. [0064] Figure 15 illustrates the use of AI processing on mobile electronic devices for smart data collection, according to an embodiment of the present invention. [0065] Figure 16A illustrates the True Positive Detections by AI models, according to an embodiment of the present invention. [0066] Figure 16B illustrates the False Positive Detections by AI models, according to an embodiment of the present invention. [0067] Figure 16C illustrates the False Negative Detection by AI models, according to an embodiment of the present invention. [0068] Figure 17A illustrates the flow of data through the workflow/pipeline for detecting seatbelt not worn violation events, showing the work steps filtering out irrelevant data at each step, to produce true events, according to an embodiment of the present invention. [0069] Figure 17B illustrates the forward cycle through the workflows/pipelines comprising of work steps to detect dynamic events and/or objects of interest, according to an embodiment of the present invention. [0070] Figure 17C illustrates the backward cycle through the workflows/pipelines comprising of work steps and event validation to collect the corrected data labels in database, according to an embodiment of the present invention. [0071] Figure 18 illustrates the use of LIFO/backpropagation technique to gather corrected data labels in the database and automatically trigger training of AI models, according to an embodiment of the present invention. [0072] Figure 19A illustrates the use of optical flow to not collect data when no movement is detected in the scenes, in the smart data collection process, according to an embodiment of the present invention. [0073] Figure 19B illustrates the use of optical flow to collect data when movement is detected in the scenes for the smart data collection process, according to an embodiment of the present invention. [0074] Figure 20 illustrates the use of different AI models downloaded on mobile electronic devices over different zones or geographical regions, according to an embodiment of the present invention. [0075] Figure 21 illustrates the sampling process of AI models on mobile electronic devices for smart data collection, according to an embodiment of the present invention. [0076] Figure 22 illustrates the use of interpolation techniques to determine object locations over video frames, according to an embodiment of the present invention. [0077] Figure 23 illustrates the AI Retraining process which produces models for different production environments in one go, according to an embodiment of the present invention. DETAILED DESCRIPTION OF THE DRAWINGS [0078] The following is a detailed description of embodiments of the disclosure depicted in the accompanying drawings. The embodiments are in such detail as to clearly communicate the disclosure. 
However, the amount of detail offered is not intended to limit the anticipated variations of embodiments; on the contrary, the intention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure as defined by the appended claims. [0079] Each of the appended claims defines a separate invention, which for infringement purposes is recognized as including equivalents to the various elements or limitations specified in the claims. Depending on the context, all references below to the "invention" may in some cases refer to certain specific embodiments only. In other cases, it will be recognized that references to the "invention" will refer to subject matter recited in one or more, but not necessarily all, of the claims. [0080] In the following description, numerous details are set forth. It will be apparent, however, to one skilled in the art, that the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. [0081] The present disclosure relates to artificial intelligence (AI) and neural networks. More specifically relates to a system and method of LIFO/backpropagation in an autonomous hybrid crowdsourced an autonomous crowdsourced on-demand AI powered stewardship network. [0082] An electronic network and a method of LIFO/backpropagation to verify, retrain, and improve the accuracy of the one or more Artificial Intelligence (AI) models in a pipeline in the shortest time frame using evaluation of the outputs of the AI models starting with the last AI model output in the pipeline going backwards towards the output of the first AI model in the pipeline, where the said electronic network comprising a crowdsourcing network wherein specialists or auditors and data collectors work in coherence to achieve dynamic events and/or objects of interest and retraining of AI models; wherein the verification/validation of the problem statement / end result / event as well as the annotation is done by a method of consensus among the masses for higher levels of accuracy; an electronic mobile device, which displays the data received from the network and allows specialists or auditors to review the data and correct any wrong data; the electronic network sends for verification AI inferences based on proximity to `end results of an AI pipeline composed of a plurality of AI models; wherein the end results are detection of events / dynamic data, objects of interest; wherein the detections incorrectly marked by artificial intelligence engines are first validated, then corrected/annotated by specialists and auditors for AI retraining and deployment. 
[0083] An electronic network configured to verify, retrain, and improve the accuracy of one or more Artificial Intelligence (AI) models in a pipeline in the shortest time frame, by evaluating the outputs of the AI models starting with the last AI model output in the pipeline and going backwards towards the output of the first AI model in the pipeline, the said electronic network comprising: a non-transitory storage having embodied therein one or more routines; one or more processors coupled to the non-transitory storage and operable to execute one or more modules, wherein the one or more modules include: a transmitter module, which when executed by the one or more processors, transmits the obtained combination of objects or the obtained dynamic events of interest to one or more electronic mobile devices; a receiving module, which receives the continuously updated objects or dynamic events of interest from the electronic mobile device remotely; and an electronic mobile device, which displays the data received from the network and allows a specialist or an auditor to review the data and correct any wrong data, wherein the electronic mobile device comprises one or more tools to perform one or more operations using AI to annotate, draw, label, and classify the received one or more objects and/or dynamic events of interest, and to transmit the annotated, drawn, labeled, and/or classified objects, areas, dynamic events, and/or objects of interest back to the network system for deep learning, retraining, and AI inference improvement operations.
[0084] A specialized graphical processing unit executes one or more AI models in a pipeline, and one or more pipelines, to detect dynamic events and/or objects of interest for one or more problem verticals.
[0085] The annotation data from AI models are stored in a memory as an intermediate output, wherein the intermediate output is retrieved, sent for verification to specialists or auditors, and saved back to the memory.
[0086] The output from the AI models closest to the event identification/faults/problem is sent first for verification, using bottom-up propagation in accordance with the requirements of the system.
[0087] The dynamic events of interest and/or objects of interest relate to the detection of relevant information or objects in a vertical of a workflow/pipeline.
[0088] The AI models are neural network models used for tasks involving segmentation, classification, character recognition, object detection, and identification.
[0089] The improvement of the segmentation-based AI model is performed by annotation by a specialist or auditor of every incorrect result by the AI model, thereby allowing retraining of the AI models.
[0090] The trained model outputs are tested against previously trained AI models in A/B testing.
[0091] The retraining of the models is performed using the teacher-student training technique, automated hyperparameter tuning, and data-sampling weighting, resulting in faster and more accurate models.
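As one hedged illustration of the teacher-student retraining named in paragraph [0091], the sketch below shows a standard knowledge-distillation loss in PyTorch, where the student fits both the teacher's softened output distribution and the hard labels; the temperature and mixing weight are illustrative hyperparameters, and the system's actual models and training loop are not shown.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.7) -> torch.Tensor:
    """Combine a soft-target loss against the teacher with the usual
    hard-label cross-entropy."""
    # Soft targets from the (frozen) teacher, softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_student = F.log_softmax(student_logits / temperature, dim=1)
    # KL divergence between softened distributions; T^2 restores gradient scale.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
    # Ordinary cross-entropy on the ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```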
[0092] A method to verify, retrain, and improve the accuracy of artificial neural network (AI) based systems for solving various problem verticals in a workflow/pipeline in the shortest time frame using LIFO/backward propagation, the method comprising: optimization of the retraining of AI models based on proximity to end results, wherein the end results are detections of faults or problems in the closest/last vertical, composed of a plurality of AI models; and a crowdsourcing network wherein specialists or auditors and data collectors work in coherence to detect dynamic events of interest and retrain AI models, wherein the dynamic events of interest are a priority problem / problem vertical defined by the design requirements of the system; and wherein the detections incorrectly marked by artificial intelligence engines are sent for annotation by specialists and auditors for AI retraining and deployment.
[0093] An electronic mobile device to verify, retrain, and improve the accuracy of artificial neural network (AI) based systems, the electronic mobile device comprising: one or more storage devices having embodied therein one or more routines performing AI processes; one or more processors coupled to the one or more storage devices and operable to execute one or more modules, wherein the one or more modules include: a data capture module to identify and automatically capture the images, videos, or regions of interest related to a problem vertical according to pre-defined rules; a receiving module, which receives the continuously updated objects and/or dynamic events of interest on the electronic mobile device from the capture module; a selection module to select the received images or videos on the electronic mobile device related to the problem vertical; an annotation module to let a user annotate and label the selected images or videos on the electronic mobile device; a data storage module, which stores the annotated images or videos, or series of one or more images or videos, with the desired objects and/or dynamic events of interest; and a transmitter module, which when executed by the one or more processors, transmits the annotated or labeled data to the specialists and auditors in the artificial intelligence based network.
[0094] The electronic mobile device utilizes a camera and other available sensors to trigger data gathering after identifying dynamic events and/or objects of interest as required by the problem verticals.
[0095] The electronic mobile device uses one or more AI models in different parallel and sequential combinations to determine dynamic events and/or objects of interest.
[0096] The electronic mobile device uses Optical Flow to determine if the scene is in motion before the scene is processed using AI processing, optimizing the use of resources and AI inferencing for gathering the information needed for the problem verticals of detecting dynamic events and/or objects of interest.
[0097] The electronic mobile device uses different AI models that are downloaded in different geographical locations and identifies different dynamic events and/or objects of interest for problem verticals in these geographical zones.
[0098] AI processing on the electronic mobile device is used to enhance the annotations of the specialists and auditors by snap-fitting the drawn bounding boxes to the dynamic events and/or objects of interest, reducing the manual effort for the specialists and auditors in the annotation process.
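Paragraph [0098] describes snap-fitting a roughly drawn box onto the object; one way such a step could work is sketched below, tightening the rough box to the extent of edges detected inside it with OpenCV. The edge-based heuristic is purely an illustrative assumption; the on-device snap-fit logic itself is not specified here.

```python
import cv2
import numpy as np

def snap_fit_box(image: np.ndarray, rough_box: tuple) -> tuple:
    """Shrink a rough (x, y, w, h) box to the extent of edges found inside it."""
    x, y, w, h = rough_box
    roi = image[y:y + h, x:x + w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) == 0:
        return rough_box  # no edges found: keep the specialist's rough box
    # Tight box around the detected edge pixels, back in image coordinates.
    return (x + int(xs.min()), y + int(ys.min()),
            int(xs.max() - xs.min()) + 1, int(ys.max() - ys.min()) + 1)
```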
[0099] AI processing on the electronic mobile device is used as a first-pass annotator, and the human annotation network is used to correct the AI annotations, reducing the manual effort for the specialists and auditors in the annotation process.
[0100] The data collected using AI processing on the electronic mobile device feeds the workflows/pipelines in the system for the problem verticals, and the annotations enable AI model retraining.
[0101] The data collectors and specialists or auditors are divided into classes depending upon the quality and aptness of the data and annotations they provide, giving a measure of importance for the data to be processed and enabling optimization of resources for processing AI model inference in the one or more pipelines.
[0102] The electronic mobile device utilizes one or more image sensors, which may be the front and back camera sensors present in electronic mobile devices, to gather data after identifying relevant information of interest.
[0103] The temperature of the electronic mobile device is monitored to control the sampling rate for AI processing, thus optimizing performance over the electronic mobile device resources.
[0104] An aspect of the present disclosure relates to artificial intelligence (AI). In particular, the present disclosure pertains to an artificial intelligence based system and method for detecting relevant dynamic events and/or objects of interest, providing a workflow/pipeline for the specialists and auditors to provide annotations, and training the neural network based on such annotations accordingly.
[0105] Another aspect of the present disclosure relates to an electronic device configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms. The system includes one or more processors coupled to a non-transitory storage device and operable to execute the one or more modules.
[0106] In some embodiments, the objective of this disclosure is to provide a method in a distributed network comprising one or more artificial intelligence (AI) algorithms, the network comprising a scalable infrastructure to execute one or more problem verticals by executing the workflows/pipelines and worksteps that are included in the problem verticals. One or more problem verticals define a problem to be solved or can be based on an industry type. The worksteps define the sub-problems in the problem vertical. The method incorporates the capabilities of neural networks as well as scalability for its operation.
[0107] Another aspect of the present disclosure relates to a scalable network system configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms. The system includes one or more processors coupled to a storage device and operable to execute the one or more modules. The one or more routines include an obtaining/data collecting module, which when executed by the one or more processors, collects, based on the one or more AI and/or ML algorithms, at least a combination of objects and/or dynamic events of interest from the one or more cameras as explained above, wherein said combination of objects or dynamic events of interest comprises at least an image having one or more identifiable objects, and a transmit module, which when executed by the one or more processors, transmits the obtained combination of objects or the obtained dynamic events of interest to the one or more servers.
The obtained images are associated with problem verticals, pipelines/workflows, and worksteps, and are assigned to the network of specialists and workers for verification and annotation.
[0108] A server system configured with one or more artificial intelligence (AI) algorithms and/or machine learning (ML) algorithms, the server system comprising scalable infrastructure, which includes a non-transitory storage device having embodied therein one or more routines; and one or more processors coupled to the non-transitory storage device and operable to execute the one or more modules, wherein the one or more modules include: a data storage module, which stores the video or series of one or more images with the desired objects and dynamic events of interest in the form of raw data or verified data; a processing module for breaking down an incoming stream of data packets, which is stored as raw data, into smaller packets and marking data points; a transmitter module, which when executed by the one or more processors, transmits the smaller data packets to a network of users for verification of the objects or the obtained dynamic events of interest from one or more servers; and a receiver module, which receives the verified objects or dynamic events of interest remotely from a user or the network of users via electronic mobile devices and stores them in the data storage module. In an aspect, the one or more servers are configured to perform a series of AI based inferences on the obtained combination of objects or the obtained dynamic events of interest, and categorize them into data points.
[0109] According to the above embodiments of this disclosure, optical flow (or optic flow) is the pattern of apparent motion of objects, surfaces, and edges in a visual scene caused by the relative motion between an observer and the scene. Optical flow can also be defined as the distribution of apparent velocities of movement of brightness patterns in an image. The field of optical flow has made significant progress by focusing on improving numerical accuracy on standard benchmarks. Flow is seen as a source of input for tracking, video segmentation, depth estimation, frame interpolation, and many other problems. It is assumed that optimizing for low end-point error (EPE) will produce flow that is widely useful for many tasks. EPE, however, is just one possible measure of accuracy, and others have been used in the literature, such as angular error or frame interpolation error. While there is extensive research on optical flow, here we focus on methods that use deep learning, because these can be trained end-to-end on different tasks with different loss functions. Applying learning to the optical flow problem has been hard because there is limited training data with ground-truth flow. Early approaches used synthetic data and broke the problem into pieces to make learning possible with limited data.
[0110] Data is essential for training accurate AI systems, as the scale of the data and the cases of relevance covered influence the model. Hence, data collection becomes an important task. Using mobile electronic devices having sensors such as visual cameras for visual recognition tasks, we can crowdsource massive real-world data collection. Using varied devices allows for capturing a vast range of data covering cases with different camera parameters, lighting conditions, picture angles, aspect ratios, zoom ranges, and more. The availability of such electronic devices, like smartphones, allows the data collection to scale greatly.
Crowdsourcing in this context provides the range to cover unique cases, while still providing the needed replication. These qualities of the data collection approach result in models which are more robust and can be used in production deployment systems after training on the dataset.
[0111] Integrating AI models which can identify dynamic events and/or objects of interest, such as vehicles, persons, billboards, and more, when employed on mobile devices, can result in trigger points which start the data capture process and upload data to servers for further processing. The use of AI in this process mimics the human intelligence of deciding when to capture the data, thus automating the whole process.
[0112] The intelligence in such cases can take the form of multiple AI models trying to identify information of relevance, such as a person in a vehicle. This detection process is represented by workflows/pipelines comprising steps that are downloaded on one or more mobile devices from the backend servers. This approach allows for data collection under highly specific constraints on the dynamic events and/or objects of interest, such as vehicles that should be of a specific brand, and more. The intelligence in such cases also extends to using Optical Flow for detecting movement in scenes, allowing relevant data to be captured.
[0113] Using intelligence automates the process of data capturing, thus freeing up the human resources used previously and allowing extension to more electronic devices such as dashcams. Using dashcams that can automatically identify the relevant data, drivers can capture data in a risk-free manner, allowing for large-scale deployment of the data capture system.
[0114] Along with the scale of the data used for AI training, the quality of labels has a direct impact on the performance of the AI models. Correct labels or data annotations can help the models correctly identify dynamic events in production systems, whereas lower quality labels can have a regressive impact on the performance of the system. It is thus highly imperative to have a good quality labeling procedure.
[0115] The annotation network covers various annotation modes, such as a 'Multi-Crop' mode allowing bounding boxes to be drawn over objects in a frame, a 'Classify' mode allowing a class to be input for an image, and a 'Paint' mode allowing segmentation masks to be drawn for objects of interest. These modes provide labels in the correct format to train AI models.
[0116] The deployment of an annotation network in a mobile application on readily available smartphones allows for labeling data with ease and scaling it greatly. Crowdsourcing the labeling procedure also enables verification of labels by vetting them through multiple specialists or workers, ensuring the quality of labels.
[0117] However, human capital is expensive, and asking specialists to label each object on every image would be cost-intensive. Using Artificial Intelligence as a first-pass specialist, indicating the location/label of the object in the frame, reduces the effort of human capital in the annotation process. This first-pass AI annotation speeds up the labeling process, as a specialist's major task now is to verify the correctness of the label instead of drawing it manually. This inclusion of intelligence, by downloading the AI model on the mobile device and using it for annotations, thus vastly reduces the cost of the annotation network.
Moreover, as the model improves over time, the effort undertaken by the human capital involved in this data labeling process reduces.
[0118] In yet other embodiments, an annotation module is provided in the scalable crowdsourced network system to perform one or more operations on the images using AI. The annotation tool comprises the capability to annotate, draw, label, and classify the received one or more dynamic events and/or objects of interest from the remote server.
[0119] In some embodiments, while drawing anything on the image, when a human taps on the image, a zoomed view of the tapped location is shown at the top left corner to show what is underneath the finger.
[0120] The present disclosure further relates to artificial intelligence (AI) and neural networks for detecting relevant dynamic events and/or objects of interest for different problem verticals.
[0121] The tasks of detecting motion violations, such as in traffic management, and other relevant dynamic events like sign detection, crop monitoring, infrastructure monitoring, and more, can be achieved by mimicking human intelligence through a host of AI models used in parallel and sequential combinations in workflows/pipelines. This flow can detect these dynamic events and/or objects of interest from visual and other sensory data.
[0122] These AI-enabled workflows/pipelines can be highly scalable and deployed in real-world cities for various tasks. Yet, AI systems tend to produce false positives and false negatives. Though AI systems greatly reduce costs, due to these shortcomings they produce erroneous dynamic events and/or objects of interest in some cases as compared to the manual recognition process. A hybrid approach, combining human and machine intelligence by using human capital to verify and validate the AI detections, ensures that no incorrect dynamic events and/or objects of interest are produced, enabling deployment in real-world scenarios.
[0123] LIFO/Backpropagation is a technique developed to efficiently collect data for training the underlying AI models in the workflows/pipelines while minimizing the costs incurred in using human intelligence to validate the AI responses.
[0124] An aspect of the invention is the validation of AI detections from the last step of the workflow/pipeline back to the first. Thus the data is first sent in a forward cycle through the workflow/pipeline and later in a backward cycle. The backward validation cycle through the specialist workforce ensures that the step producing the incorrect detections is identified. The labeling of the data point through the crowd-sourced annotation network then provides the data point with the correct label for future training of that model.
[0125] This technique is thus similar to the method of training neural network models, where the gradient of the network's response with respect to the ground truth is propagated from the last layer to the first layer. In this way, the LIFO/backpropagation technique can scale the training not only to one neural network but to a host of models comprising the entire workflow/pipeline, and to multiple workflows/pipelines running simultaneously.
[0126] The LIFO/backpropagation technique prioritizes the validation of the frames/data over which AI has produced annotations, to trim out the false positives first, as compared to the data where AI didn't detect dynamic events and/or objects of interest, i.e. possible cases of false negatives.
Since the costs associated with producing an erroneous event are far higher than those of missing the detection of a few dynamic events, this prioritization of false positives over false negatives ensures that the model can take larger strides towards the expected performance. The resulting improvement in the AI system reduces the manual workload, achieving the two-fold objective.
[0127] This prioritization of false positives over false negatives under the LIFO/backpropagation technique ensures that data is collected for each work step model, thus improving the overall accuracy of the workflow/pipeline for the problem vertical over time.
[0128] Unlike existing systems that first develop datasets and then train models for production settings, the novel LIFO/backpropagation technique allows using AI models in real-world scenarios while collecting the data for each step for training purposes.
[0129] The repetition of this cycle of data gathering through the LIFO/backpropagation technique and further AI model training delivers continuous improvements in the AI model stack, resulting in automated and rapid performance gains in the overall workflows/pipelines for the problem verticals.
[0130] The technique is applied to these workflows/pipelines, which take visual and other sensory data as input and pass it through one or more AI model work steps, producing dynamic events and/or objects of interest, validating the correctness of AI detections cost-effectively, and improving the AI models in the shortest time possible.
[0131] The developed LIFO/backpropagation technique identifies the incorrect data for each work step as the data flows through the workflows/pipelines. This gathering of incorrect data allows triggering of the model retraining cycle. The retrained model, after undergoing an A/B testing routine, is then pushed back to the production system, resulting in continuous rapid improvements in the overall workflows/pipelines.
[0132] In an aspect of the above, the OSB (One Step Button) system trains the models using the data from their respective work steps. This training procedure can be triggered manually on demand. Training routines can also be scheduled using the data collected through the LIFO/backpropagation technique, such as training every week or when the collected data matches threshold restrictions, thus allowing for automated retraining of the AI models.
[0133] The training of each model uses an overall Gradient Descent optimization, enforcing the AI models to correct their erroneous predictions by matching the labels gathered through the annotation network.
[0134] Triggering the AI model training starts the process by splitting the collected data into training and validation splits, wherein the performance of the model on the validation split is taken as a measure of the model's training progress.
[0135] In this training aspect, training with different hyperparameters, such as model depth, activation functions, model architecture, number of epochs, batch size, and more, allows experimenting with the model and finding the settings which reach a minimum of the loss function curve.
[0136] Using K-fold validation splits, the data is split into K sets; the model is trained on all sets but one, using the remaining set for validation, and this procedure is repeated K times, selecting a different validation set each time.
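The K-fold procedure of paragraph [0136] can be sketched as below; scikit-learn's KFold splitter is used for illustration, and train_and_score is a caller-supplied placeholder for the trainer's fit-and-evaluate routine.

```python
import numpy as np
from sklearn.model_selection import KFold

def kfold_scores(samples: np.ndarray, labels: np.ndarray,
                 train_and_score, k: int = 5):
    """Train on K-1 folds and validate on the held-out fold, K times,
    selecting a different validation fold each time."""
    scores = []
    splitter = KFold(n_splits=k, shuffle=True, random_state=0)
    for train_idx, val_idx in splitter.split(samples):
        # train_and_score(train_x, train_y, val_x, val_y) -> validation metric
        scores.append(train_and_score(samples[train_idx], labels[train_idx],
                                      samples[val_idx], labels[val_idx]))
    return float(np.mean(scores)), float(np.std(scores))
```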
[0137] Hyperparameter tuning allows experimenting with model parameters, helping find the model with the lowest minimum in the loss curve on the data and effectively training the AI model.
[0138] After a one-time effort from an AI development expert for a new model, the training routine triggers and hyperparameter tuning provide an automated way of training AI models. These AI models are then uploaded to the server for management and deployment in production systems.
[0139] The LIFO/backpropagation technique, coupled with the crowdsourced data capturing and annotation network, provides the training data for the AI models, and the automated training triggers then result in trained AI models. An important decision then becomes distinguishing between two training versions. For this analysis, a held-out test set is developed and the performance of the models is analyzed on this test set. The model with the highest performance is then deployed to the production system.
[0140] The A/B testing module is triggered on demand, and once after every training request is completed, to identify the model with the highest accuracy. Completing this process of training and A/B testing thus ensures the continuous delivery of an improved model to the production system. This acts as a CI/CD pipeline for the AI models, reducing the dependency on experts.
[0141] The automated setup thus reduces the cost of developing AI workflows/pipelines for the various problem verticals addressed in this disclosure.
[0142] A marketplace of data is available for end users to subscribe to different dynamic events and/or objects of interest for varied problem verticals, and the data collectors can trigger data collection for these dynamic events and/or objects of interest on demand using the scout mode in the electronic mobile device. The scout mode shows various models/events of interest for the users to select from. The selected events are then automatically triggered upon detection by the AI models on the mobile device, or are manually triggered by users / data collectors. The benefit of the scout mode is that it allows a human to select the data points / events of interest that they are more comfortable capturing, or events of interest that the particular users find more readily in their environments. The events of interest are marked with a label / meta tag on the video / series of images for faster distribution to the various AI models through the network or the mobile devices.
[0143] With the advent of Deep Learning, models with many layers or very large depth greatly outperform smaller models. While the greater accuracy helps produce correct dynamic events and/or objects of interest, reducing the manual verification workload, the huge size of these models increases the latency in producing AI detections. This greatly reduces the throughput, thus not meeting the objectives of speed. In contrast, smaller models have lower accuracy but much higher throughput. Thus there is a need to improve the accuracy of smaller models to provide both high speed and high accuracy.
[0144] Using a high-performing, accurate AI system as a 'teacher' model for a smaller 'student' model, where the target for the smaller model becomes mimicking the teacher model, allows the performance of the smaller model to be boosted effectively.
This way of distilling knowledge from the teacher model allows the student model to learn the much more complex patterns in the data which it would have failed to identify on its own. This advanced training process results in smaller models with higher accuracy, which can then be deployed in the production system for higher throughput.
[0145] In an aspect of the above, the smaller models with higher accuracy are the ideal engines for edge deployments on mobile electronic devices. These models thus enable the automated smart data collection and the annotation network improvements for the crowdsourcing effort.
[0146] In an aspect of the above, this allows having a turbo mode on the production system when the load is high, to optimize for speedy AI detections.
[0147] In another technique, utilizing the weights of a previously trained model acts as a good initialization point for the network, allowing the AI to utilize and transfer the previously trained knowledge and update it with new information. This overcomes the need to store all the data from the first training cycle to the present one, reducing the costs associated with the training system and achieving AI training at minimized cost.
[0148] In another technique, tunable sample weights are provided separately for each data point, allowing the model to give more importance to data samples that have the maximum effect on the model and lower weights to the other data points. This weighted training technique thus allows the model to tune itself for the incoming data and get the highest improvements from the same.
[0149] In an aspect of the above, these techniques work together, resulting in an automated manner of training AI models, boosting model performance over training cycles and delivering the models to production systems, greatly reducing the manual workload and expert intervention.
[0150] In an aspect of the above system, the One-Step-Button system trains models for varied production environments in one go, including quantized models for electronic mobile devices, small student models for edge devices having small compute, and complex, highly accurate models for server systems.
[0151] Figure 1A illustrates the flow of data through a general AI workflow/pipeline (1000) containing one or more AI work steps (1001, 1002, 1004, and 1006) working in sequential order to produce relevant dynamic events and/or objects of interest (1008). Each work step (1001, 1002, 1004, and 1006) communicates with the C3 workflow (1000), i.e. it receives data for processing from the C3 workflow (1000) and sends back the AI annotations (1003, 1005, and 1007). In the workflows/pipelines (1000), the last work step (1006) produces an event (1008) upon event validation (1009). Using the LIFO/backpropagation (1017) technique, the AI detections (1003, 1005, and 1007) are validated (1011, 1013, and 1015) from the last step (1006) to the first (1002). If the event is deemed valid according to a consensus between specialists in the event validation (1009) step, it is produced to an event map (1010) displaying the details. If the specialists deem that the AI was incorrect, then the specialists are asked to correct (1012, 1014, and 1016) the AI annotations (1003, 1005, and 1007). The LIFO/Backpropagation (1017) technique thus allows the AI workflow/pipeline (1000) to be used in the production environment to detect the dynamic events and/or objects of interest (1008).
[0152] Figure 1B illustrates the flow of data through the lane change workflow/pipeline (1202).
In this workflow/pipeline (1202), the goal is to detect the violation event (1207) when a vehicle has failed to give an indicator while changing driving lanes. The first step uses a Vehicle Detector and Tracker (1201) model to find distinct vehicles in the data. The records created by the workflow/pipeline (1202) for these vehicles are then sent in a forward cycle to the Lane Detector (1203) step. This step generates the result (1204) on whether the vehicle has changed driving lanes or not, by mapping the position of vehicles with respect to the lane. The next step of Indicator Detection and Classification (1205) first detects the taillights of the vehicles and then classifies their state, producing the indicator sequence result GIF (1206). Upon detecting the violation, the AI workflow/pipeline generates an event (1207) that is sent for validation (1208). If the event (1207) is deemed valid after the validation step (1208), it is produced to the event map (1209). Using the LIFO/backpropagation (1200) technique, the AI detections (1210, 1212) are validated (1211, 1213) and corrected in case of wrong AI detections. This validation and correction cycle runs from the last work step (1205) to the first work step (1201), allowing the workflow/pipeline (1202) to be used in production environments for validating the dynamic events and/or objects of interest (1207).
[0153] Figure 2A illustrates the use of AI models (2103) to assist specialists involved in the annotation process on electronic mobile devices (2102) in obtaining labels for data. When the specialist draws the annotation box (2101) for the object (2100), the AI model (2103) is able to snap-fit it to the edges (2104). This allows the specialist to draw a rough bounding box (2101) over the object (2100) without having to painstakingly mark a tight bounding box (2104). This process reduces the manual effort involved in the labeling procedure, thus reducing the time taken to generate correct labels and leading to training models as early as possible.
[0154] Figure 2B illustrates the use of AI models (2202) on mobile electronic devices (2201) to assist specialists involved in the annotation process in obtaining crowdsourced labels for data. The AI models are downloaded on the devices (2201) and run directly on the device (2201) to produce annotations (2204, 2205) on the images (2200). These AI annotations (2204, 2205) are shown on the output image (2203) to specialists or auditors to verify and correct. This reduces the human capital effort required and speeds up the annotation process.
[0155] Figure 3 illustrates the use of AI workflows/pipelines, such as the vehicle detector workflow (3100), on mobile electronic devices to identify dynamic events and/or objects of interest, in this case vehicles and trucks (3107, 3112), and to automate the process of data collection (3113). The figure shows the example of AI detecting vehicles, where specifically the AI is looking for vehicles (3107) and trucks (3112) with visible license plates. These models (3101, 3104, and 3109) are downloaded directly from the server onto the mobile device. The backend server also provides rules (3102, 3105, and 3110) on the detections about the height, width, and AI detection confidence (3103, 3108, 3106, and 3111). These rules enable filtering of data, passing on only valid, needed data. In the first step, a vehicle is detected and post-processing rules (3102) are applied to it.
The valid results are passed to the license plate detector model (3104, 3109). When the detections pass the rules (3105, 3110) for the second model, a trigger occurs, causing the mobile device to record data (3113). In this manner, AI models (3101, 3104, and 3109) are able to identify relevant dynamic events and/or objects, in this case vehicles and trucks (3107, 3112), and automate the process of data collection (3113). Scaling the application yields crowdsourced relevant data for the AI workflows/pipelines for various problem verticals.
[0156] Figure 4 illustrates the training system setup, wherein the training manager service (4102) starts training models when triggered. The Manager Service (4102) is responsible for spawning the training containers/pods (4104, 4105) and communicating the training status to the workflows on the server (4100). Using Kubernetes, it manages the hardware and schedules training pods (4104, 4105). First, a data worker pod (4103) runs, which downloads from the DB (4101) the data collected through the LIFO/backpropagation technique disclosed in this document. The data worker pod (4103) receives the annotations from the server in JSON format and is responsible for transforming them into the correct format required for training the models, such as PNG masks for segmentation-based models or text files containing the bounding box coordinates for detection models. The Manager Service (4102) then spawns the training container/pod (4104, 4105) based on the model type, such as a segmentation trainer or a detection trainer. The trainer module uses K-fold splits of the data to train, test, and tune the models, to determine parameters for high performing models. The training container/pod (4104, 4105) uses hyperparameter tuning and K-fold validation to arrive at the best performing model and communicates back to the manager service (4102). The Manager Service (4102) is able to provide real-time updates to the main workflow server (4100) and log the training status. The trained model is then saved in persistent storage (4107) and this information is propagated back to the main workflow server (4100). After A/B testing (4106), the final model is deployed in production for use again in the workflows/pipelines. In this manner, the process of training and deployment of models after A/B testing (4106) is automated and doesn't require assistance from AI experts.
[0157] A scheduler service, similar to the manager service (4102), spawns training requests for one or more models in a pipeline, where the data collected in the data buckets through the LIFO/backward propagation system automatically triggers these requests when the training data reaches predefined thresholds for the various AI models of the various AI workflows/pipelines.
[0158] Figure 5 illustrates the hybrid system combining human and machine intelligence. Detection of relevant dynamic events and/or objects of interest is performed using AI workflows/pipelines (5100), where a workflow includes one or more AI models working in different parallel and sequential combinations, localizing information from one step to another. The detected events are first validated (5102) by specialists (5101) to prevent erroneous AI events from reaching the live event maps (5103). The validated events are sent out to live event maps (5103) for utilization by end customers. This process automates the use of a crowdsourced annotation network to produce dynamic events and/or objects of interest.
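A minimal sketch of the threshold-driven scheduler of paragraph [0157] follows; the label counts, thresholds, and spawn_training_request callback are hypothetical stand-ins for the system's database queries and Kubernetes job submission.

```python
from typing import Callable, Dict, List

def check_retraining_triggers(label_counts: Dict[str, int],
                              thresholds: Dict[str, int],
                              spawn_training_request: Callable[[str], None]) -> List[str]:
    """Fire a training request for every work-step model whose bucket of
    corrected labels (filled by the LIFO/backpropagation cycle) has reached
    its predefined threshold."""
    triggered = []
    for model_name, count in label_counts.items():
        if count >= thresholds.get(model_name, float("inf")):
            spawn_training_request(model_name)  # e.g. create a training pod
            triggered.append(model_name)
    return triggered

# Example: only the seatbelt model's bucket has crossed its threshold.
check_retraining_triggers(
    {"vehicle_detector": 1200, "seatbelt_segmentation": 5000},
    {"vehicle_detector": 2000, "seatbelt_segmentation": 4000},
    spawn_training_request=lambda name: print(f"training triggered: {name}"),
)
```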
[0159] Figure 6 illustrates the use of Keda to auto-scale the Kubernetes pods processing the AI workflow/pipeline (6103), handling the amount of data coming into the server. The server obtains the data from various sources (6100). These sources (6100) are part of the crowdsourced data collection system. The server, denoted by the C3 system (6101), then sends the data for processing to the AI workflows/pipelines using Kafka topics (6102). Initially, the containers/work step pods (6104, 6106, and 6108) are kept at a minimum replication of 1. As data ingestion increases, the Keda tool, which is monitoring the Kafka queues (6102, 6105, and 6107), scales the AI work step pods (6109, 6110, and 6111) to have more replicas. This enables the system to keep processing incoming data and producing dynamic events and/or objects of interest. Once the load is reduced, Keda auto-scales the pods (6109, 6110, 6111) back to their original setting of 1 replica (6104, 6106, 6108). This procedure thus automates the handling of different load scenarios, allowing the system to meet the scale of the crowdsourced data coming in at different times through the data sources (6100).
[0160] Figure 7 illustrates the storage plan for the OSB (One Step Button) AI Retraining system. The storage space (7100) is divided into permanent storage (7114, 7119) and temporary storage space (7101). The temporary storage space (7101) is linked with the training request. Upon completion of the training request, the temporary storage (7101) is purged. After receiving a training request, the data worker first downloads the annotations (7117, 7118). These data points are divided into train data (7104, 7109) and test data (7105, 7110) based on train-test ratios (7102, 7103) in the temporary storage (7101). The data worker also splits the annotations for them (7107, 7108, 7112, and 7113). The training sample points or images (7116) are downloaded (7115) into the permanent storage (7114), which follows a cycling policy. Any files which are prepared while training, such as interim model weights (7106, 7111) and training logs, are kept in temporary storage (7101). Once the training process completes, the final model weights (7124, 7126) and training summaries (7125, 7127) are put in the permanent storage space (7119). The Model folder (7120) stores the model weights for all workflows (7122, 7123). The model config (7121) stores the model parameters (7128). This eases the manageability of models and enables easy deployment of the final trained model weights (7124, 7126).
[0161] Figure 8A illustrates the flow of data in the system. As the data is ingested into the system through crowd-sourced data collection, data records (8100) are created. These data records (8100) are sent to workflows/pipelines for annotation. When AI processing is enabled, these data records (8100) enter a pending state (8101) to be worked upon by AI models. The data records (8100) are moved to 'AI annotation in progress' (8103) when the AI system starts working on them. If, after a step, the human-enabled flag is set, these detections are sent for validation (8105) to the crowdsourced annotation network. Otherwise, the AI detections from the step are sent to annotation pass-through (8104), from which the server sends the data records (8100) to the next step. Alternatively, if the AI processing flag is disabled, the data records (8100) are sent directly to the annotation network for processing, with the state of the data records (8100) set to annotation pending (8102).
In this manner, the crowd-sourced annotation network is used in conjunction with AI processing in a hybrid manner to produce dynamic events and/or objects of interest.
[0162] Figure 8B illustrates the flow of data records (8200) in the LIFO/backpropagation technique. Using the annotation network, specialists validate (8202) the AI model annotations (8201). This step is necessary to filter out the erroneous AI results. The correct results are used to produce dynamic events and/or objects of interest (8205), whereas the incorrect AI results are sent to the specialists to annotate (8203) and provide correct labels for the data records. These records are collected to train the model (8204) in the next cycle.
[0163] Figure 8C illustrates the flow of data records in the annotation network. The annotation network is divided amongst specialists and auditors. Here, auditors are human annotators who have undergone training for annotation review and have previously demonstrated their work as specialists. A data record with annotation pending (8300) is first given to specialists for annotation (8301). This record is then validated (8302), for which specialists provide their judgment (8303). The records processed by specialists (8304) go for a manager review (8305) and/or validation. The records on which manager validation is pending (8308) are then sent for manager judgment (8309). If the consensus of specialists is rejected by the auditors, the record is marked inconclusive (8306). If the consensus is approved, the record shifts to a manager-approved state (8307) and is used as training data in the system. The records for which consensus is rejected by auditors are then given the option of adding more specialists (8310), and if not, are annotated by the auditors themselves (8311). The data records which are pending annotation by auditors (8312) are then assigned to auditors for annotation (8313) and are validated by other auditors (8314). Upon evaluating the manager judgments (8315), if a consensus is reached, the data records are shifted to a manager-approved state (8316), to be used as training data in the system. If no consensus is reached, the records are marked junk (8317) or inconclusive. Using this consensus at two levels, with multiple specialists and auditors respectively, the system ensures that no incorrect labels are propagated. This enables training models with high-quality labels.
[0164] Figure 9 illustrates the use of AI processing directly on mobile electronic devices (9101, 9104) to detect objects of interest, in this example billboards (9100, 9103), and to enable crowdsourced data collection by data collectors (9102, 9105) in varied locations. In this manner, the application can scale to capture relevant data for AI processing.
[0165] Figure 10 illustrates the flow of data in general AI workflows/pipelines and the use of the LIFO/backpropagation technique. AI workflows/pipelines comprise one or more AI work steps (10100, 10101, 10102), wherein the input data is passed to the first step (10100). The positive AI detections (10115, 10116, and 10117) are propagated from one step to the next in the forward pass. The last step (10102) of the workflow/pipeline produces AI detected events (10103). These AI detected events (10103) are then passed to an event validation step (10104) conducted by a hybrid system. This step ensures filtering out of the erroneous detections, i.e. False Positive AI Detections (10121, 10122, and 10123).
The True Positive AI Detections, as validated by the specialists or auditors, are then released as dynamic events and/or objects of interest (10105). The False Positive AI Detections (10121, 10122, and 10123) are then passed to the crowdsourced annotation network for validation (10106, 10107, and 10108) and annotation (10109, 10110, and 10111) of AI labels using the LIFO/backpropagation technique. The data now flows from the last step (10102) to the first step (10100), ensuring correction of AI detections at the earliest level. The corrected labels from the crowdsourced annotations are then used for training the models and are stored as the training data (10112, 10113, and 10114). The False Positives (10121, 10122, and 10123) from each step are given a higher priority and are sent for correction first to the annotation network. This ensures that the model is improved first to reduce the false positive detections (10121, 10122, and 10123). The Negative Detections (10118, 10119, 10120) here are the cases where AI didn't detect any objects in the data record; this is the sum of False Negatives and True Negatives. Once the work on False Positives (10121, 10122, and 10123) is completed, these negative detections (10118, 10119, and 10120) are passed to the crowdsourced annotation network for correction. The corrections supplement the training data (10112, 10113, and 10114) for the models. In this way, the LIFO/backpropagation technique is used to collect training data (10112, 10113, and 10114) while also generating dynamic events and/or objects of interest (10105) in the workflows/pipelines.
[0166] Figure 11 illustrates the AI Retraining Engine used for model training in cycles. It represents the advanced techniques used to train the models, specifically the use of Knowledge Distillation (11102) and Data Sampling through weighted backpropagation (11105). By mimicking the teacher model (11101, 11107), the student model (11103, 11109) learns the important features, resulting in a performance boost. This enables the use of a faster, smaller model (11103, 11109) to provide speedy AI detections at high accuracy. In transfer learning hypertuning (11104, 11110), the model utilizes the past weights as the initialization and experiments with different parameters, such as batch size, training schedulers, and more, to determine the settings leading to the best performance. This permits saving only the data for the current training cycle and reduces the training time and the costs associated with it. This process automates the model experimentation and training process, avoiding the need for AI expert intervention. The data weight sampling is performed using weighted backpropagation (11105), where weights are given to each data point separately. This allows the training process to give more weightage to the data points which bring the biggest impact on the model's performance. In this figure, the first cycle's data (11100) is used to train the teacher model (11101) and then the student model (11103) through Knowledge Distillation (11102). These models are then used to sample data weights on the second cycle data (11106) through weighted backpropagation (11105). The second cycle data (11106) is then used to train the teacher model in the second cycle (11107) and then the student model in the second cycle (11109) using Knowledge Distillation (11108). The models in the second cycle use transfer learning hypertuning (11104, 11110) on the models from cycle 1, using them as the base to learn in cycle 2.
This process improves the models' performance in the next cycle. The AI Retraining Engine is thus able to improve model performance over training cycles.
[0167] Figure 12 illustrates the inference engine for AI processing, wherein the models generated from the training process are used. The use of Knowledge Distillation in the training process results in a smaller, faster student model (12101). This model is used to balance the load using the load switch (12100). When the load is high, a turbo mode enables the use of the faster student model (12101), producing AI detections rapidly for the cloud server (12103) and meeting the high demand. When the load reduces, the processing framework can automatically switch back to the highly accurate teacher model (12102) for producing inferences for the cloud server (12103).
[0168] Figure 13 illustrates the weighted backpropagation technique, which is used to assign data importance weights to data samples. In this case, the model from the previous cycle (13100) is first evaluated on the data for the present cycle (13101). The samples on which the AI model from the previous cycle (13100) failed or performed worse (13103) are given more weightage, so as to let the AI models learn better. The samples on which the AI model from the previous cycle (13100) performed better (13102) are given less weightage but are included to help the AI learn better features. These sample weights are adjustable as training parameters, and thus the experimentation and training process results in an improved model (13104) in the present cycle.
[0169] Figure 14 illustrates the combined flow of the Smart Data Collection (14111), Annotation Network (14114), LIFO/Backpropagation (14113), and AI Training (14115).
[0170] The Smart Data Collection (14111) framework enables crowdsourced data gathering by using AI on mobile electronic devices (14100). The AI models identify data containing relevant information and start capturing data, which is then passed to the server.
[0171] The crowd-sourced data obtained from the Smart Data Collection (14111) framework is passed to AI workflows/pipelines (14112) for processing. These workflows/pipelines (14101, 14102, 14103) process the data using multiple AI models and produce AI detected events (14106).
[0172] The AI detected events and/or objects of interest (14106) are validated (14104) following the LIFO/Backpropagation (14113) framework. The AI detections are passed to the Annotation Network (14114), wherein specialists or auditors (14105) validate and provide corrected labels for the data. The valid AI detections are passed on as dynamic events and/or objects of interest (14106) and are available for end customers to use.
[0173] The false AI detections, with their corrected labels, are used as training data (14107). The training process is triggered based upon the number of data points collected by the LIFO/Backpropagation (14113) framework.
[0174] The training process (14108) is completed through the One-Step-Button framework (14115). After the training process (14108) is completed, the models undergo A/B testing (14109) to determine the best-performing model over the previous training cycles. This procedure happens on unseen test data, which is used to quantify the model results.
[0175] The final model is then deployed (14116) back into the system, i.e. it is updated (14110) in its respective places in the AI workflows/pipelines (14112) and the Smart Data Collection framework (14111).
This results in improved AI detections and reduces the work done by the human capital.
[0176] Overall, this represents how the process of improving AI models using crowdsourcing techniques is automated, providing continuous delivery of improved AI models while producing AI dynamic events and/or objects of interest in real-world scenarios.
[0177] Figure 15 illustrates the use of AI processing on mobile electronic devices for smart data collection. First, the user lands on the driver screen (15100) on the mobile electronic device. The detection of the user's location (15101) starts a location validator task (15102) and sends the user's location to the server (15103) to track it. Based on the location or geographical region, the backend server (15104) returns a workflow/pipeline. After receiving the workflow/pipeline, the application on the mobile electronic device starts the camera service (15105) and streams frames. Upon receiving a new frame (15107), the device temperature (15109) is checked first. If the temperature is above the threshold, the frame is dumped (15106). If the temperature is below the threshold, RAM/compute availability (15110) is checked on the mobile electronic device. If the available compute is low, the current frame is dumped (15108). If compute is available, the frame is prepared for the AI model, which involves resizing and preprocessing it (15111). The AI Model is then executed (15112) on the frame. If no detections are produced (15113), the frame is considered junk and is dumped (15108). If detections are produced (15114), they are validated against the rules (15115). If the detections don't match the rules, the object is considered invalid (15116) and the frame is dumped (15108). If the detections are valid, the workflow is checked for any further AI models (15117). In the case of child models, the flow then crops out objects and preprocesses them for the next model (15118). Once the last model step has been reached and valid AI detections are produced, the frame parsing is stopped (15121) and data collection is triggered (15120). A ten-second video is then recorded and saved (15119). This process of using AI on mobile devices thus allows smart data collection without having specialist users collect the data manually. This automatic process is thus able to scale massively and enable huge crowdsourced data collection.
[0178] Figure 16A illustrates True Positive Detections by AI models. In this example, the AI model is able to detect "Car" objects (16101, 16102) in the input image (16100) and provide the correct labels for them in the output frame (16103). The AI model is confident in its detected results, and since the results are correct, this is a true positive AI detection.
[0179] Figure 16B illustrates False Positive Detections by AI models. In this example, the AI model wrongly identifies "Bus" (16201, 16202) instead of "Car" in the input image (16200). The AI model is confident in its detected results on the output frame (16203), but since the results are incorrect, this is a false positive AI detection.
[0180] Figure 16C illustrates a False Negative Detection by AI models. In this example, the AI model misses detecting "Cars" in the input frame (16300) and produces the output frame (16301) without any detections. This is a false negative AI detection, since in this example the objects are indeed present but the AI result of no objects is incorrect.
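To make the true positive / false positive / false negative categories of Figures 16A-16C concrete, the sketch below classifies predicted boxes against ground truth with an intersection-over-union (IoU) test; the 0.5 threshold and greedy matching are conventional illustrative choices rather than the system's stated procedure.

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def classify_detections(predictions, ground_truth, iou_thr=0.5):
    """Matched predictions are true positives, unmatched predictions are
    false positives, and unmatched ground-truth objects are false negatives
    (the three cases of Figures 16A-16C)."""
    unmatched_gt = list(ground_truth)
    tp, fp = [], []
    for pred in predictions:
        best = max(unmatched_gt, key=lambda g: iou(pred, g), default=None)
        if best is not None and iou(pred, best) >= iou_thr:
            tp.append(pred)
            unmatched_gt.remove(best)
        else:
            fp.append(pred)
    return tp, fp, unmatched_gt  # unmatched_gt are the false negatives
```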
[0181] The detections of AI models are used to create an event. Since the cost of sending out erroneous AI detections as dynamic events and/or objects of interest is extremely high, it becomes imperative to prioritize validating the positive AI detections first. This validation process starts from the last step in the LIFO/backward propagation technique and is thus able to trim out the false positives.
[0182] Figure 17A illustrates the flow of data (17100) collected through the workflow/pipeline for detecting seatbelt-not-worn violation events (17105). In this case, the workflow/pipeline is composed of multiple AI models used in work steps (17101, 17102, and 17103), where each work step localizes information and filters out irrelevant data. The data (17100) is first processed through the Vehicle Detector work step (17101), which receives all collected videos and filters out videos that don't have any vehicles present. The detected vehicles are then processed through the second work step (17102) of Person Detection, wherein the model passes on only the vehicle data which have persons inside. This data is then processed through the third work step (17103) of seatbelt segmentation. The third work step (17103) detects whether a person is wearing a seatbelt or not and produces events in case the person is not wearing a seatbelt. These detected events are passed through an event validation step (17104). The event validation step (17104) filters out the erroneous AI detections and is thus able to produce true events (17105). Since the prospective AI detected events need to be validated (17104) before being sent out as true events (17105), it is highly crucial to reduce the quantity of data reaching this step. Breaking the workflow/pipeline into work steps (17101, 17102, and 17103) allows this data reduction. This ensures that the specialists or auditors involved in the event validation step (17104) need to do the least amount of manual work in order to produce true events (17105).
[0183] Figure 17B illustrates the forward cycle in a workflow/pipeline, wherein data from an input data source (17200) flows through the workflow comprising multiple work steps (17201, 17202, 17203), where a work step is the smallest unit of processing using an AI model. Here, the data goes to the first work step (17201), then to the second work step (17202), and then to the third work step (17203). The last work step (17203) produces prospective events, which undergo an event validation step (17204). The valid AI detections are then produced as true events (17205) from the workflow/pipeline for the problem vertical.
[0184] Figure 17C illustrates the backward cycle in a workflow/pipeline for a problem vertical. The data from the forward cycle is validated at the event validation step (17300). The false AI detections are then backpropagated from the last AI work step (17301) to the second AI work step (17303) and then to the first AI work step (17305) in the workflow/pipeline. At each stage, the annotation network provides the correct data labels for the false AI detections, which are stored in the DB (17302, 17304, 17306) for the respective work steps.
[0185] Figure 18 illustrates the use of the LIFO/backward propagation technique to gather corrected data labels through the annotation network and automatically trigger training of AI models.
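One hedged way to realize the prioritization described in paragraph [0181] — frames with AI detections (false-positive candidates) validated before frames without detections, and later work steps before earlier ones — is a simple sort key over pending validation tasks, sketched below with a hypothetical task record.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ValidationTask:
    record_id: str
    step_index: int      # position of the work step in the pipeline (0 = first)
    has_detection: bool  # True if the AI produced annotations (FP candidate)

def prioritize(tasks: List[ValidationTask]) -> List[ValidationTask]:
    """Potential false positives come before potential false negatives, and
    within each group, later work steps are validated first (LIFO order)."""
    return sorted(tasks, key=lambda t: (not t.has_detection, -t.step_index))

queue = prioritize([
    ValidationTask("a", step_index=0, has_detection=False),
    ValidationTask("b", step_index=2, has_detection=True),
    ValidationTask("c", step_index=1, has_detection=True),
])
# -> "b" (last step, detection) is validated first; "a" (no detection) last.
```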
[0185] The figure 18 illustrates the use of the LIFO/backward propagation technique to gather corrected data labels through the annotation network and automatically trigger training of AI models.

[0186] The workflow/pipeline for the problem vertical is divided into AI work steps (18100, 18101, and 18102), where these work steps use AI models to process data and produce prospective events. These events undergo an event validation step (18103), and the true AI detections are produced as valid events (18104). The false AI detections undergo a backward cycle starting from the last work step (18102) and ending at the first (18100). At each work step (18102, 18101, 18100) there is an annotation step (18105, 18108, 18111) respectively, wherein each annotation step produces the corrected data labels, which are stored in the database (DB) (18106, 18109, 18112) respectively. These data labels are used for training the AI models at each AI work step (18100, 18101, and 18102). Once the collected training data in the DB (18106, 18109, 18112) reaches predefined thresholds, the training for the model is triggered (18107, 18110, 18113) respectively.

[0187] In this manner, the LIFO/backward propagation technique enables automatic training data gathering for AI models and triggers retraining of AI models.

[0188] The figure 19A illustrates the use of optical flow in the smart data collection process. In this case, the optical flow vector remains constant over the frames (19100, 19101, and 19102) as no movement is detected, and hence the data gathering process is not triggered (19103).

[0189] The figure 19B illustrates the use of optical flow in the smart data collection process. In this example, the vehicles as well as the camera source are moving over the frames (19200, 19201, 19202), which leads to an increasing optical flow vector. This increasing optical flow vector triggers the data gathering process (19203).

[0190] Optical flow thus makes it possible to automatically trigger data capture when motion is detected in the scene. This identification of motion ensures that only relevant data is collected and sent to downstream systems for further analysis, reducing the burden on the server.
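A minimal sketch of the optical-flow trigger of figures 19A and 19B follows, assuming OpenCV is available on the device; the motion threshold and function name are illustrative assumptions.

```python
import cv2
import numpy as np

MOTION_THRESHOLD = 2.0  # assumed mean flow magnitude (pixels/frame) that triggers capture

def motion_detected(prev_frame, frame):
    """Return True when dense optical flow indicates scene motion (figure 19B)."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)  # per-pixel flow magnitude
    return float(magnitude.mean()) > MOTION_THRESHOLD

# Usage: trigger data gathering (19203) only when motion is present, so that
# static scenes (figure 19A) never reach the downstream systems.
```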
[0191] The figure 20 illustrates the use of different AI models downloaded on mobile electronic devices over different zones or geographical regions. Allowing devices to download AI models from server systems based on the geographical area allows the capture of relevant data from the areas of interest. In this example, zone 1 (20100) is used for detecting potholes, zone 2 (20101) is used for detecting garbage and petrol pumps, zone 3 (20102), a subzone of zone 2 (20101), is used for detecting seatbelt violations, and zone 4 (20103) is used for detecting billboards. This smart data collection process thus allows further control over the data collected from different geographical areas, providing refined, relevant data from those areas in an automated manner.

[0192] The figure 21 illustrates the sampling process of AI models on mobile electronic devices for smart data collection. First, workflows/pipelines are downloaded (21100). The camera service is started (21101) on the mobile electronic device and the device temperature is checked (21102). The temperature is checked against different thresholds: below 42 degrees Celsius (21103), at 42 degrees Celsius (21104), at 43 degrees Celsius (21105), at 44 degrees Celsius (21106), and between 44 and 50 degrees Celsius (21107). These temperature checks then set the sampling rate of the AI models, specifically one second (21113), two seconds (21112), three seconds (21111), six seconds (21110), and ten seconds (21109) respectively for the previously defined temperature settings. These sampling rates control the execution of the AI models (21114). Once the temperature of the device exceeds 50 degrees Celsius (21108), the camera service is stopped (21115). In this manner, controlling the sampling rate of the execution of AI models (21114) optimizes for the resources available on the mobile electronic devices, ensuring data collection over long periods of time.

[0193] The figure 22 illustrates the use of interpolation techniques in an annotation network to determine object locations over video frames. Interpolation uses the detections (22101, 22102, 22111, and 22110) made by specialists or auditors (22112) on a start frame (22100) and an end frame (22109) and is able to determine the location of the objects (22104, 22105, 22107, and 22108) on all the frames (22103, 22106) in between. This reduces the manual effort of the specialist (22112) in providing correct bounding box locations for objects in videos or sequences of frames.
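The interpolation of figure 22 can be sketched as plain linear interpolation between the two specialist-annotated keyframes; the (x, y, w, h) box format and function name below are assumptions for illustration.

```python
def interpolate_boxes(start_box, end_box, num_between):
    """Linearly interpolate (x, y, w, h) boxes between two annotated keyframes.

    start_box / end_box: boxes drawn by the specialist on the first and last
    frame of the sequence (22100, 22109); num_between: frames in between.
    """
    boxes = []
    for i in range(1, num_between + 1):
        t = i / (num_between + 1)  # fraction of the way from start to end
        boxes.append(tuple(
            (1 - t) * s + t * e for s, e in zip(start_box, end_box)
        ))
    return boxes

# Example: a car annotated at (10, 20, 80, 40) on frame 0 and (70, 20, 80, 40)
# on frame 3 gets automatic boxes at x = 30 and x = 50 on frames 1 and 2.
interpolate_boxes((10, 20, 80, 40), (70, 20, 80, 40), 2)
```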
[0194] The figure 23 illustrates the AI retraining process (23101), which produces models for different production environments in one go. In this case, the training data (23100) is used to train models (23105), wherein quantized models (23106) are deployed on mobile electronic devices (23102), small student models (23107) are deployed on edge devices (23103) where compute is low, and complex, highly accurate models (23108) are deployed on large computer servers (23104) for processing of data in workflows/pipelines. This automates the delivery of models for the different scenarios in one go.

[0195] The combination of these techniques results in an automated way of training models in a highly cost- and resource-efficient, rapid manner, and yields smaller, faster models.

[0196] The AI and human consensus based verification network has been architected in such a way as to allow any device with an interactive display to connect to it easily using an app and begin the verification task. A central server hosts all the data churned out by the AI system, which is sent to the individual devices connected to the network.

[0197] All these operations, comprising data validation and annotation, are performed on a mobile device. Since mobile phones are low-compute devices and nearly everyone is equipped with one, it becomes feasible to distribute the data amongst people having mobile phones. This distribution of data makes it easier to get the data validated and annotated due to the consensus mechanism present. The Android application can also be used to monitor the entire process, from data being fetched from the server to being sent back to the servers for the AI system to train on it and deploy automatically.

[0198] The crowdsourced data annotation method described above consists of three stages. The first two stages involve the validation of AI model predictions on the consensus application and the re-labeling or classification on the canvas application. The third stage is the majority voting or implicit consensus logic that is applied to all data reviewed on these mobile applications. The major advantage of having this distributed system on the mobile platform is that the annotation platform is readily available, inexpensive, and easily deployable. The continuous majority voting scheme in the system according to the present invention ensures that properly annotated data is churned out for retraining deep learning models. This procedure has been described in detail in the preceding priority application.
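One way to picture the majority-voting stage is the following sketch, assuming each datum is reviewed by several mobile users; the quorum size and agreement fraction are illustrative choices, not values fixed by the disclosure.

```python
from collections import Counter

def consensus_label(votes, quorum=3, agreement=0.5):
    """Return the majority label once enough reviews are in, else None.

    votes: labels submitted by independent reviewers for one datum.
    A label wins when more than `agreement` of at least `quorum` votes pick it.
    """
    if len(votes) < quorum:
        return None  # keep the datum in review until the quorum is met
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) > agreement else None

# e.g. consensus_label(["car", "car", "bus"]) -> "car";
#      consensus_label(["car", "bus"]) -> None (quorum not yet reached)
```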
[0199] Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.

[0200] Throughout the following discussion, numerous references will be made regarding servers, services, interfaces, engines, modules, clients, peers, portals, platforms, or other systems formed from computing devices. It should be appreciated that the use of such terms is deemed to represent one or more computing devices having at least one processor (e.g., ASIC, FPGA, DSP, x86, ARM, ColdFire, GPU, multi-core processors, etc.) configured to execute software instructions stored on a tangible, non-transitory computer-readable medium (e.g., hard drive, solid state drive, RAM, flash, ROM, etc.). For example, a server can include one or more computers operating as a web server, database server, or other type of computer server in a manner to fulfill described roles, responsibilities, or functions. One should further appreciate that the disclosed computer-based algorithms, processes, methods, or other types of instruction sets can be embodied as a computer program product comprising a non-transitory, tangible computer-readable medium storing the instructions that cause a processor to execute the disclosed steps. The various servers, systems, databases, or interfaces can exchange data using standardized protocols or algorithms, possibly based on HTTP, HTTPS, AES, public-private key exchanges, web service APIs, known financial transaction protocols, or other electronic information exchanging methods. Data exchanges can be conducted over a packet-switched network, a circuit-switched network, the Internet, LAN, WAN, VPN, or other type of network.

[0201] Data transmission or data communication is the transfer of data over a point-to-point or point-to-multipoint communication channel. Examples of such channels are copper wires, optical fibers, wireless communication channels, storage media, and computer buses. The data are represented as an electromagnetic signal, such as an electrical voltage, radio wave, microwave, or infrared signal.

[0202] The terms "configured to" and "programmed to" in the context of a processor refer to being programmed by a set of software instructions to perform a function or set of functions.

[0203] The following discussion provides many example embodiments. Although each embodiment represents a single combination of components, this disclosure contemplates combinations of the disclosed components. Thus, for example, if one embodiment comprises components A, B, and C, and a second embodiment comprises components B and D, then the other remaining combinations of A, B, C, or D are included in this disclosure, even if not explicitly disclosed.

[0204] Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the exemplary embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the exemplary embodiments of the invention.

[0205] The various illustrative logical blocks, modules, and circuits described in connection with the exemplary embodiments disclosed herein may be implemented or performed with a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[0206] The steps of a method or algorithm described in connection with the exemplary embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.

[0207] Some or all of these embodiments may be combined, some may be omitted altogether, and additional process steps can be added while still achieving the products described herein. Thus, the subject matter described herein can be embodied in many different variations, and all such variations are contemplated to be within the scope of what is claimed.
[0208] All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., "such as") provided with respect to certain embodiments herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention otherwise claimed. No language in the specification should be construed as indicating any non-claimed element essential to the practice of the invention.

[0209] Various terms are used herein. To the extent a term used in a claim is not defined herein, it should be given the broadest definition persons in the pertinent art have given that term as reflected in printed publications and issued patents at the time of filing.

[0210] Preferred embodiments are described herein, including the best mode known to the inventor for carrying out the claimed subject matter. Of course, variations of those preferred embodiments will become apparent to those of ordinary skill in the art upon reading the foregoing description. The inventor expects skilled artisans to employ such variations as appropriate, and the inventor intends for the claimed subject matter to be practiced otherwise than as specifically described herein. Accordingly, this claimed subject matter includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed unless otherwise indicated herein or otherwise clearly contradicted by context.

CLAIMS

1. An electronic network and a method of LIFO/backpropagation to verify, retrain, and improve the accuracy of one or more Artificial Intelligence (AI) models in a pipeline in the shortest time frame, using evaluation of the outputs of the AI models starting with the last AI model output in the pipeline and going backwards towards the output of the first AI model in the pipeline, wherein the said electronic network comprises: a crowdsourcing network wherein specialists or auditors and data collectors work in coherence to achieve dynamic events and/or objects of interest and retraining of AI models, wherein the verification/validation of the problem statement / end result / event as well as the annotation is done by a method of consensus among the masses for higher levels of accuracy; an electronic mobile device, which displays the data received from the network and allows specialists or auditors to review the data and correct any wrong data; wherein the electronic network sends for verification AI inferences based on proximity to end results of an AI pipeline composed of a plurality of AI models; wherein the end results are detection of events / dynamic data / objects of interest; and wherein the detections incorrectly marked by artificial intelligence engines are first validated, then corrected/annotated by specialists and auditors for AI retraining and deployment.

2. The electronic network and method of claim 1, wherein the AI annotations are verified and incorrect annotations are corrected by a hybrid system involving specialists and auditors, and true AI annotations result in dynamic events and/or objects of interest for the problem vertical.

3. The electronic network and method of claim 2, wherein the AI annotations from the last model in the pipeline/workflow are sent first for validation and correction using Last to First propagation.

4. The electronic network and method of claim 3, wherein the false positives of the AI detections are given higher priority over the false negatives for validation and correction by the crowdsourced annotation system, to ensure that each work step model improves, resulting in the earliest overall improvement for the entire pipeline/workflow.

5. The electronic network and method of claim 4, wherein the corrected data points from the annotation network are collected and used for retraining the AI models used in the pipeline/workflow for the problem verticals, wherein the retraining of AI models is triggered when the collected data points using the LIFO/backward propagation technique reach pre-defined thresholds for the problem verticals, resulting in automated retraining of AI models.

6. The electronic network and method of claim 5, wherein the events of interest are related to detection of relevant information or objects in a vertical of a pipeline/workflow or an event of interest in a vertical.

7. The electronic network and method of claim 6, wherein the AI models are neural network models used for tasks involving segmentation, classification, character recognition, object detection, and identification.

8. The electronic network and method of claim 7, wherein the improvement of the AI model is performed by annotation by a specialist or auditor on every incorrect result by the AI model, thereby allowing retraining of the AI models.

9. A system and method of One Step Button (OSB) to retrain and improve the accuracy of Artificial Intelligence (AI) neural network based models and deploy them in varied production environments, the system and method comprising: a storage space divided into temporary and permanent storage spaces, wherein data and interim training snapshots are stored in temporary storage and final trained models are stored in permanent storage; a scheduler service that spawns training requests for one or more models in a pipeline, wherein the data collected in the data buckets through the LIFO/backward propagation system automatically triggers these requests when the training data reaches predefined thresholds for the various AI models for the various AI workflows/pipelines; a trainer module that trains models in one go for varied production environments, including quantized models for electronic mobile devices, small student models for small compute edge devices, and complex, highly accurate models for large compute server systems; and an A/B testing module that tests the trained model against previous iterations and deploys the highest performing model automatically in the processing workflows/pipelines.

10. The system and method of claim 9, wherein the training of AI models is performed using student-teacher relationships, wherein the smaller student models mimic the higher-performing teacher models, resulting in a performance boost for the smaller models.

11. The system and method of claim 10, wherein the smaller models enable a turbo inference mode, in which the smaller student models are used to process data at faster speeds, and the turbo inference mode is automatically enabled during high load scenarios, ensuring high throughput for AI processing even during high load scenarios.

12. The system and method of claim 11, wherein the annotated data collected through the LIFO/backward propagation method are given more weightage in the training or retraining process.

13. The system and method of claim 12, wherein the trainer module uses K-fold splits of data to train, test, and tune the models, to determine parameters for a high-performing model.

14. An electronic mobile device to infer on data for crowdsourcing of data and verify the AI detections as part of the annotation network, the electronic mobile device comprising: one or more storage devices having embodied therein one or more routines performing AI processes; and one or more processors coupled to the one or more storage devices and operable to execute the one or more modules, wherein the one or more modules include: a data capture module, to identify and automatically capture the images or videos or regions of interest related to the problem vertical according to pre-defined rules; a receiving module, which receives the continuously updated dynamic events and/or objects of interest on the electronic mobile device from the capture module; a selection module to select the received images or videos on the electronic mobile device related to the problem vertical; an annotation module to let a user annotate and label the selected images or videos on the electronic mobile device; a data storage module, which stores the annotated images or videos or series of one or more images or videos with the desired dynamic events and/or objects of interest; and a transmitter module, which transmits the AI annotations and the corresponding data to the servers in the artificial intelligence based network.

15. The electronic mobile device of claim 14, which utilizes a camera and other available sensors to trigger data gathering after identifying dynamic events and/or objects of interest as required by the problem verticals.

16. The electronic mobile device of claim 15, which downloads from the server system and uses one or more AI workflows/pipelines comprising multiple models in each workflow/pipeline, wherein the workflows/pipelines are executed in parallel and sequential combinations to detect different dynamic events and/or objects of interest.

17. The electronic mobile device of claim 16, which uses Optical Flow to determine if the scene is in motion before the scene is processed using AI processing, optimizing the use of resources and AI inferencing for gathering the needed information for the problem verticals.

18. The electronic mobile device of claim 17, wherein different AI workflows/pipelines are downloaded on the electronic mobile device based on the different geographical locations and are used for identifying dynamic events and/or objects of interest for varied problem verticals in the geographical locations/zones.

19. The electronic mobile device of claim 18, wherein AI processing is used to enhance the annotations of the specialists and auditors by snap-fitting the drawn bounding boxes to the dynamic events and/or objects of interest, reducing the manual effort for the specialists and auditors in the annotation process.

20. The electronic mobile device of claim 19, wherein AI processing is used as a first pass annotator and the human annotation network is used to correct the AI annotations, reducing the manual effort for the specialists and auditors in the annotation process.

21. The electronic mobile device of claim 20, wherein the data collected using AI processing feeds the workflows/pipelines in the system for the problem verticals, and the annotations enable AI model retraining.

22. The electronic mobile device of claim 21, wherein the data collectors and specialists or auditors are divided into classes depending upon the quality and aptness of the data for relevant dynamic events and/or objects of interest and upon providing quality annotations, providing a measure of importance for the data to be processed, enabling optimization of resources for processing of AI model inference in the one or more pipelines.

23. The electronic mobile device of claim 22, wherein both front and back camera sensors present in mobile devices are used to gather data after identifying relevant information of interest.

24. The electronic mobile device of claim 23, wherein one or more device properties are monitored, wherein the properties include temperature, storage space, compute resource availability, and network bandwidth, to control the sampling rate for executing various AI models for various AI workflows/pipelines, thus optimizing the performance over the electronic mobile device resources and enabling long usage of the device for detecting dynamic events and/or objects of interest.

25. The electronic mobile device of claim 24, wherein specialists or auditors annotate objects on a sparse number of frames (typically the start and end frames) in a sequence of frames or videos, and the position of the objects in all the in-between frames is interpolated automatically, wherein the interpolated positions can be altered later, thus minimizing the workload required to annotate objects in videos.

26. The electronic mobile device of claim 25, wherein a marketplace of data is available to end users to subscribe to different dynamic events and/or objects of interest for varied problem verticals, and wherein the data collectors can trigger data collection for these dynamic events and/or objects of interest on demand using the scout mode in the electronic mobile device.