AU2023202418A1 - IntegTechPLUS - Further extensions for distributed block chain systems enabling systemizing and processing of distributed components including with temporal, spatial and or ordinally displaced inputs including data and stream aggregations in fast and closely coupled architectures (especially but not limited to quantum processors and AI learning & de/constructions) with : process model, standard interface needs for implementing and managing various types of processing pipelines with block chain architecture able to also coordinate authenticable calculations.
- Publication number
- AU2023202418A1
- Authority
- AU
- Australia
- Prior art keywords
- data
- processing
- block chain
- separate
- learning
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/21—Design, administration or maintenance of databases
- G06F16/215—Improving data quality; Data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
- G06F8/36—Software reuse
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/50—Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/60—Protecting data
- G06F21/64—Protecting data integrity, e.g. using checksums, certificates or signatures
- G06F21/645—Protecting data integrity, e.g. using checksums, certificates or signatures using a third party
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/10—Office automation; Time management
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L9/00—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
- H04L9/50—Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols using hash chains, e.g. blockchains or hash trees
Abstract
Further extensions for distributed block chain systems enabling systemizing and processing of distributed components, including with temporal, spatial and/or ordinally displaced inputs, including data and stream aggregations, in fast and closely coupled architectures (especially but not limited to quantum processors and AI learning & de/constructions), with: a process model and standard interface needs for implementing and managing various types of processing pipelines with a block chain architecture able to also coordinate authenticable calculations.
Description
IntegTechPLUSwd AUSTRALIA Patents Act 1990
REFERENCE TO RELATED APPLICATIONS
Algo: separate Australian provisional patent IP#2023900667
ArrProc: separate Australian provisional patent IP#2023900576
IntegPplz: separate Australian standard patent IP#2023201205
IntegTech: separate Australian standard patent IP#2023201021
AsSecPAI: separate Australian provisional patent IP#2023900320
ConfidOrder: separate Australian provisional patent IP#2023900226
PoA: separate Australian provisional patent IP#2023900197
TranVec: separate Australian provisional patent IP#2023900144
Heap: separate Australian provisional patent IP#2023900027
ClarZ: separate Australian provisional patent IP#2023900010
C-oops: separate Australian provisional patent IP#2022903846
EQoStaple: separate Australian provisional patent IP#2022903800
PAL: separate Australian provisional patent IP#2022903799
ActiPALQ: separate Australian provisional patent IP#2022903798
ParCent: separate Australian provisional patent IP#2022903608
CleanStaple: separate Australian provisional patent IP#2022903460
DevBauxSoak: separate Australian provisional patent IP#2022903444
DevAuxSoak: separate Australian provisional patent IP#2022903315
QMake-GenPofAdd: separate Australian provisional patent IP#2022259783
ShyAuBotlntX: separate Australian standard patent IP#2022228224
DaoCanRock: separate Australian standard patent IP#2022228135
ShingleMake: separate Australian standard patent IP#2022224795
AuMake: separate Australian standard patent IP#2022215299
AuTic8: separate Australian provisional patent IP#2022902158
ProTic8: separate Australian provisional patent IP#2022901447
QMake: separate Australian standard patent IP#2022209202
DigEMake: separate Australian standard patent IP#2022209201
Optilso: separate Australian standard patent IP#2022205250
Voxerve: separate Australian provisional patent IP#2022901956
BACdiv: separate Australian standard patent IP#2022201926
DetPatDiv: separate Australian standard patent IP#2022203235
DigiKeyP: separate Australian provisional patent IP#2021901862
ParkCare: separate Australian provisional patent IP#2021902928
H4Z: separate Australian standard patent IP#2021261831
FunjMakBrake: separate Australian provisional patent IP#2021901953, standard patent IP#2022202704
DigETrace: separate Australian provisional patent IP#2022900156
CGro: separate Australian provisional patent IP#2022900489
AuBotlntX: separate Australian provisional patent IP#2022900696
DigiKeyPLUS: separate Australian provisional patent IP#2022900409
MakTrck: separate Australian provisional patent IP#2021902872
MkMkT: separate Australian provisional patent IP#2021904151
MkMkTtrailer: separate Australian provisional patent IP#2022900410
SpecQBook: separate Australian provisional patent IP#2021902885
TickTraceE: separate Australian provisional patent IP#2021901140
JicCrack(TT+): separate Australian provisional patent IP#2022900980
DaoCon: separate Australian provisional patent IP#2022900007
TickerTag: separate Australian provisional patent IP#2021902946
SafeXShare: separate Australian standard patent IP#2021900281
PartnerinTime: separate Australian provisional patent IP#2021900732
PredEPrev: separate Australian standard patent IP#2021218217
HoHoBal: separate Australian provisional patent IP#2021903942
DigEMed: separate Australian provisional patent IP#2022900074
ProTic8: separate Australian provisional patent IP#2022201447
Shyny: separate Australian provisional patent IP#2022900769
EAllIn: separate Australian provisional patent IP#2022900883
ECool: separate Australian provisional patent IP#2022900770
QMo: separate Australian provisional patent IP#2022900256
Enco: separate Australian provisional patent IP#2021900934
FlowMake: separate Australian standard patent IP#2021286396
GlobMake: separate Australian provisional patent IP#2021904017
SourceMonitor: separate Australian provisional patent IP#2021900153
SauceMonitor: separate Australian innovations patent IP#2021104349
TickTrackPLUS: separate Australian standard patent IP#2021232845
TranspairPLUS: separate Australian standard patent IP#2021236587
SafeEShare: separate Australian innovations patent IP#2020100425
DiagML: separate Australian innovations patent IP#2020901672
PCMF: separate Australian innovations patent IP#2020902071
IncentiHealth: separate Australian innovations patent IP#2021103255
BAC: separate Australian provisional patent IP#2020902146, innovations patent IP#2021103348
DetPat: separate Australian standard/divisional patent IP#2021202215 (prioritized by IP#2018100540)
AN ONLINE TOOL ALLOWING COMMERCIAL COLLABORATIONS...: separate Australian innovations patent IP#2018100747
FIELD OF THE INVENTION Embodiments of the present invention apply to any field of distributed information system processing. They are most useful where one or more stages or paths of computation potentially include large array operations that must be processed as fast as possible (especially where processing, now or in the future, might occur on quantum processor devices or other fast platforms), and where additional steps are needed to take advantage of processed staging information, including catering for signal/event delivery delays on significant inputs into critical calculations. This covers cases where there is 'live' sharing of inputs producing outputs that are sensitive to those inputs or to delays in particular inputs; where there are other significant timing constraints on coordinating the processing of data to or from other locations or stages of calculation relative to other necessary data; and where more distributed processing of those stages or paths across devices necessarily involves communicated movement of processed data (potentially with undesirable delays) to or from any of those locations, with any of this coordination being appropriately implementable by block chain techniques.
Generally this would include, but is not limited to, staged matrix processing where speed is critical and successful outcomes depend on possibly large amounts of data involving one or more of: multiple live coordinated, staged or series-aggregated, potentially multi-dimensional calculations of critical or essential vectors or items of data, including data produced by AI learning.
This specification details a design for the level of robust data controls required for high-value AI learning related to business models and arrangements that impact people in potentially devastating but also highly beneficial ways. To that end it includes example specifications to enable just such a data model that in normal circumstances would be considered unpatentable, i.e. a prior-art summary work-contract data model that could usefully be adopted as the basis of an international standard for managing remuneration, conditions and performance for people's work under contracts, including the special relationship called employment. (More "safety"-related examples could also have been used, but they would unfortunately not have been as universally appreciated.) It also includes a device-based interface specification that has not yet been claimed.
For the level of robustness needed to enhance such high-impact data models, AI learning needs to be carefully controlled. This particularly includes the quality and biases of the data it learns from. For example, learning from contracts in less developed countries won't be sufficient for industries in advanced economies that have the benefit of union collaboration and regulations seeking equity as well as efficiency.
Thus this specification includes nuances to a previous extension, IntegTech (IP#2023201021), which included the ParCent (IP#2022903608) architecture design specification. This specification is a further addition to the ParCent and IntegTech models.
The specification, as a patent of addition, contains:
• Full inclusion of the specification of some recent provisionals:
o ArrProc (IP#2023900576) and
o Algo (IP#2023900667)
• Two as-yet-unasserted bibliographically associated applications as prior art:
o DigETrace (IP#2022900156), which includes a device, and
o an extension of PartnerinTime (IP#2021900732), being an already asserted example data model thought to require the types of protection targeted by other parts of this specification, but not the PartnerinTime specification itself; and
• A third recent as-yet-unasserted provisional specification, VOXserve (IP#2022901256), here also asserted as part of a total system and able to illustrate an example of actuation of a business model deserving robust data controls in both operation and in learning scenarios.
The ParCent architecture was designed to systemize enhancements to raw peer-to-peer processing in order to better take account of spatial displacements, with system processing enhancements implemented for each known spatial displacement. That is, it enables solutions to a particular complex problem for a particular (possibly only virtual) vector of possible spatial displacements.
In line with previous exploration of the need or desirability of possible standards supporting better and more efficient implementation and sharing of systems, QMakeCtrl (IP#2022259783) specified standard declaration structures that included, among other characteristics, maxallowablelagtime (milliseconds) or other variables proxying for a required poll frequency for any or all of models, datapoints and variables. That specification further suggested these could advantageously be maintained as Standard Model DigEStructs, with standardized Model DigEStruct functions for maintaining distributed models.
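As a concrete illustration, such a declaration structure could be sketched as below. Only the maxallowablelagtime (milliseconds) characteristic comes from the QMakeCtrl text; the other field names, the `kind` vocabulary and the derived poll-interval rule are assumptions for illustration, not part of any specification:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DeclarationStruct:
    """Hypothetical standard declaration record for a model, datapoint
    or variable, after the QMakeCtrl idea of declared lag limits."""
    name: str
    kind: str                                   # "model" | "datapoint" | "variable"
    max_allowable_lag_time_ms: int              # maxallowablelagtime (milliseconds)
    poll_frequency_hz: Optional[float] = None   # optional proxy variable

    def required_poll_interval_ms(self) -> float:
        # If an explicit poll frequency is declared, honour it; otherwise
        # derive one from the lag limit (poll at least twice per lag window,
        # an arbitrary illustrative rule).
        if self.poll_frequency_hz:
            return 1000.0 / self.poll_frequency_hz
        return self.max_allowable_lag_time_ms / 2.0
```

For instance, a datapoint declared with a 200 ms lag limit and no explicit frequency would be polled every 100 ms under this sketch.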
ParCent, looking more closely at the maintainability of distributed systems, suggested that maintainers and developers should have histories about operations, devices and modules, and therefore proposed these be accomplished via a common developer's environment (e.g. DigEOps) based on DigEStructs and sets of DigEfunctions related to systems maintenance and development functions (this grouping being proposed as the right place to add structure for implementing pipelines in a standard way).
ArrProc (IP# 2023900576) also suggested certain systemizations for frame of reference translations as well as array processing (but did not include time delays within that processing specification) that would most sensibly be embedded into DigEOps.
Algo (IP#2023900667) likewise specified techniques and structures for maintaining and choosing algorithms based on access to and management of technical sharing structures previously developed in Heap (IP#2023900027) as generalized structures for sharing knowledge. It specified a DATAREF grouping of DigEStructs. As an aside, for completeness: TranVec (IP#2023900144) dealt with variations in specification interpretation such as units, scaling and offsets, principally for improving prospects of interoperability, also using DATAREF DigEStructs.
This specification, IntegTechPLUSwd, extends the above (ParCent, ArrProc and Algo) by explicitly providing the information required to systemize temporal and ordinal displacements within those technical structures, so that subsequent algorithms can assemble signals/data into single processing stream(s). Although the extension references DATAREF-managed structures from the Algo specification, it is likely preferable to characterize them as belonging to the DigEOps grouping, on account of their being more often specific to the underlying hardware rather than to the application (though they may not always be irrelevant to the application).
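The idea of assembling displaced signals into a single ordered stream can be sketched as follows. This is a hypothetical illustration of systemized temporal displacement, not an interface from the specification; the function name, the per-source displacement table and the (timestamp, value) sample shape are all assumptions:

```python
import heapq

def assemble_streams(streams, displacements_ms):
    """Merge per-source lists of (timestamp_ms, value) samples into one
    time-ordered stream, first re-basing each source's timestamps by its
    known temporal displacement (lag)."""
    corrected = []
    for source, samples in streams.items():
        lag = displacements_ms.get(source, 0)
        # Shift each sample back by the source's known lag so that all
        # sources share one common timeline.
        corrected.append([(t - lag, source, v) for t, v in samples])
    # heapq.merge yields the union of the (already sorted) streams in
    # corrected-timestamp order.
    return list(heapq.merge(*corrected))
```

For example, a source known to arrive 20 ms late has its samples re-based 20 ms earlier before merging, so the assembled stream reflects when events actually occurred rather than when they were delivered.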
The model management environment of QMakeCtrl (IP#2022259783) already discusses a possible "dongle" device implementation (misrendered as "dingle" by an autocorrection typo in that text) for managing distributed devices, which means it would be capable of managing component sub-implementations across those distributed devices. This converges with the recommended standardization of a DigEOps system proposed in ParCent (IP#2022903608) for the explicit allocation of parcels of processing between distributed but part-centralised functions on different devices. That system is now also proposed to maintain pipelines, and also fits many potential new commercial applications that require processing of individuals' data to produce useful outputs using modules learned from mass populations.
In line with all these functional relationships, and because of the widely different and potentially significant applications benefitting from this proposed extension to the DETPAT invention set, it is proposed as not inappropriate to fit IntegTechPLUSwd as a further patent of addition to the underlying DETPATDIV divisional (IP#2022203235), on the basis that the original DETPAT design included its use in quantum control designs and did NOT exclude further RELDescriptors in the listing of minimum or essential RELDescriptors in the Algo specification (IP#2023900667), but, having failed to explicitly define the use of some of them through the first extension IntegTech (IP#2023201021), ought to be extended.
Being just a further design consideration, it may draw contention as an unpatentable business method (that is, one functionally useful to the process of designing software and hardware rather than necessarily requiring some further new piece of hardware). Nonetheless one is included on the understanding that it may eventually not be claimed, or could perhaps be transferred into a divisional. It is described here due to its possible contribution to safer and more equitable automation and information development outcomes, including to explicitly encourage higher levels of standardization projects.
It is therefore proposed as not inappropriate to assert this specification so that it can also be included as a patent of addition to assertions from the family of DETPATDIV 2022203235, and possibly even DETPAT 2021202215 (if its future includes extension by IntegTech, separate Australian standard patent IP#2023201021, and overlaps with this specification have remained sufficiently excluded). Should functions from this specification implementable in data structures be seen as unpatentable business methods, it is hoped that their potential to increase data quality for AI learning will encourage standardization catering to the public interest and to equitable automation and information development outcomes.
This is a further extension to the invention for implementing consistent asynchronous, non-linear, localized, state-driven and/or condition-based procedurality in distributed systems (and enabling safer application of AI learning).
The entire disclosures of the following referenced applications are hereby incorporated in their entirety herein:
Algo: separate Australian provisional patent IP#2023900667
ArrProc: separate Australian provisional patent IP#2023900576
DigETrace: separate Australian provisional patent IP#2022900156
Voxerve: separate Australian provisional patent IP#2022901956
PartnerinTime: separate Australian provisional patent IP#2021900732
This specification, together with the above, covers a method, data models, data definitions and definitions of processing models for proposing, systemizing and processing additional distributed sets of component, temporal, spatial and/or ordinally displaced vectors of data, including data and stream aggregations, within authoritatively implemented technical processing in fast and very closely coupled architectures (especially but not limited to quantum processors and AI constructions). It uses models to encourage new deconstructions for more valuable and flexible coordination of independent streams of processing, at processing rates able to take account of predicted access lags, using distributed device pipeline systems (on-board or on other devices) configured for the particular authenticable calculations and data being streamed.
Where the ParCent architecture enables solutions to a particular complex problem for a particular vector of possible processing spatial displacements, the IntegTechPLUSwd invention uses the same principle for any number of these and other dimensions including time which has special restrictions.
Where the combined extended model takes account of time, itself one of these dimensional characteristics, it must provide sufficient structure to take account of certain restrictions. In at least one case (the 'realtimeline'), those restrictions can be of particular consequence. In some domains the physics of a time-related input that is relocated into the realtimeline and combined with other streams or time-based inputs (and, incidentally, also non-time-series inputs) must be preserved, even if implementations allow matrix calculations to include other inputs without that restriction.
Specific designations are required for this:
• Same Timeline Inputs - Some realtimeline situations must be resolved using one or more particular inputs from the set of inputs, where these are exclusively from the same timeline (e.g. collision detection, travel inputs of two bodies in current focus), even if it means impacting the whole process because one of those realtime inputs is its slowest component.
• Changeable Timeline Inputs - Other situations can be resolved using inputs from any allowable timeline, using pre-processing of inputs (including subcomponent inputs) according to each subcomponent's possible:
o limit of its life (health, reality, truth, NOTexpiration) and
o fit and flexibility (repeated, stretched, filtered, ...)
That is, the timeline relative to the realtimeline can be slipped or even changed (e.g. edited, hastened, recoloured, translated, transposed, disguised etc., possibly also via matrix processing).
If there is no objective related to truth there may also be no need for an expiration of a particular component (think then of both Imagined Virtual Worlds and/or Garbage-In-Garbage-Out).
Both scenarios imply a status of 'Not Allowed' for some particular input components in some timelines of some particular types of situation.
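The three statuses above (Same Timeline, Changeable Timeline, Not Allowed) can be sketched as a small gating table. This is a hypothetical illustration; the specification leaves designations domain specific, so the component names, the collision-detection table and the reject-by-default rule are all assumptions:

```python
from enum import Enum

class TimelinePolicy(Enum):
    SAME_TIMELINE = "same"        # input must share the one real timeline
    CHANGEABLE = "changeable"     # input may be slipped/stretched/filtered
    NOT_ALLOWED = "not_allowed"   # component barred from this situation type

# Hypothetical designation table for one type of situation.
COLLISION_DETECTION = {
    "radar_track": TimelinePolicy.SAME_TIMELINE,
    "other_body_track": TimelinePolicy.SAME_TIMELINE,
    "historical_weather": TimelinePolicy.NOT_ALLOWED,
}

def admit(component: str, designations: dict) -> bool:
    """Gate an input component against a situation's designations;
    undesignated components are rejected by default (an illustrative
    conservative choice)."""
    policy = designations.get(component, TimelinePolicy.NOT_ALLOWED)
    return policy is not TimelinePolicy.NOT_ALLOWED
```

Under this sketch, a live radar track is admitted to a collision-detection situation while archived weather data is refused, mirroring the 'Not Allowed' status described above.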
The designations for a type of situation are domain specific and may conceivably include administered, flexible and conflatable designations created for the purpose of:
• controlling a live employment workflow or manufacturing system with appropriate safety
• controlling a finance crediting/debiting system with appropriate financial safety
• controlling a simple general gate system for a particular scale of movement (subatomic v safety training check-in queue)
• embedding of a rendition of a person's avatar in a personalised experience as part of a safety training or mental health offering to clients (see AuMake: separate Australian standard patent IP#2022215299).
For example :
"Simulation,Decade1950s,TruthOnly,Australia", compared with one called "Simulation,Decade2020s,TruthplusWeatherTwoDegreesHotterTemperature,SydneyEnvirons". In each case truth has to be both somehow transparently understood as being within some agreed limit by participants in the context (eg as in the above examples) and has to be available to an extent that is both sufficient and pertinent to the possible projected inputs that are to be embedded by value proposition and/or choice.
Similarly, the needs for processing inputs that are observations of realtime events or objects for the purpose of collision detection or process control are quite different to the processing needs and data constraints when superimposing traces into a created artwork.
Same Timeline Inputs
When a particular timeline in a problem is necessarily locked, variation across inputs matters, in some cases spectacularly and even catastrophically.
A delay in one or more of the inputs sampled and rendered into a control algorithm that coincides unfavorably with the frequency response of the control algorithm can produce output instability. While increased sample settling time can reduce the likelihood of this, valuable time may be lost in what could be a much more beneficial response allowed to occur quickly. (Think of a person at the beach who is afraid of getting pummeled by strong wave action. In the water he is very careful to wait until just the right moment before ducking under the waves. If one day he sees a very fast jet ski approaching him on a path of imminent collision but, as per his previous experience, waits until the usual just-the-right-time with respect to an approaching wave before diving down as deeply as he can, he could lose too much of the time he really needed to dive deeply enough to avoid being hit or sliced by the jet ski's motion. He would be better off taking the dive with more attention to the timing of the jet ski (ie immediately, or waiting only until just before it runs over him instead of for the wave to break) and trying to make some other compensation for the wave break in choosing the direction of dive or his resurfacing route.)
In such timing priority situations, rather than conservatively allowing a minimum multicycle settling time for all input digital processing components to limit processing of a critical calculation (to reduce the likelihood of digital combinations of a component's lag manifesting as a system instability), the architecture needs to be designed to allow faster processing rates on account of the lags being systemized by use of a pipeline calculation system, and possibly also extra compensatory control (eg based on second and further higher order effects) that can take account of what in some cases will have just been signal noise. It could then cancel any false alarm "Collision Imminent" signals it may have begun to proliferate. The extra design interface in this specification for this situation will be called the "WaveSnap" & "WaveSnap-Pipeline" stream processing interface.
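As a rough illustration of the pipeline idea above (all names and numbers are hypothetical, not part of any specification here), per-input lag compensation can release a fast critical input early rather than stalling every input for a worst-case settling time:

```python
from dataclasses import dataclass

@dataclass
class StreamInput:
    name: str
    lag_cycles: int   # known pipeline delay of this input
    value: float

def align_inputs(inputs, current_cycle):
    """Schedule each input at its own lag-compensated slot instead of
    delaying all inputs by the worst-case settling time."""
    worst = max(i.lag_cycles for i in inputs)
    # A conservative design would stall everything for `worst` cycles;
    # the pipelined design releases each input as soon as its own lag allows.
    schedule = {i.name: current_cycle + i.lag_cycles for i in inputs}
    saved = {i.name: worst - i.lag_cycles for i in inputs}
    return schedule, saved

inputs = [StreamInput("wave_height", 4, 1.2),
          StreamInput("jetski_range", 1, 35.0)]
schedule, saved = align_inputs(inputs, current_cycle=100)
# The fast "jetski_range" input is usable three cycles earlier than a
# worst-case settling policy would permit.
```

The saved cycles are what the compensatory control described above can spend confirming, or cancelling, a provisional "Collision Imminent" signal.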
Changeable Timeline Inputs
In recognizing that delay time effects of a particular input in many cases may be irrelevant or an allowable function of the implementation (ie may not be a problem for many, ever or until something changes in the combined system that introduces an input delay somewhere that can't be tolerated), it is proposed that systemizations for managing it should operate in addition to, and in line with, systemizations catering for variations in spatial data (ie ParCent IP#2022903608) and specification data (ie TranVec IP#2023900144).
Standards to increase streaming and data interoperability
Streaming of timebased images is already advanced technology with standards applying. In the interest of efficiency of resources and opportunities in health, as well as in the automation of any control system, there is a need to be able to apply the same safe/appropriate-processing efficiencies and robustness to other series data, including where AI may be being used, or projected to be used, to distil knowledge into the processing that may create a potential upgraded product/system or new set of control parameters.
Therefore, in contemplation of new streams of knowledge being used it is likely to be beneficial to standardize presentation of non-time variables on account of the value of their applicability (ie common use of standardized series of something <non/less-time> such as frequencies, 3D grid references for space/air/sea/landforms, 2D/3D grid references for heavenly bodies, radial location map, spatio functional maps of the human brain, spatio-functional maps of the human face, 3D/radial screenviewspace, plotlines etc).
The brevity v flexibility of nomenclatures ultimately depends on:
• what is made most available most quickly (probably via AI learning and deconstruction)
• what needs to be accessibly managed for the purpose of management of the standard (if it already exists or is yet to exist)
• likely/possible implementations, especially where transforms already apply (eg Laplace), or new forms of mathematics dealing with phenomena such as gravitational (GRAVITYSUBSCRIPT) or strong force (STRONGFSUBSCRIPT) vector variations become complexly relevant to mission critical distributed calculations in the same way that delays and other time effects are being catered for here.
WaveSnap structures set up for fast processing in any processing environment that is compromised, or made more complex, by an immoveable effect or by variations in time effects could also be catered for by further extensions.
Processing Opportunities Example 1 (TIME dimension, Finance)
Interest calculations and DCF spreadsheets implement time series conflation calculations in a large variety of well understood ways. Implementation of WaveSnap specifications referencing standardized Dictionary definitions would allow quantum and other fast processing architectures to speed up large processing batches and use less energy through tweaks specifically optimizing data movements.
For example, in distributed applications running on generalized array processing environments (eg clouds) it would be advantageous to avoid external device accesses refreshing input data that is already sufficiently recent. Since block chain structures contain date stamps, it may prove useful to 'institutionalise', automate or streamline the recency-check on the most-accessible cloud copy of a particular data item by including the allowedmaximumage of an item of data in the data definition (see Heap IP#2023900027) of the data item being used. A cloud-based on-line service processing environment may decide it wants to offer customers better rates based on each customer's periodic asynchronous service usage characteristics (max/average network load, priority nomination, min bank balance, etc...). As a specific example, a bank running a cloud-based on-line financial processing environment may decide it wants to offer customers a more responsive interest rate on their actual account balances at the time interest is being computationally applied. The rate set would be stored as a set of values, each corresponding to a balance range, which would need to be checked as part of the calculation.
For 24 hour operating international banks, exchanges or trading facilities, higher resolution variable interest might remain an unaffordable opportunity that reduces their service client base. In the example above, if these new high resolution account balance interest rate accounts were required to be processed in the same streams of processing as updates on older accounts with lower resolution rates (eg daily on yesterday's COB balance, or monthly on Day 1 balances, either of which can fairly easily tolerate hours of delay between possible batch processing components), a lot of unnecessary account balance check processing might be initiated by the batched processing. This wastes energy and time.
On the other hand, in a block chain environment each account balance value would have its own universal date time stamp. If the lookup of the current account balance part of the interest update calculation included a fast accessible reference to the relevant allowedmaximumage for every account being updated, a lot of processing would be avoided. For accounts used both locally and recently, the age of any available local copies of the data could be judged sufficient, and the bank may be saved from having to check so many balances, including those operating that day but potentially being debited by large amounts from the other side of the world.
Rather than including the allowed-age check in compiled application processing, the check against the allowedmaximumage of a particular operand at a node in a calculation (ie the check of each of possibly several items of data being processed through a simple or complex obtain/transform/arithmetic-op/conversion/conflation/combination/iterative series & residual replacement, or any other processing operation defined for the Algo node when implemented and informed by its ordered component operands, compiler implemented instructions for further processing steps, and any virtual sub-nodes it has added) is performed in the pipeline. In this case the block chain of the calculation needs to be compared with the age of the fastest and latest locally available account access data (using a new item of "allowedage" information set into the definition of the destination type and the additional block chain structure metadata by the compilation process). In this way sufficiently fresh data can be exploited using a different and shorter delay pipeline pre-processing path in a more efficient way, so long as the device is able to have a pipeline pre-processing system compiled and the parsing compiler for the hardware running the calculation is implemented according to a design conforming to the WaveSnap-Pipeline pre-processing functions. This can increase throughput of the update account interest operation either based on a permanent pipeline delay coordination being implemented (longer than some worst-case fetch time) or based on some pre-configured adjustable average delay (say regularly calculated from the average proportion and time of worst-case fetches for a particular period of processing in a particular location).
Continuing the banking example, the hardware pipeline pre-processing would decide if the operand was recent enough. If it wasn't, and the architecture allows it, the scheduling process can place the whole account update further back in the pipeline according to the expected time for making the balance check, in order to avoid processing wait time at the node for the operands involved in the completed node instructions. If it was sufficiently recent it would be scheduled immediately, or with only the delay coordination necessitated by the shorter path (and/or according to other inputs, eg priority, by the scheduling process). If the allowable latency in the currency of the interest rate operand is greater than the age of the locally available balance check for an updating account, the process would use the local copy and not need to include a network-lagged external device lookup at all.
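A minimal sketch of the recency check just described, with hypothetical names; in the design above this decision would sit in compiled pipeline pre-processing rather than in application code:

```python
from datetime import datetime, timedelta

def resolve_balance(local_value, local_stamp, allowed_maximum_age, now, fetch_remote):
    """Return the locally cached balance when it is younger than the data
    definition's allowedmaximumage; otherwise fall back to a remote fetch."""
    if now - local_stamp <= allowed_maximum_age:
        return local_value, "local"    # shorter pipeline path, no network hop
    return fetch_remote(), "remote"    # scheduled further back in the pipeline

now = datetime(2024, 1, 1, 12, 0, 0)
fresh_stamp = now - timedelta(seconds=30)
value, path = resolve_balance(100.0, fresh_stamp, timedelta(minutes=5), now,
                              fetch_remote=lambda: 99.0)
# A 30-second-old local copy beats a 5-minute allowedmaximumage, so the
# local value is used and no external lookup is initiated.
```

The same comparison generalizes to any operand whose definition carries an allowedmaximumage in its block chain metadata.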
Processing Opportunities Example 2 (TIME dimension, Work Environment)
The data integrity needs for processing work hours, conditional payment supplements and AI learning for the purpose of employee wages cost minimisation are proposed as indicative of the needs and public risk profile being catered for by the design of this invention. Further design details relevant to a more useful device implementation in personal employment that were not already defined by the provisional specification PartnerInTime (separate provisional patent IP#2021900732) are herein included as additional data model and specified coordinations with other systems, including where the original PartnerInTime specification may be implemented in another jurisdiction, and also allowing interaction using alternative digital and/or advanced communication technologies. Summary thereof:
• Specific mechanisms sufficient to a jurisdiction for safety, privacy and consistency including proformas that trigger production of permanent records assuring consistency across versions and different functions of different systems;
• Assumed expanding use of existing and new data models within new structures able to easily translate into hardware and software architectures appropriate to advanced digital technologies such as 5G, 6G, ... IoT distributed ledger block chains, tags, archives, coding systems (eg QR codes) & the like;
• Specific tailored data models implemented on devices (eg QMake: separate Australian standard patent IP#2022209202, DigEMake: separate Australian standard patent IP#2022209201) exploiting the ability of digital devices to automate recorded input as well as context-triggered coordination between "components" (staff, product and environments) within defined standardized workflows as directed by productivity, regulatory, cultural and ethical needs. Most of these exist as prior art and will continue to increase to include recordings and measures as diverse as:
o Detection/identification (person/device): of a worker, service/maintenance staff member, visitor, equipment item, work-in-progress (WIP) container or tag, item of production or piece tag
o Simple measured datapoint: eg temperature, chemistry, radiation profile, concentration, weight, colour, chemical detection, intensity, rate, dose/portion-size, duration, position, length, setting, direction, orientation, speed, ...
o Complex datapoints: eg Traces, Images, Sounds, Sample/reference ID & type details, ...
o Time/date
o Trace/image/sound/sample analyses
o Document
o Action
o Choice/preference/Decision
o ...
• Standard implemented operation context sharing and maintenance structures as per:
o IntegTech: (From Heap specification) DATAREF DigEStructs and DigEfunctions
o IntegTech: (From ParCent specification) DigEOps as a common developer's environment (eg based on DigEStructs and sets of DigEfunctions related to systems maintenance and development functions)
o Not specified separately anywhere: AWorkManagementSystem APPLICATION DigEOps structure and its associated functions (eg DigEMed)
• Interface to structures already specified in PartnerInTime (separate Australian provisional patent IP#2021900732)
• Unspecified further steps and components developed after this specification allowing the safe and acceptable sharing of data, resources and approaches related to employment, productivity, accountability and associated regulatory, cultural and ethical needs in potentially multiple jurisdictions, which include the following business model provisions:
o Home Jurisdiction: The regimen being proposed is called DigEProd for managing activities in at least one named home jurisdiction and one or more establishments using one or more named data model sets;
o Peer Model: There is assumed to be existence of, or preference to create, a set of standard representations as envisioned and agreed by employers, regulators and other contributors according to regulatory specifications in pursuit of common national interest benefits related to productivity. This would include standard and allowed variations of Work Site Representations;
o Data model architecture being devised and regulated to assure consistency of all component transactions between:
• employed individuals (envisaged to be the same as, similar to or some cut-down variety of AsSecPAI (separate Australian provisional patent IP#2023900320) or an OptiIso (separate Australian standard patent IP#2022205250) device; and
• named organizations (including even non-incorporated organizations unless regulated not to) implemented according to regulated variations of the DaoConRock invention (separate Australian standard patent IP#2022228135) and including where there may be a need to accommodate non-specified individuals with such a device as per that same specification (by holding the device present).
o Accountability : as implied above and where automations and safe work regulations justify high accountability, it is assumed there is preference to create a standard essential set of minimum items for positively identifying an individual within a jurisdiction (eg <Jid><id> being a jurisdictionlD and an individual ID) and a standard workaround identification in a jurisdiction where there may be a lack of other identification or no technology present (eg <JidlidlookupListlD><lookup list position> setup for the DaoCon) and that this may or may NOT be sufficient for certain interactions and therefore may involve additional authorities and or additional steps or mechanisms being invoked. All such mechanisms would be designed as described in separate Australian standard patent IP#2021202215 ("DetPat") and divisional/s thereof (in software and/or hardware) with assurances sufficient to the situation (See : DaoConRock : separate Australian patents IP#2022228135 and DigEMake: separate Australian standard patent IP#2022209201) including for : • interactions to automatically and atomically (ie the action can't be half done or removed) issue a logged confirmation dialogue for input by the triggering actor via any of a keyboard password or mobile device validation or a verbal authority recorded by an appropriately near microphone device that might be operated by a pushbutton or already triggered to record, and with the desired control operation even optionally gated by Al voice or image verification or similar accountability controls.
• for particular authorization workarounds if regulations or safety policy does not allow unspecified immediately available authorities to operate,
• adjustable implementation of more onerous confirmation dialogues, workarounds or escalation workflows with authority protocols to prevent potentially irresponsible workarounds,
• continuity of wear detection,
• voice identification/recording ability.
o Improvement: Continuous development of processes due to continuous operational changes and technology improvements presenting opportunities for reducing risk and improving efficiency:
• Learning about process problems and opportunities (diagnostic tools and data management. See Australian standard patent #2021218217 "PredEPrev" and Australian provisional patent IP#2021903942 "HoHoBal"),
• Monitoring for conditions that prevent production and/or degrade efficiency. Also see Australian standard patent #2021218217 "PredEPrev",
o Interoperability: with structures for International and Cross-Functional Consistency as per IntegTech (separate Australian standard patent IP#2023201021), particularly via the specifications TranVec: separate Australian provisional patent IP#2023900144, ClarZ: separate Australian provisional patent IP#2023900010 and Heap: separate Australian provisional patent IP#2023900027.
As a result of operating in different locations, and collaborating with different businesses with different focuses and objectives and different suppliers supplying different product stagings to and from different facilities in different jurisdictions and at different timings, there are usually different data models for similar things. Irrespective, data across different sites managed by different suppliers, and possibly in multiple jurisdictions, needs to be married up, and this is likely to occur via data dictionary standardizations (eg Strictionary from the separate Australian provisional patent Heap (IP#2023900027) via the separate patent of addition IntegTech (IP#2023201021)).
While variation between models is often able to be managed within the same conflated data model, versioning could also be implemented with a data model allowing high level relationships with limited functions to convert between named data models and standards, and to invoke code in devices designed to work with, or produce results conforming to, only a particular data model referenced by a named standard. That is, an operating data model can reference more than one standard data model name (a global and a local one).
In supporting the data models implemented by the types of systems required to service the totality of the above stated proforma needs, this specification gives a summary outline of how a cross section of representative example transactions (being performed by devices implementing data object processing for those needs) will operate by sending and receiving smartcontract messages and, as part of the processing for that, performing local instructions based on use of DETPAT techniques to manage the data used, in a specified manner as construed in this or other referred-to specifications. This is a cross section representing the types of functions enhanced by this invention:
• Machine operating controls - accompanied by the additional coordinations specified in IntegTech and assuming the necessary hardware developments occur to standardize device interfaces to use DETPAT techniques, many of the transactions listed in this extension of PartnerInTime would benefit from the functions specified directly herein or in the included specifications ArrProc (expanding the extent of parallel processing) and Algo (better able to standardize application of matrix processing):
o a work device worn by employees at work that is designed to represent particular employees as worksite authorized individuals. It will receive messages designed according to DETPAT techniques that authenticate the employee as someone who has been accredited to particular training modules specifically associated with the item of machinery he wants/needs to operate and potentially also to production specifications controlling the product he might be dealing with.
o Machinery Controllers may compile production records with statistics identifying the employee performing a particular transaction (ie block chained close proximity protocols and/or vision signals gated by an input (eg button) on the machine).
o The machine may have been programmed to not operate if devices associated with the machine detect an unknown presence or identification (close proximity/vision/etc) indicating that there may be a safety issue.
o Different screens on a VoxServe touchscreen workstation (VoxServe: included Australian provisional patent IP#2022901956) might be launched to provide:
• different controls to different operators, or
• a different layout for less trained operators, or
• left-handed rather than right-handed operators.
• Measurement of hours worked - The work device worn by the employee at work would record work hours in a way appropriate to the employee's and employer's needs in suitably private data structures on the employee's device. Incrementing of a data item used to produce a fair representation of the work done by the employee would be via transactions constructed using DETPAT techniques. It may be that separate totals are needed for different roles, different locations in the one workplace or different workplaces. Various transactions on various devices might be required to inform the circumstances:
o Logging in via a touch on a screen,
o Using a mobile device to enact a location-specific or role-specific QR code,
o Voice command registered by the personally worn employee device.
As such, a time vector representing just the particular possibilities applying as a slice of the taxonomised time categorization scheme as implemented at that workplace would be ANDed with the vector of circumstances (state-based implementation) that applied at the time the clock signal makes its entry or increment. Rather than maintaining clock-tick accumulations, work time could be calculated from subtracted times of entry and exit into a particular location/role/circumstance so long as devices are set up to do it.
• Calculation using hours - Implicit in this processing are data objects that inform how the hours are to be used to pay an employee to some award. This is identified by a variable indicating the award structure as well as a vector referencing the subset of data items arranged in the same matched taxonomical way (as a slice of the taxonomy) to inform how the award is implemented for that particular employee at this particular site doing this particular (set of) job/s. This personal copy can be "filled in" including with data structured and obtained from any one of:
o Some devised standard employee award structure for a whole jurisdiction, a single employer, a single industry, or groups thereof in one or more jurisdictions (with the structural information maintained by the relevant "community" tasked with managing the digital asset according to the legal requirements of the (lead) jurisdiction, which can be stored, added to or amended by members locally as some agreed group of people in charge of doing so and then made available for download). This group would be implemented as a DAO (Distributed Autonomous Organisation) in a named jurisdiction that uses a "company stamp" (a DaoConRock: separate Australian standard patent IP#2022228135) allowing them to create, adjust, authorize and release new/adjusted employee award structures (as globally available or otherwise protected role-defined accessible 'PartnerInTime' proforma structures), OR
o some other different but agreed employment award structure that has the same taxonomically arranged employment partnership structure as that which was specified in PartnerInTime, with hours-of-work interfacing data variously accessible in the areas associated with Performance, Events and KPs, and which may or may not be associated with an employing DaoCon.
o some other agreed employment award structure (eg one that conforms to general contracts as per IntegPplz: separate Australian standard patent IP#2023201205 and that can also be maintained by a DAO in the same way as described earlier) using the different taxonomically arranged employment partnership structures and requiring, therefore, different interfacing accessibility.
o a copy of any one of the above that has been tailored to a specific set/s of people or companies by an authority for that set.
• Calculation of payment variations based on environmental factors - Wherever this production, payroll or calculation processing involves:
o a machine that deals with many momentary live data objects,
o particularly large amounts of conflated data in small processing windows that necessitate parallel device architecture, or
o a dependence on particularly fast processing speed (including via quantum processing that would be expected to be designed to implement fast processing pipelines),
in most of these cases the DETPAT implementations will necessarily also invoke design specified within the additional specifications ArrProc and Algo here included, in order to work in large and responsible processing environments producing data that may be used to implement AI learning.
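The vector operation described under "Measurement of hours worked" above can be sketched as follows (taxonomy positions and names are invented for illustration; they are not part of any specification here):

```python
from datetime import datetime

def record_increment(time_slice, circumstances):
    """AND the workplace's taxonomy time-slice vector with the circumstance
    vector active when the clock signal arrives; only positions set in both
    vectors receive the tick."""
    return [t & c for t, c in zip(time_slice, circumstances)]

# Hypothetical taxonomy slice positions: [role_A, role_B, site_1, site_2]
time_slice    = [1, 0, 1, 0]   # categories tracked for this employee
circumstances = [1, 0, 1, 1]   # state at the moment of the clock tick
ticked = record_increment(time_slice, circumstances)

def worked_seconds(entry, exit_):
    """Alternative to clock-tick accumulation: subtract entry from exit
    times for a particular location/role/circumstance."""
    return (exit_ - entry).total_seconds()
```

Either approach yields per-role, per-site totals; the entry/exit subtraction simply trades continuous accumulation for two timestamped transactions.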
Further Processing Opportunities 3 - 11
Any higher stakes identification function that requires high throughput (whether in a "backoffice" or otherwise within a community, economy or other public environment).
It is proposed that these would be accomplished using similar techniques to the first two example opportunities, but they are highlighted because they are of high social and/or economic value:
• Financial offers that might be scams. A high value government service that mandates that public offers over a certain value should be registered using a sufficiently authoritative chain of identification and conform to regulatory requirements. For example, one that references a DaoCon enabling identification by the government's identification system (as distinct from necessitating identification to the potential offer accepter) that the offer was implemented by a DaoCon that is authorized in the jurisdiction and that was attached, at the time the offer was implemented, to an individual that was not registered anywhere in the jurisdiction at the time as a bad director, and is still not registered at the time of the transaction as a bad director.
• Financial offers that might be particularly high risk - eg from an unregulated provider (cryptos etc)
• Data environments that are unowned or are not specified to identifiable sources
• Wage processing
• Energy market, retail and control systems
• Emergency systems including large crowd control
• Training systems
• Voting
Structurally, the invention hereafter described interrelates several key components:
• From existing included specifications:
o Algo: included Australian provisional patent IP#2023900667
o ArrProc: included Australian provisional patent IP#2023900576
o DigETrace including a Device: included Australian provisional patent IP#2022900156
o VoxServe: included devices Australian provisional patent IP#2022901956
• Extended PartnerInTime model (see sections above, but not the PartnerInTime specification itself)
• WaveSnap - specification of further details required to be included in DigEStruct specifications in order to apply WaveSnap stream management (including in previously specified DigEStruct specifications where it was not fully specified);
• WaveSnap-Pipeline - specification of extra DETPAT designed structures for implementation of:
o structure allowing inputs to control implementation, configuration and manipulation of a Pipeline;
o time displacement adjustments that should operate within the pipeline and other specifically planned pre-processing packages (eg processing processor transfers) where this system is available and pre-processing is required; and
• wAIvStream - specification of extra DETPAT designed structures for:
o Creating and storing data for AI
o Processing data in AI Learning
o Implementing Instability Assessment processes
o Discovery and Standardisation of deconstruction packages of learned knowledge
o Structures for manipulating, constructing and combining those packages as wAIvStream processing
o Projects, Structures and functions for increased economic sustainability due to creation and standardization of new shareable, safe, healthy implementations.
The Components
1. Standardise on WaveSnap nomenclatures as per Algo RELDESCRIPTORS
With respect to Algo (IP# 2023900667), there must be a named entry or a named function in the "Financial and mathematical functions" list in the Minimum or essential list of RELDESCRIPTORS in Algo that allows specification of time streams and potentially other streams benefitting from special management in the future.
This specification would need to include one or more of the following selected preferred combinations of:
• some accretion of "SUBSCRIPT" and whatever stream name is preferred following the subsequent obligatory lexical element "Parameter"; and/or
• just a named Parameter being added to the SUBSCRIPT phrase where a parameter is contained in a widely implemented public or corporate data dictionary (eg from Heap IP#2023900027, potentially a "Proposed Proforma Uniform Catalogue Structure" or SCI CATAL) to specifically include a single generalized or semi-specified set of vectors (that could also include a time vector); or
• explicitly named alternatives including a specific TIMESUBSCRIPT in contemplation of a variable's time stream, and further "<non/less-time>SUBSCRIPT" designations in contemplation of particular streams being standardized across non-time variables on account of the value of their applicability.
The banking processing opportunity example also suggests at least two other items in the Minimum or essential list of RELDESCRIPTORs (first specified in Algo) for an algorithm node as follows :
• MAXAGE Units
Access is required for application programmers implementing Algo calculations so that a compile-time-specified or runtime-calculated variable (intercepted from a pipeline implemented by the compiler) can be used by the runtime WavSnap instructions the compiler produces, allowing expected delays to be included in the pipeline allocation routine for fetching "good-enough copy" data from a known local location. It is envisaged as necessary to have yet another RELDESCRIPTOR named something akin to:
• TIMELAGALLOCATION Units
As decided by agreements resulting in standardizations, MAXAGE and TIMELAGALLOCATION could be minimized (eg assume milliseconds always) or further expanded (eg having different pre-processing options for faster/slower architectures: distributed vs on-chip).
It is envisaged that TIMELAGALLOCATION would not be specified as part of a standardized calculation in a Heap by anyone other than someone catering for a particular architecture and naming their standard calculation for its intended use in that architecture.
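The interplay between these two RELDESCRIPTORs can be pictured as a simple admission test: a "good-enough copy" may be fetched only while its age plus the allocated pipeline lag stays under MAXAGE. The following is an illustrative sketch only; the record layout, function names and fixed millisecond units are assumptions, not part of the Algo specification.

```python
from dataclasses import dataclass

# Hypothetical sketch: RelDescriptor, MAXAGE and TIMELAGALLOCATION mirror
# the RELDESCRIPTORs discussed above, but this record layout and the
# always-milliseconds choice are assumptions, not the Algo specification.

@dataclass
class RelDescriptor:
    name: str        # eg "MAXAGE" or "TIMELAGALLOCATION"
    value: float     # magnitude
    units: str       # eg "ms" (standardization could fix this to ms always)

def good_enough_copy_usable(copy_age_ms: float, maxage: RelDescriptor,
                            timelag: RelDescriptor) -> bool:
    """Decide whether a locally held 'good-enough copy' may be fetched:
    its age plus the pre-allocated pipeline lag must stay under MAXAGE."""
    assert maxage.units == timelag.units == "ms"
    return copy_age_ms + timelag.value <= maxage.value

# Example: a 40 ms old copy and a 15 ms pipeline lag against MAXAGE 100 ms.
ok = good_enough_copy_usable(40.0,
                             RelDescriptor("MAXAGE", 100.0, "ms"),
                             RelDescriptor("TIMELAGALLOCATION", 15.0, "ms"))
```

A runtime-calculated ADDtime (as contemplated later in this specification) would simply feed into the `copy_age_ms` term of such a test.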
2. Include WaveSnap interface to Heap, TranVec, ArrProc and Algo tree structures within DigEOps (DigEStruct Functions and Structures)
Unlike ParCent, not all of the previous souped-up data integrity-related block chain design specifications under the above heading that have subsequently been associated with the basic DETPAT design specifications (IP#2021202215 and its divisional #2022203235) explicitly acknowledged that delays could affect output calculations. Where this might be the case (say in creating quantum processing designs), code for launching, constructing and managing pipeline structures would need to be launched by "create workflow" message processes. These pipelines would then need to be linked to the necessary WavSnapped objects in operation, but also so that they are able to communicate with processes maintaining and offering standardizations for those code objects and the objects' processing data definition structures (eg Algo, TranVec) from the Heap (Heap IP#2023900027) Strictionary (or from some other shared data repository, similar to Heap's tree'd data definition facility, that contains or links to the organisation's own set of dictionary structure standards).
Implementation of WavSnap using Heap's Strictionary structures would involve pre-compiling of DataDefVIDs (standard data definitions) and SelfDefVIDs (non-standardised data definitions) in the Strictionary.
Any of these being single ALGORITHMS or made up of other (sub-)ALGORITHMS whose implemented live output might also be being used elsewhere (ie streamed) or that may be just being referenced, would be implemented using WavSnap set up over particular processors able to have WavSnap pipelines implemented on them.
Interpreting device functions (TranVec IP#2023900144) could be implemented into distributed processing architectures with WaveSnap pipelines or implemented as on-chip pipeline processing. Compilation informed by the interpretation matrix (TranVec IP#2023900144) for a single fast device that is processing multiple other inputs would produce code that interprets according to the matrix and also directs data to defined interfaces to and/or from the actual created pre-processing pipelines already implemented in hardware onboard, for higher processing throughput by that device. In such cases, where it is advantageous for compiled calculations with time-critical signal components, the extra data required (eg allowedage or the sub-cycle copies) in the block chain leaf nodes allows realistic implementation of higher-order control schemes than would otherwise be useful or even possible. Cloud arrays would presumably be implemented in this same way if they used these concepts and structures.
When compiled into distributed device systems, stored DigEOps records with the new pipeline-related DigEfunctions (specified later) would be continuously used across subsequent device maintenance operations to maintain pipeline architectures within shared processing workflows, in order to ensure operands would continue to arrive at pipeline addresses.
3. Implement capability for WavSnap processing with WaveSnap and WaveSnap-pipeline information being included in blockchain leaf structures and DigEStruct Structures
The most efficient transmission of vector streams for sub-cycle copies could be the same message informing the long-term storage of the "normal" application DigEOp. That is, past sub-cycle values, even if found in subsequent processing to be noise, could be transmitted within the leaf section of the same individual permanently stored DigEStruct record kept on the normal DigEStruct stack for the parameter at blockchaintime t in that particular application or operation.
For example, for a sub-resolution consisting of 10 loops of a designated cycle period - which could be less than the normal expected settling time for the signal - beginning at say blockchaintime t1 minus 10 cycles (covering a 10-cycle periodicity that could even overlap with 10-cycle monitor packages being sent each 5 cycles if required), the sending device would re-record previous values of the parameter in the leaf of each sent message (ie as Xbct1-10, Xbct1-9, Xbct1-8, Xbct1-7, Xbct1-6, Xbct1-5, Xbct1-4, Xbct1-3, Xbct1-2 and Xbct1-1 within a sized array structure set into the message leaf via the standard included structures library for the system) to be sent along with the "final" t1 value (Xbct1) in the message it sends out.
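The re-recording scheme in the example above can be sketched as a small ring buffer packed into each outgoing message leaf. The class and message shapes below are assumptions for illustration; only the 10-value window and the 5-sub-cycle overlap come from the worked example.

```python
from collections import deque

# Illustrative sketch only: the leaf layout and field names are assumed;
# the 10-value / 5-sub-cycle overlapping window follows the example above.

class SubCycleRecorder:
    """Keeps the last `keep` sub-cycle samples of a parameter and packs them
    into the leaf of each outgoing message, so past sub-cycle values travel
    with the 'final' cycle value."""

    def __init__(self, keep: int = 10, send_every: int = 5):
        self.buf = deque(maxlen=keep)   # the sized array set into the leaf
        self.send_every = send_every
        self.ticks = 0

    def sample(self, value: float):
        """Record one sub-cycle sample; return a message when one is due
        (overlapping windows: 10 values emitted every 5 sub-cycles)."""
        self.buf.append(value)
        self.ticks += 1
        if self.ticks % self.send_every == 0:
            return {"final": value,            # the 'final' value
                    "leaf": list(self.buf)}    # the previous sub-cycle copies
        return None

rec = SubCycleRecorder()
msgs = [m for v in range(1, 13) if (m := rec.sample(float(v)))]
# After 12 samples, messages were emitted at ticks 5 and 10; the second
# carries the overlapping window of samples 1..10.
```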
Note: the overhead of maintaining a separate DigEStructs register on a single processor (other than as a variation of DigEOps for version control of its firmware) is thought unwarranted.
However, wherever distributed system implementations require such information to be preserved, this can be designed to be completely independent of normal processing (at only a slight cost to total network performance should a relevant message length thereby be exceeded when adding the extra structures to the DigEOps), whether or not a second copy of the sub-cycle data is always required for permanent storage by the distributed application itself.
In a way similar to how the 7th and possibly also the 9th recommendation of Heap (IP#2023900027) specifies preservation of DataDefVID or SelfDefVID information, local code compilations using WaveSnaps could allow recording of sub-cycle copies of data in block chains for runtime use, offset according to the timing characteristics of the particular architecture being compiled for, with each message sent out at cycle rate. This could include overlaps (eg 10 sub-cycle items every 5 sub-cycles = one cycle). That is, rather than continually sending out overwritten sub-cycle single copies, the compilation for a more nuanced architecture could produce instructions tailored and adjustable for the relevant relativities, with respect to further optimizations for the later combined individual sub-cycle and cycle timing signals.
Compilation of any of the functions requiring it (eg series summations, as well as any of the Algo DataDefVID or SelfDefVID subnode references) would have to preserve "depth" information (related to the position of the node within the Algo system relative to the deepest node in the particular calculation referenced) in order to add delay for both relative timing and iteration counts in series processing. For distributed systems, the total delay offset for both the remnant and the pipelined instructions would also usually include the extra time taken to move the data elsewhere and slot its result back into the originating process.
Compilation must preserve the totality of known timing relativity between each operand at a node (including any to-be-processed-first sub-nodes) for every node of a calculation so that runtime coordination can occur. Where leaf structures with sub-cycle copies are to be messaged out for faster processing at normal cycle rates where it might count, compilation also needs to size the leaf structure to contain the series of previous sub-cycle copies. It would be more complex but most likely also possible to automatically compile a runtime adjustable optimal sized set of "leaf buffered" recordings (stored within the relevant leaf data structure part of the block chain vector). DigEOps Functions would likely require an interface to a variable maintained by any automation of that functionality as there might otherwise be ambiguity when using the envisaged DigEOp function for removing inserted extra delays in a pipeline path.
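One way to picture the preserved "depth" information is as a post-order walk over the calculation tree, assigning each node its distance to its deepest operand sub-node, from which per-node pipeline offsets follow. This is a sketch under assumptions: the tuple tree encoding and function names are illustrative, not from the specification.

```python
# Sketch under assumptions: a calculation is modelled as a plain expression
# tree; 'remnant depth' here means distance from a node to its deepest
# operand sub-node, used to offset each node's pipeline slot.

def remnant_depth(node, depths=None):
    """Post-order walk assigning each node the distance to its deepest
    sub-node; operands (leaves) get depth 0."""
    if depths is None:
        depths = {}
    name, children = node
    d = 0 if not children else 1 + max(remnant_depth(c, depths) for c in children)
    depths[name] = d
    return d

def pipeline_slots(root, cycle_ns: int):
    """Schedule: deeper nodes run earlier; a node's result is due
    (max_depth - depth) cycles after the deepest operands are issued."""
    depths = {}
    top = remnant_depth(root, depths)
    return {n: (top - d) * cycle_ns for n, d in depths.items()}

# (a + b) * c : leaves a, b, c at depth 0; '+' at depth 1; '*' at depth 2.
tree = ("*", [("+", [("a", []), ("b", [])]), ("c", [])])
slots = pipeline_slots(tree, cycle_ns=10)
```

Application-specified ADDtimes or data-movement delays would simply be added to the per-node offsets computed this way.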
4. Standardize on data definitions and functions for processing (ie creating, adjusting, connecting into, connecting out of, calculating on, and destroying) pipelines within the DigEOps specification
Industry-wide DigEOp standardisations should be made for WavSnap and Pipeline needs (including those implicitly defined in ArrProc, Algo and QMakeCtrl) where "part-centralised" processing is useful in peer-to-peer processes (ParCent). The standardizations should include standards for configuration of code using them, to ensure that industry-, national- and or international-standard defined calculations on standardised types of data will result in architectural standards for pipeline processing within block chain powered devices.
The DigEOps required for that were informed by the distributed device model management feature of QMakeCtrl (IP#2022259783) and the recommended DigEOps system standardization proposed in ParCent (IP#2022903608) for the explicit allocation of parcels of processing between distributed but part-centralised functions on different devices. A standardization on functions for WorkFlows expressly for processing pipelines (ie creating, adjusting, connecting into, connecting out of, destroying) and other architecturally important items is suggested for the DigEOps specification, so that objects are able to create the pipelines and configure, from code, the instructions required either on the local processing hardware (with no network data message needed) or as an object with that role on another device (one that will process a particular received message according to the messaged vector, the object's own implemented code, and referenced parameter lists obtained from a Standard Algorithm (or sub-Algorithm) as defined in Algo (included Australian provisional patent IP#2023900667), in order to perform that role allocation).
It is envisaged that processing is able to do something akin to the following as best sits with similar types of configuration implemented in current language syntaxes for processing structures to be standardized upon for implementing :
• A processing input is (already) a message, but messaging that is architecturally required for stream/pipeline processing should be registered as a "WorkFlow" for architectural maintenance purposes as defined in the DigEOps function list of ParCent (Add Workflow <workflow name>).
• Whether for the purpose of implementing local or across-the-network processing, or adding extra processing power in parallel, objects are attached to an operation and produce outputs for operations locally via roles (including in architectural configurations to define local processing connections), or for processing elsewhere via services or activities related to a process-scheduling interface configured to suit a (particular stage of a) data item (as referred to in the specification according to Algo, Heap and or TranVec).
• Output references, as messages or as local processed outputs, are required to be added to the object for reference in the object's code, as in ParCent's (Allocate Object a Role).
• Message Acceptance Criteria (as defined in ParCent) can be adjusted in a variety of different ways; over time, the best of existing implementations will inform which variations offer the flexibility and safety required, and those should be used (eg Change device : Re/Classify Device <deviceID><Device Type ID>, or Change object : Batch update Object <Object Type><Named schema spec>, or potentially any one of a plethora of different ways).
Syntax implemented and processed according to the Algo specification is indicated with a list including Financial and mathematical functions, those related to "power of", and transforms implying series components (esp Fourier) and residuals. Implementing these as standard algorithms maintained in compiled Heap tree structures (as specified in Algo) means the Fourier component terms, datatyped via the WaveSnap nomenclature extension set of RELDESCRIPTORS (Item 1 of this specification), are able to be referred to via appropriate subscripts without unilaterally defining what the scheme should be.
Having specified that a term for dealing with such series exists, the WavSnap-pipeline specification includes that the code compilation is able to compose a virtual set of Algo tree branches representing the further iterative (and residual) terms of the entire calculation, implemented by invocation according to the specification of the Algo standard being referenced. This implies that the specification of calculations to be used within an implemented system must also include the number of terms developed or that apply (in the case of financial models), with the appropriate residuals. As such, this variable is also of architectural interest to the firmware of a processor as well as to the compiler constructing code for it, as the compiler must construct allocation to appropriate pipeline paths according to the remnant depth of the node, being the distance of the node from all its component operand sub-nodes.
Other application-related situations (eg the banking situation previously related to new Algo RELDESCRIPTOR/S for managing data age) highlight the need for at least one other compiler/processor "operating system" interface where an application programmer implements code-variable-driven additions to the delay being catered for in the WavSnap-Pipeline system.
Similarly, the compiler and the pipeline construction functions have to make extra accommodation to correctly allocate sub-node instructions to pipeline paths, allowing for the addition of TranVec processing of operands at the "end" of the tree.
It is necessary to include some new functions (or an alternative sensible rendition of them) with these new programming instructions to the list of Potential DigEStruct functions for DigEOps developed in ParCent.
The new functions
Function Comment
Allocate an <ObjectID> to a <NodeID> - Nominates that an Object is part of one or more algorithms (NB: a node can be specified within one or more algorithms)
Include <WorkFlowID> in an <Object> - Enabling definition structures of the WorkFlow (including configured TranVec processing) for an operand of a node
Add Authority=<DaoRock=DaoRockID, JID, MANDATORY> to WorkFlowID - Access to structures informing coded checks as per the listed functions of DaoCanRock (IP#2022900007)
Add Authority=<Person=PersonID, JID, MANDATORY> to WorkFlowID - Access to structures informing coded checks as per the listed functions of DaoCanRock (IP#2022900007)
Add AlgoNodeVector=<Algo4=NodeID, Algo5=RELDescriptortype, Algo6=RELDescriptor1, Algo7=RELDescriptor2> to WorkFlowID - Access to shared structures accessing authentic code for, and informing of, shared standard calculations as per the listed standard algorithm of Algo (IP#2023900667)
Add TranVecSchema=<SchemaID> to WorkFlowID - Sends the Schema=SchemaID for implementation of the TranVec structure (IP#2023900144) (as a matrix as per the Interpreting Device specification of TranVec, or which might actually invoke sending of a list of vectored parameters as per below)
Add <DefinitionName>=<definition> to WorkFlowID - Anything already listed or envisaged for datapoints, models or variables in:
• QMakeCtrl: name, descriptor (text), physical position address descriptor, device specification, device address specification, min value, max value, maximum allowed maintenance cycle length, currently_outOfServiceBar, managed by (eg IP#2022228135 or better method including an individual identification via relevant key), minimum poll frequency, maximum poll frequency, version, owned by, maxallowablelagtime (mseconds) or other variable/s proxying for required poll frequency, and any other useful item,
• Parameters contained in TranVec specifications,
• Parameters listed or implied by specified ParCent processing, including cost, lag times, restrictions (eg time of day), trigger levels, mandatoriness/sufficiency etc,
• Other processing parameters: minsettle-, lag-, cycle-, tolerance- times,
• Owner as PersonID, JID,
• Copy-, Destination- specifications,
• Trimlength, AgeToDispose,
• Usage, eg Allowable AI specificationID, excluded device typeID, Device typeID allowed, Excluded JurisdictionID, Allowed JurisdictionID, PostProcessingBlock (ie not available other than the designated output node), Block chain standardID (if not implemented in Algo), a particular global time reference confirmation deviceID, OVERRIDE PROCESSED DATA TYPE, BLOCK CHAIN authorizing identification transaction deviceID, state processing handle for personal device as an interface,
• Anything else required.
Keep <Keep#><NodeID> sendperiod=<period> units <units> with <Send#><init> - Store <Keep#> of the most recent copies of the NodeID output permanently in each record, but message out the most recent <Send#> copies every sendperiod=<period> units <units>.
<RoleID>.My(<NodeID>) <N> - In-code reference to the Nth previous value of this object's current NodeID for the WorkFlow (=Role) (as used by the compiler allocating an instruction into a particular pipeline designated as <N>, being the appropriate relativity for the data currently at that node, assessed as appropriate to the total Algo/Series-approximation/Series-duration/TranVec "depth" and any compiled-in application-specified ADDtimes for the part of a calculation existing as an instruction at a particular node).
Pad WorkFlowID TIME <#> msec - To improve parallel path load balancing at the WorkFlow (=a device object role) via an adjustment, where <#>=0 means no padding. This is envisaged as being a possible runtime operator adjustment.
ObjectID TIME <#> msec - To allow device level load balancing. This is envisaged as being a possible runtime operator adjustment.
ObjectID TIME 0 - Bring the device back up to full speed again. This is envisaged as being a possible runtime operator adjustment.
These are an extension of the ParCent methodology already asserted in any base device environment (ie as invention IP#2021202215 or IP#2022203235) or in any clouds or discrete wearable, carriable, concealable, attachable or other devices.
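As a rough illustration of how a device might process a few of the new instruction strings above, the sketch below parses the "Pad … TIME … msec" and "Add Workflow …" phrases into a padding registry. The grammar, registry and return strings are all assumptions for illustration; only the phrase shapes come from the function table.

```python
import re

# Hypothetical dispatcher for two of the new DigEOps-style function strings.
# The parsing grammar and PADDING registry are assumptions, not the spec.

PADDING = {}  # workflow_id -> extra msec inserted for load balancing

def handle(instruction: str) -> str:
    m = re.fullmatch(r"Pad (\w+) TIME (\d+) msec", instruction)
    if m:
        wf, msec = m.group(1), int(m.group(2))
        if msec == 0:
            PADDING.pop(wf, None)      # <#>=0 means no padding
            return f"{wf}: padding removed"
        PADDING[wf] = msec
        return f"{wf}: padded {msec} msec"
    m = re.fullmatch(r"Add Workflow (\w+)", instruction)
    if m:
        return f"workflow {m.group(1)} registered"
    raise ValueError(f"unrecognised instruction: {instruction!r}")

handle("Add Workflow WF7")
handle("Pad WF7 TIME 3 msec")   # WF7 now padded by 3 msec
handle("Pad WF7 TIME 0 msec")   # padding removed again
```

A full implementation would of course validate the WorkFlowID against the registered WorkFlows and apply authority checks before accepting the adjustment.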
5. Standardize on data standards through additional DETPAT structures, data models and tools for processing AI (WAlvStream)
The standardizations should reflect, and where necessary extend to cover, emerging gaps beyond the needs of historic matrix processing, such as:
• For maybe 40 years, Laplace transformations between frequency and time domains have enabled digital sound.
• For maybe 35 years, construction of digital video signals to include other graphics has been accomplished using matrix processing. For example, transformation matrices enabling 2D to become 3D or distributed surround-system views, and also processing of information about changes to graphics over time, are transformed to a screen map and conflated with the screenmapping (or map processing) that is occurring along the timeline of the digital video stream. Weather reporting using GIS-type information has made static displays a thing of the past.
• For maybe 30 years, Roto applications have enabled graphical "erasures" in video streams and enabled computers to make decisions based on learning from previous data.
• In the last 5 years, AI processing has been applied to "retail" technology, including assets implementing identification, because chip performance has met software offerings with sufficient response to people's needs at a sufficiently low price.
Block chain extensions are required because, unfortunately, besides the majority of legitimate and commercially attractive efforts applying this type of processing, there is growing nefarious use: faked voice recordings and fake videos with identifiable lifelike faces and "spoken words" being easily mistaken as "evidence" but bearing no relation to truth. As well, training data conforms to the universal truth of "garbage-in-garbage-out".
Poor instantiations of AI training are capable of producing serious risks to individuals and communities (eg false criminal convictions, power system and economic system collapses), while at the same time other instantiations offer valuable new and promising efficiencies (eg a standard hardware/software environment for widely variable experiential training and education) and potentially affordable solutions for some of the most insidious and intractable problems faced by various societies (eg suicide detection, digital experiential mental health treatments).
It is therefore high time to implement protections against the possibility of such evil without crippling the possibility of good that is just as likely.
The context for this part of the invention is an acknowledgement that MANAGING DATA RISK is important if there is a possibility that the data will be used to create or change systems that operate where there is risk, requiring that:
• People must understand that risks include business risk, safety risk and legal liability.
• When using AI for the purpose of developing new systems, environments, and assets (including datasets), and or for the purpose of enhancing existing systems, environments, assets and or data sets, certain data management processes are needed (assessment, planning, migration, testing, authorization, implementation, maintenance and or similar) to reduce the risk of AI being used irresponsibly as a result of poor learning data. Because safety/financial and other risks in certain implementations are likely to be high, this invention makes provision to enable data designations and or process controls for sufficiently assuring high-risk AI learning quality processes in situations where they may already be - or may yet become - very important. The provisions of this specification include controls specifically related to:
o 5A. Creating and storing data for AI
o 5B. Processing data in AI Learning
o 5C. Implementing Instability Assessment processes
• Statistically meaningful measures of the quality of all learning data processed by learning AIs that produce AI-informed Processing Modules should be explicitly recorded against every Processing Module version, in order to inform of the risk of error and or of the possibility of bias in using a particular version of the AI-informed processing module.
• Controls or processing that implements AI learning should exclusively produce outputs explicitly inheriting the lowest level of input component risk, and should use data only of appropriate age relative to system changes that may have occurred.
• Controls/processing that implements AI learning into components should default to operate in only certain high-integrity ways, such as "XXXX outputs must be above levels corresponding to particular levels of hazard and operational risk" (eg an AI learning control output becomes "unusable in hazardous environments" when learning data may have been tainted by unknown-sourced learning, but such problematic data could in certain circumstances be "Allowable in virtual non-physical gameplaying environments by users aged 18+ so long as it is labelled 'Unknown Source of System Behaviour, Environment and Strategy'" or some such).
• Controls on modules that implement AI learning should conform where necessary (as assessed by the potential risk) to:
o standards that clearly define simple as well as more flexible constructed block chain controls over security, ownership, privacy and accessibility for each and every possibly relevant AI-learned component (eg SafeXShare: separate Australian standard patent IP#2021900281); and
o implementation via only highly reliable control mechanisms (possibly even with legally classified registrations) that exclusively allow application of these controlled components based on explicit accountability and or evidence of a relevant data authority regarding properly maintained minimum learning data standards and data storage of each of the AI components, relative to a projected level of acceptable system risk for the output; and
o production of outputs (including reports/decisions) that can only be implemented via a second authority informed of the quality of the learning and the particular implementation it is authorized for (ie controls putting the output in place, as well as controls on the report/decision or system being made of it, and authorizing the pairing as appropriate with all due regard to the ongoing use of outputs elsewhere).
• An acknowledged assumption that all data will eventually be used for AI learning, and that therefore all data and produced learning controls should be characterized with parameters sufficient to produce acceptable assessments of quality, and or to be presented acceptably for use in AI learning, including with effective data depersonalizing. This implies that uncharacterized data is automatically, legally designated by producers or owners as "UNUSABLE" or "Dubious quality" (or similar) unless it is legally certified otherwise by the sufficiently registered owner to the standards of legal accountability of the jurisdiction.
5A. Controls for creating Data that may be applied with or learnt into AI now or in the future
Strictionary (or similar) - It is envisaged that data for AI learning should be created using the same controls recommended in the WaveSnap part of this specification, including DataDefVIDs (standard data definitions) and SelfDefVIDs (non-standardised data definitions) via an implemented Strictionary (see Heap: separate Australian provisional patent IP#2023900027) or similar. This also requires carefully maintained accesses to (ie including protection of) the Strictionary, and dating and storage of the created data items (using information stored in the DETPAT leaf record/s of the block chain controls), to ensure any need for migration between versions of data definitions (that may have been upgraded or reorganised) will be assessed and therefore appropriately managed before the learning from, or other subsequent processing of, auxiliary data sufficient for purpose and quality can be authorised and made. According to the maintained quality standard required (in many cases none), unsigned-off assessments or incomplete migration/testing plans would result in further AI processing of the set being rejected, or would require additional authority to proceed. Until there are strong technical controls preventing irresponsible use of data produced through GIGO (garbage-in-garbage-out) methods, all asserted use of data produced using data dictionary references should be recorded to include the legally responsible authorities that maintain the data dictionary, as well as the health of data issued from a system producing data for use with AI of any designated quality. This is likely to limit many instances of high-value AI use to corporations for a good period.
A-1. Grouping - A function for logical grouping using the relative location of self-devices and or other devices as boundary devices when configuring data for AI use. Rather than needing to tag-read every object relevant to a learning situation into a group with some Q-tag or other reading device, this specification includes a method for including other things into the group using nominated boundary devices (including the naming of parcels in the group as one of the physical boundaries of the logical group), so that items selected by the operator's own arranged circumstances or by using Q-codes can be app-identified boundaries (including beacons, pallet tags and/or MakTrck product item tags) for the purpose of capturing a logical group of objects. That group can then be listed for a location data group and further processed so that the limits of the group can be "addressed" by group-only conditioned messages for later data operations. It could be used to fetch data related to particular objects for the purpose of de-personalisation.
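The boundary-device idea can be sketched as an ordinary point-in-polygon membership test, with the boundary devices supplying the polygon vertices. The coordinates, identifiers and planar geometry below are assumptions for illustration; a real implementation would use the beacon/tag positions and identifiers the specification names.

```python
# Hypothetical sketch: boundary devices are reduced to (x, y) positions
# forming a polygon, and logical-group membership is a plain ray-casting
# point-in-polygon test. Real deployments would use actual beacon/tag data.

def inside(pt, boundary):
    """Ray-casting point-in-polygon test against the boundary devices."""
    x, y = pt
    hit = False
    n = len(boundary)
    for i in range(n):
        (x1, y1), (x2, y2) = boundary[i], boundary[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

def logical_group(devices, boundary):
    """Return the IDs of devices captured inside the boundary polygon."""
    return [dev_id for dev_id, pos in devices.items() if inside(pos, boundary)]

# Four boundary beacons enclosing a 10 x 10 square; one tag inside, one out.
boundary = [(0, 0), (10, 0), (10, 10), (0, 10)]
devices = {"tag-A": (5, 5), "tag-B": (15, 3)}
group = logical_group(devices, boundary)
```

The resulting list is what would then be addressed by group-only conditioned messages for later data operations.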
B. Oblige users to use a well controlled AI learning data modelling environment
This invention recommends that the list of items for MANAGING DATA RISK earlier is a required part of responsible AI use. A tool environment that is widely taught (in schools and higher education institutions) should be developed. Entry to it should be encouraged, or included as part of cloud environments.
B-1. Devices - At its simplest, eg for report writing of bound situations as authorized by the authority/ies responsible for the system/s sourcing data, standards could mandate use of a block chain device for AI components such as the "DigETrace" device (Australian provisional specification IP#2022900156) of this invention, specified for an adjustable datamodel related to the needs of managing data captured from a single item of relevant equipment and processes stretched across one or more different facilities, and across any one or more lifecycles of equipment, processes and products relevant to the item of equipment, for AI objectives.
For more generalized complex scenarios, the invention AuMake (separate Australian standard patent IP#2022215299) raises the possibility of learning and preventative activities with respect to health and a wide variety of general workplace, agriculture and community processes. The TickTraceE (separate Australian provisional patent IP#2021901140) specification defines a controlled collaborative research environment. Implementations could combine the beneficial structures of the AuMake or DigEMake specifications into the functions defined in this specification, but they are not part of this specification.
B-2. Mapping - A further function for producing reports or digital representations, in spatial or other representational maps, of one or more groupings in a graphical analytical manner that allows relating to a series classification of particular parameters as well as particular designations characterizing the domain, and that could be implemented to filter them by selecting via pivot-tabled arranged set names captured according to their parameter series classifications. This would be used in modelling/simulation/training tools with a touchscreen menu map tablet (as per the specification of Voxerve: included Australian provisional patent IP#2022901956) that could also implement other processing, eg designations to lists of "OUTOFSERVICE" objects/devices and/or other processing via moved or copied objects within one or more other/new non-Live designated representation maps/lists that can be:
• filtered
• selected from
• copied or marked for change; and or
• written into,
as a member of a grouping somehow related to a particular set of dimensions (including spatial, time, frequencies etc) so these can be separately accessed conveniently for processing within the modelling/simulation/training/reconfiguring task.
B-4. Access to proforma cataloguing systems - Where industries are already using AI (eg studios building CGI effects in movies), they might be willing to share their structures implementing library systems of digital assets with community or other professionals, who would also need to responsibly develop post-production and post-publishing processing facilities when teaching AI to seek patterns in produced data within the bounds of data privacy as per TickTrackPLUS (separate Australian standard patent IP#2021232845).
B-5. Other Graphical Manipulation/Learning Functions
• Auto Slip Test Capability and Plans - with multiple coordination points defined by multiple fields of either absolute or relative terms
• Stretch and/or compression of series between coordination points, with redesignation confirmed into block chains
• Direct altering and/or matrix-implemented conflations related to or by the above, with stored rendered processing adopting the necessary downgrade of quality/authentication and being included via the data definition in the block chain
• Slide actions (like in movie editing packages, but also working for non-movie stream sets of data)
• Explicit visibility of transfer functions produced by AI, including for frequency transform manipulations (signal transpositions as well as slippage in the time domain), testing and comparisons
• Regenerative learning path builder with proforma paths.
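The stretch/compression of a series between coordination points can be sketched as simple linear resampling with the endpoints pinned. The function below is an illustration under my own assumptions (the specification does not prescribe an interpolation method):

```python
# Sketch only: stretch or compress a sampled series between two
# coordination points by linear interpolation, endpoints pinned.
# The specification does not fix the resampling method; this is one
# plausible minimal choice.
def stretch_segment(samples, new_length):
    """Resample `samples` to `new_length` points between its endpoints."""
    if new_length == 1:
        return [samples[0]]
    n = len(samples)
    out = []
    for i in range(new_length):
        # Position of the new sample in the original index space.
        pos = i * (n - 1) / (new_length - 1)
        lo = int(pos)
        hi = min(lo + 1, n - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```

A `new_length` larger than `len(samples)` stretches the segment; a smaller one compresses it, which is the pair of operations the bullet above names.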
B-6. A regulated device for a standard professional AI development environment - It is envisaged that the risks and value of controlling more generalized and/or commercialized functions may deserve further detail of the above functions, and that a facility building them as a standard professional AI developers' environment may benefit from being implemented on a recognized standard professional AI development device (potentially regulated).
This would be an enhancement over the DigETrace (Australian provisional patent IP#2022900156) included in this specification, and different to any of the TickTraceE, DigEMake and AuMake specifications, whose designs are referenced here and asserted elsewhere but are not included in this specification.
The functions controlled by that further device would include functions given by this specification, including with the further detail required to implement items from the last section of this specification ("The Further Opportunity").
C. Oblige users to use instability assessment, including classic control theory analysis techniques, on AI-taught digital transfer functions used in public and other high-risk systems of operation being enhanced with AI
In potentially identifying new relationships and then exploiting these within systems, there is a chance of introducing unnecessary and potentially dangerous instability into important systems of control.
In well-established classical control theory, the mathematical features of any transfer function (ie a mathematical description of a control system, including one produced by AI learning), expressed as a summation of cyclical (sinusoidal etc) sub-functions, can be explored, manipulated and checked for characterizations indicating its potential for instability under certain excitations (poles in the right half of the complex plane).
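As a minimal sketch of that classical pole-location screen: a continuous-time transfer function is unstable when any pole has a non-negative real part. The pole values below are purely illustrative:

```python
# Sketch only: classical-control stability screen on a transfer function's
# poles. A continuous-time system is unstable if any pole lies in the
# right half of the complex s-plane (real part >= 0). Pole values here
# are illustrative, not drawn from any real AI-taught system.
def is_stable(poles, margin=0.0):
    """True if every pole has real part strictly below -margin."""
    return all(p.real < -margin for p in poles)

# A damped second-order system: conjugate poles at -1 +/- 2j -> stable.
stable_poles = [complex(-1, 2), complex(-1, -2)]

# The same system with one pole pushed to +0.3 -> unstable.
unstable_poles = [complex(-1, 2), complex(0.3, 0)]
```

The optional `margin` lets the check demand a stability margin rather than bare stability, which matters when the poles themselves are only estimates fitted from test data.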
Such a potentially unstable system often has sweet spots that favorably propel "solution approximations" that are less complete than they need to be in certain situations. When used without controls for those situations, the system produces errors in much the same way that, in what is apparently considered a less risky experiment, some people are arrested because erroneous AI recognition systems misidentify them - apparently due to their trainers' use of biased data when "regeneratively" and keenly training the AI with insufficient process (checking biases, testing widely). While such results are only disastrous for a small number of people (which might be considered acceptable if you are not one of them), where the result can be an erroneously controlled power supply system, economy or retailed self-driving car, things are rather more pointed, and new confirmation approaches should be constantly researched.
The exploration, manipulation and characterization of transfer functions is empowered by the techniques expounded in module Algo, included in this specification, as well as by the opportunities for processing many iterations of test data using the design expounded within this specification. While the whole process of iteratively, mathematically reproducing a transfer function (produced by an AI learning system) as a series of sinusoids is not reproduced here, it is a well-established mathematical process (Fourier analysis) and therefore unlikely to be patentable.
The external framework for setting it up is already in use by any responsible AI module proliferator, and has been summarily asserted in PredEPrev (separate Australian standard patent #2021218217). A framework for conveniently checking biases is asserted as HoHoBal (separate Australian provisional patent IP#2021903942), which has been bibliographically associated with PredEPrev as well as with DETPATDIV (separate Australian standard patent IP#2022203235), being the patent onto which this will be projected as a patent of addition.
It needs to be a mandatory step in public and high-risk AI enhancements to check for stability via subsequent process steps based on either :
• performing Fourier analysis on each decision parameter function, against series parameters (usually but not always streamed as time), that is produced by the AI - assuming it has produced them and they are accessible. Large holes in learning data across the whole produced vector, such as data biases, may manifest as important detected instabilities when the transfer function is tested in complete profile, where they might otherwise have remained undetected. As such, any future input parameters landing outside the learned solution space - to the point that the difference between the decision classifications is being made with low confidence - are at risk of producing erroneous results; or
• if there are no available decision functions, exploring modelled summations by iterative testing (in theory - confirmation of researched success is needed) where :
o the profile output of an AI-taught module responding to statistically meaningful sets of data is compared against
o the profiled results from a separately modelled system composed from variable contributions of each and every one of the profile frequency summations that are separately Fourier-serialised candidate components, each with the appropriate residual component according to the number of iterations (ie serial components) used.
It is proposed (and possibly already tested and confirmed elsewhere) that there is always some combination of the top-level variable contributions (ie for the total of all modelled Fourier contributions) that will produce a sufficiently similar transfer response profile to what the AI-produced system uses, and this should be tested over a statistically meaningful sized set of data containing sufficient items of data for each frequency.
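The proposed comparison - reconstructing a response profile from a limited set of Fourier components and measuring the residual against the module's own profile - can be sketched in miniature. A pure-Python DFT is used here purely for illustration (realistic profiles would need FFT-scale processing, which is the point of the pipeline architecture this specification describes); all sample data is invented:

```python
# Sketch only: compare a module's sampled response profile against a
# reconstruction from its largest-magnitude Fourier components, keeping
# the residual as the similarity measure. Pure-Python DFT: tiny N only.
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

def reconstruct(X, keep):
    """Inverse DFT using only the `keep` largest-magnitude components."""
    N = len(X)
    top = sorted(range(N), key=lambda k: -abs(X[k]))[:keep]
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in top).real / N
            for n in range(N)]

# Illustrative profile: one dominant frequency (period 4, 16 samples).
profile = [0.0, 1.0, 0.0, -1.0] * 4
approx = reconstruct(dft(profile), keep=2)   # the two conjugate bins
residual = max(abs(a - b) for a, b in zip(profile, approx))
```

For this single-frequency profile, two conjugate components reconstruct it almost exactly, so the residual is near zero; a profile the components cannot explain would leave a large residual, flagging the mismatch the text describes.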
It is likely that only a very fast computer (eg one using quantum processing via a pipeline architecture designed specifically to manage Fourier and or other iterative componentized calculations, one of the key features of this specification) would realistically be able to determine that set of contributions.
Further analysis (not included here) of those transfer functions can resolve each function as a series of poles in the complex plane, to indicate likely instabilities where those poles are in certain regions and there is an odd number of them. NB: this needs confirmation by a classical control expert.
The further analysis will be much assisted by the data manipulation tool functions suggested in section 5b, including the setting up of paths for AI training in the kind of manner summarily discussed in ShingleMake (separate Australian standard patent IP#2022224795).
These proposed stability checking techniques, and the likely many others using iterative processing developed over the last 30+ years, will be better enabled through fast processors (including quantum processors) able to implement WavSnap-pipeline environments according to the demands of specific and assured sourced algorithms that are developed, maintained and mandated via techniques and structures specified in Algo (included in this specification), which are here included as part of the Patent of Addition assertion here made, or in the specification Heap (separate Australian Provisional patent IP#2023900027) asserted as part of IntegTech (separate Australian standard patent IP#2023201021), which would also benefit from this same patent of addition.
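Since WavSnap pipelines are only named here, not specified, the following is a loose generator-based illustration of the staged stream-processing idea - each stage consumes a stream and yields a transformed one, so componentized steps can be chained. It is not the WavSnap design itself:

```python
# Loose illustration only (not the WavSnap specification): a staged
# stream-processing pipeline where each stage is a generator, so
# componentized calculations can be chained without buffering the
# whole stream.
def source(data):
    for item in data:
        yield item

def scale(stream, factor):
    # A stand-in for any per-item transform stage.
    for x in stream:
        yield x * factor

def running_sum(stream):
    # A stand-in for any accumulating/iterative stage.
    total = 0
    for x in stream:
        total += x
        yield total

def run_pipeline(data):
    return list(running_sum(scale(source(data), 2)))
```

Because each stage only holds one item at a time, the same shape maps naturally onto closely coupled hardware pipelines of the kind the specification has in view.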
THE FURTHER OPPORTUNITY of implementing item 5 (AI data Processing and Assessment processes)
The set of TRUTH ROBUSTNESS adjustments suggested in item 5 will make socially and economically valuable new opportunities more realistic and more efficient, because better tested and more assured data-producing AIs will be less error prone, and people and systems will be better protected from harm related to AI-based enhancements if the AIs are less often erroneous.
The method for implementing new and acceptable opportunities from TRUTH ROBUSTNESS measures would include :
i. Create new AI proposition formats in health, safety, education, energy management, manufacturing, finance, development and other domains. This is not a technical function: it is the identification of incentives for others (especially the general public) to provide data, and for other groups of people to train systems to use it. Incentive to provide data likely implies ensuring people will not be compromised by it - privacy of supply and depersonalization.
ii. Distil and design standardized processing paths for these propositions relative to relevant formats and other sources of time-based value and their parameterization - weather temperatures & wind, human learning records, development priorities, processes. One of these already exists and was foreshadowed in AuMake (separate Australian standard patent IP#2022215299), because of its use by entertainment producers for CGI assets, based in imaginary or real environments, on real people who do not have to do the impossible things that are "virtually" happening. CGI technicians have been busy deconstructing elements of representations (of environments, things and people) so they can be graphically combined (using matrix manipulations) into new representations that are not as real and in some cases unable to be real. Their work is valued and valuable, and the technologies for reproducing CGI with AI are proliferating. AuMake suggested the use of AI for exposure therapy on the basis that library systems of the digital assets used to produce them would be responsibly developed, with the post-production and post-publishing processing facility asserted in TickTrackPLUS (separate Australian standard patent IP#2021232845) to allow users to seek patterns in produced data within the bounds of data privacy. If AI is to be used in public health systems, this level of control needs to happen now. Block chain is likely part of the needed control solution. The use of AI on this data requires particular applications of block chain, covering structures enabling the protection and assurance of data as functionality variously described in components of IntegTech (separate Australian standard patent IP#2023201021) as well as this specification. It is likely that CGI developers have already implemented tools discussed in item 5, including with structures based on storytelling. It would very likely be of benefit to develop timeline or other series-parameter-plotted structures for orthogonalising:
a. learning,
b. environments, and
c. events
as part of a research project, so that it can be used as a framework for implementing future deconstructed lessons, experiences and stories in entertainment-based experiences.
iii. Explore different ways of characterising the data for potential deconstructed components through:
a. designing alternative formats (eg using AI) and processing to suggest alternative sets; &
b. in particular, using regenerative AI processes in controlled circumstances (eg research projects) on high-confidence, well-understood data sets to identify possible correlations indicating possibly less-visible, high-value regular patterns that indicate value in AI'd simulations using deconstructed component elements derived from data sets from diverse sources (see PredEPrev: separate Australian standard patent #2021218217). For example, using data from several similar situations may highlight a visible time-offset "cyclical" performance relationship between incidences of failure that is related to cyclical, or even more easily spotted, patterns in one or more of the length of some piece of nearby equipment, the speed it operates at and or the size of some involved gears - an apparent potential resonance problem. Include bias analysis (HoHoBal: separate Australian provisional patent IP#2021903942).
iv. Create new AI proposition formats by analysing new combinational formats for those deconstructed and or transformed (Laplace, Fourier) domains in time and ALSO against non-time variables (ie time vs some novel displacement or some measured non-x-y-z/displacement item of data), on account of the possible value of their more systemized applicability (ie the possible common use of standardized series of something <non/less-time> such as frequencies, 3D grid references for space/air/sea/landforms, 2D/3D grid references for air/heavenly bodies, radial location map, spatio-functional maps of the human brain, spatio-functional maps of the human face, 3D/radial screenviewspace, plotlines, growth nodes (eg different tree structures), dispersal modes (think fractals), wavefronts, profiled frequency responses (eg power system frequency test signal response profiles) etc).
v. Implement the propositions with structures fitting the relevant domain SERIES vectors, ie as time, displacements and, where useful, include maps using SMAPS on DigETrace devices or similar, on sufficiently controlled and attached cloud files (cloud attachment specification not included).
vi. Choose and design commercial and other socially incentivised processes for the purpose of having a wider variety of minds producing component control modules based on the most successful <non/less-time> related standardisable series components, using acceptable data risk quality matching (as an extension of the same way motivated people already used regenerative AI to develop deep fake video streams).
vii. Complete the proposition for these trainers and other potential users (subject to permission by collaborators, of course) by developing easy-use proformas to encourage simple security, ownership, privacy and access block chain control models that can be attached to the tools of each proposition, and allow them to use and attach similar models to groups of their own components, with thorough data controls matching the preferred use of the data, and with developed proformas in line with any relevant possible legal requirement and operation (ie with attachments to audit data, sample testing results, proxies, transfers, payments, contracts etc).
viii. Implement the new AI proposition formats and components within the relevant new AI propositions of step iv, including with block chain controls processing able to manage, manipulate and effectively depersonalize or de-identify their own accessible appropriate data for possible wider sharing within all the necessary systems components of one or more current or future collaborating users, who will also reference their common use of standard data definitions (eg Strictionary) sufficiently to have their data acceptably processed as learning data, and that can also be implemented into easily used select-and-reconstruct pivot-table-like tools for new modelling/simulation/training processing systems, including with WavSnap pipelines.
ix. Continuously seek and promote standardization.
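A toy sketch of the kind of block chain control step viii describes - records de-identified before being hash-chained so authentications can be passed on without personal fields - under my own assumption that a simple SHA-256 chain suffices for illustration; the field names are invented:

```python
# Sketch only: a minimal hash-chained record log whose payloads are
# de-identified before chaining. SHA-256 linking and the field names
# are illustrative assumptions, not the specification's design.
import hashlib
import json

def deidentify(record, personal_fields=("name", "address")):
    """Drop personal fields before the record enters the chain."""
    return {k: v for k, v in record.items() if k not in personal_fields}

def append_block(chain, record):
    """Append a de-identified record, linked to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = deidentify(record)
    # Canonical serialization so the hash is reproducible by auditors.
    body = json.dumps({"prev": prev_hash, "data": payload}, sort_keys=True)
    chain.append({"prev": prev_hash, "data": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain
```

Because each block hashes both its payload and its predecessor's hash, any later tampering with a de-identified record breaks the chain, which is the auditability property the step relies on.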
Claims (4)
1) Claims for basic IntegTechPLUSwd operation method include:
a) Using distributed digital software mechanisms accessible via networks, and employing advanced digital technologies to implement fast processing including pipelines on single, distributed and or array devices, including:
i) existing and new data models, including with further other definitions inputting and controlling pipeline processing,
ii) data and work flows able to be configured by instructions and or data input by compilation processes and or standard message protocols, and or DigEOps functions,
iii) cloud processing or devices able to be implemented as a DigETrace device, AuMake device or any other similar device,
iv) with structures able to confirm the nature of referenced data and for recording quality assessments and or various identifications, including of personal identifications, within one or more records related to system, component and or data accountabilities;
b) Implementing and testing schemes of control developed using Detpat techniques with reference to:
i) controlled data management functions,
ii) a body of controls able to ensure data is produced as block chain protected data that includes protected specifications of the time the data was produced or modified and of what specification the data is keyed to, including well-controlled dictionaries,
iii) other devices;
c) Applying the architecture with advanced digital technologies where this is implemented, including via block chain techniques.
2) Claims for optional IntegTechPLUSwd operation method, including any of:
a) Proposing to suppliers to work on a joint IntegTechPLUS OS and AI development project based on block chain controls implemented via a messaging system using techniques as per Australian standard patent #2021202215 ("Detpat");
b) Implementing new opportunities of TRUTH ROBUSTNESS measures;
c) Using distributed digital software mechanisms accessible via networks, and employing advanced digital technologies to implement fast processing, including on-board or distributed pipelines for accountable processing where high public value or risk requires:
i) data models to reference assured shared algorithms,
ii) data models and tools controlling use of AI, AI component devices and diverse storage of large data-intensive records, with de-personalisation functions where required, and
iii) AI components being implemented on assured devices;
d) Implementing and testing schemes of control developed using Detpat techniques and block chain controls appropriate to key functions, assurances, policies and bodies of law, with reference to any AI components that might be regulated.
3) Claims for implementing new opportunities of TRUTH ROBUSTNESS measures include any of:
a) developing timeline or other series-parameter-plotted structures for learning as part of a research project, so that they can be used as a framework for implementing future deconstructed "lessons" in entertainment-based experiences;
b) providing prototypes of new combinational formats for deconstruction and or transformation (eg Laplace, Fourier) and ALSO against useful non-time variables;
c) sets of proforma block chains implementing security, ownership, privacy and access controls that can be attached to tools and to groups of components, in line with relevant legal requirements and commonly needed operations, including attachments to audit data, sample testing results, proxies, transfers, payments, contracts and others;
d) a tool that easily implements controls, including block chain controls, able to transfer authentications through the process of effectively depersonalizing or de-identifying controlled groups of data for sharing, including with reference to controlled and auditable typing to a standard data definition facility (eg Strictionary);
e) tools that only allow data to be processed as learning data according to policies implemented in block chain controls (eg the lowest common denominator with respect to any processed source);
f) pivot-table-like tools able to select and re-construct sets of data for new modelling/simulation/training processing systems;
g) models for orthogonalising learning, environments and events;
h) whatever else may be patentable that is mentioned in this specification and might not already be around.
4) A DigETrace device comprising:
a) one or more memories;
b) one or more processors, communicatively coupled to the one or more memories, to:
i) receive a communication that is an app to configure the device to use or share information as part of a designated IntegTechPLUS system or subsystem,
ii) receive information identifying one or more self-support actions performed by a user in relation to a data model and code being shared in the designated system for personal devices of various categories,
iii) install the data model and code as structures related to an interface on the device,
iv) receive further messages related to processing implemented within the structures supplied, for coordination with the programming of the designated system;
c) communications to sub-devices, processors or memories able to provide measures or signals from instrumentation to be referenced in code installed on an autonomous data-intensive intervention or measuring device, to check usage, service requirements and error conditions; and
d) where required, communications to sub-devices, processors or memories able to provide dialogues configuring instrumentation from code installed on the DigETrace main device and performing according to received messages, including IntegTechPLUS or DigETrace certification functions.
Applications Claiming Priority (13)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
AU2020902146 | 2020-06-26 | ||
AU2020902146A AU2020902146A0 (en) | 2020-06-26 | BAC - An explicitly assigned security device receives messages from possibly diverse sources. It arbitrates and records any relevant authorisations it is programmed to give, in an appropriately auditable record as it updates and passes messages on that will allow a particular other prescribed device to begin operation according to the device’s own programs that interpret the updated message that it receives. | |
AU2021103348A AU2021103348A4 (en) | 2020-06-26 | 2021-06-15 | BAC: BlackBoxAuthority Coordinator is a device and systems for implementing it to coordinate multiple other IoT devices. It is suitable for a wide variety of situations and applications. |
AU2022900156 | 2022-01-30 | ||
AU2022900156A AU2022900156A0 (en) | 2022-01-30 | DigETrace – methods, systems and devices exploiting advanced digital technologies in distributed environments with distributed devices (including cloud environments and connected possible remotely operated data-driven autonomous OEM devices of diverse kinds) and where collected data from those distributed systems is made accessible to AI and other processes to improve and aid in diagnosis or empower process improvements where accountability in use and design of diverse collaborative activities is important to the wellbeing of the public and enterprise. | |
AU2022201926A AU2022201926A1 (en) | 2020-06-26 | 2022-03-21 | BlackBoxAuthorityCoordinator (BAC) is a device able to receive IoT messages from diverse sources and when installed can be programmed to initiate safe operation of equipment and can provide audit of changes to its control of operations and all subscriptions to its authority which as needed are stored locally, copied and/or uploaded to a secure backup location via messages or other protocols with provision for authentication and where necessary timed persistence of received instruction messages and/or position and status information. |
AU2022203235A AU2022203235A1 (en) | 2018-04-24 | 2022-05-13 | Divisional of DETPAT |
AU2022901956A AU2022901956A0 (en) | 2022-07-13 | VoxServe – methods, systems and devices to display and transmit voice configurable information that is selectable for the purpose of interaction within advanced digital transactions including block chain transactions sufficient for a wide variety of different transactions using advanced digital techniques. | |
AU2022901956 | 2022-07-13 | ||
AU2023900576 | 2023-03-04 | ||
AU2023900576A AU2023900576A0 (en) | 2023-03-04 | The ArrProc specification is an extension to the invention for implementing consistent asynchronous, non-linear, localized, state-driven and/or condition-based procedurality in distributed systems (and enabling safer application of AI learning, and extending the invention ParCent (separate Australian Provisional patent IP#2022903608) so that processing models of all kinds can more efficiently processed on highly parallel mass processing architectures (eg quantum device arrays) including for collision detection within peer-to-peer architectures. | |
AU2023900667A AU2023900667A0 (en) | 2023-03-12 | The Algo specification is an extension to the invention for block chain processing for consistent asynchronous, non-linear, localized, state-driven and/or condition-based procedurality in distributed systems (and enabling safer application of AI learning) for distributed information system processing where algorithmic approaches should be produced, systemized and managed in a way to be shared that is as consistent as possible with other shared systemizing structures. | |
AU2023900667 | 2023-03-12 |
Related Parent Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2022201926A Division AU2022201926A1 (en) | 2020-06-26 | 2022-03-21 | BlackBoxAuthorityCoordinator (BAC) is a device able to receive IoT messages from diverse sources and when installed can be programmed to initiate safe operation of equipment and can provide audit of changes to its control of operations and all subscriptions to its authority which as needed are stored locally, copied and/or uploaded to a secure backup location via messages or other protocols with provision for authentication and where necessary timed persistence of received instruction messages and/or position and status information. |
AU2022203235A Addition AU2022203235A1 (en) | 2018-04-24 | 2022-05-13 | Divisional of DETPAT |
AU2022203235A Division AU2022203235A1 (en) | 2018-04-24 | 2022-05-13 | Divisional of DETPAT |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2023204365A Division AU2023204365A1 (en) | 2018-04-24 | 2023-07-06 | FrAImWork – consolidations of DETPAT invention (block chain processing ...) helping ensure AI is applied safely & can be effectively regulated for wide use in distributed information system processing of any kind. Extra consumer protection models & levels of data standardization & verifiability are included to better support AI making better trans-economy & trans-industry productivity improvements, & enable a controlled environment for AI-enhanced activities related to confidential information including related to health risk prevention & confidence in emissions verity. |
Publications (1)
Publication Number | Publication Date |
---|---|
AU2023202418A1 true AU2023202418A1 (en) | 2023-07-27 |
Family
ID=77274368
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
AU2021103348A Ceased AU2021103348A4 (en) | 2020-06-26 | 2021-06-15 | BAC: BlackBoxAuthority Coordinator is a device and systems for implementing it to coordinate multiple other IoT devices. It is suitable for a wide variety of situations and applications. |
AU2022201926A Abandoned AU2022201926A1 (en) | 2020-06-26 | 2022-03-21 | BlackBoxAuthorityCoordinator (BAC) is a device able to receive IoT messages from diverse sources and when installed can be programmed to initiate safe operation of equipment and can provide audit of changes to its control of operations and all subscriptions to its authority which as needed are stored locally, copied and/or uploaded to a secure backup location via messages or other protocols with provision for authentication and where necessary timed persistence of received instruction messages and/or position and status information. |
AU2022215299A Pending AU2022215299A1 (en) | 2020-06-26 | 2022-08-12 | AuMake: Systems, devices & methods for monitoring, accepting input/declarations and/or assisting, controlling, collecting data or performing curated processes with accountability and permission-driven data security : configured in internal systems, or piggybacking external processing via block chained collaborative modules of education, therapy, conditional processing (eg registering, tracing), monitoring, treating, alerting and other configured services or regulatory processing in diverse settings including use in at-home care. |
AU2022259783A Pending AU2022259783A1 (en) | 2020-06-26 | 2022-10-26 | QMake-GenPofAdd : Further generalised device & structures to be managed on objectives of promoting common data interfaces including standard digital and learning device interfaces where advanced technologies are digitally implemented or encompassing systems are confirmed using techniques that include block chain and or DETPAT techniques including in natural and non-commercial systems of control and decision nmaking and including where these may be implemented as standalone or distributed models. |
AU2023202418A Pending AU2023202418A1 (en) | 2020-06-26 | 2023-04-20 | IntegTechPLUS - Further extensions for distributed block chain systems enabling systemizing and processing of distributed components including with temporal, spatial and or ordinally displaced inputs including data and stream aggregations in fast and closely coupled architectures (especially but not limited to quantum processors and AI learning & de/constructions) with : process model, standard interface needs for implementing and managing various types of processing pipelines with block chain architecture able to also coordinate authenticable calculations. |
AU2024200385A Pending AU2024200385A1 (en) | 2020-06-26 | 2024-01-20 | DevBauxSoak devices for data model systemizations using block chain controls including optional inoperability by containment of the device in its own container as well as included infrastructure supporting useful auxiliary block chain functions. |
Country Status (1)
Country | Link |
---|---|
AU (6) | AU2021103348A4 (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117424838B (en) * | 2023-10-31 | 2024-06-18 | 北京中瑞浩航科技有限公司 | Self-learning detection method for Internet of things equipment |
- 2021
  - 2021-06-15 AU AU2021103348A patent/AU2021103348A4/en not_active Ceased
- 2022
  - 2022-03-21 AU AU2022201926A patent/AU2022201926A1/en not_active Abandoned
  - 2022-08-12 AU AU2022215299A patent/AU2022215299A1/en active Pending
  - 2022-10-26 AU AU2022259783A patent/AU2022259783A1/en active Pending
- 2023
  - 2023-04-20 AU AU2023202418A patent/AU2023202418A1/en active Pending
- 2024
  - 2024-01-20 AU AU2024200385A patent/AU2024200385A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
AU2021103348A4 (en) | 2021-08-19 |
AU2024200385A1 (en) | 2024-02-15 |
AU2022201926A1 (en) | 2022-04-14 |
AU2022259783A1 (en) | 2023-09-07 |
AU2022215299A1 (en) | 2022-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Raji et al. | | Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing |
Chou et al. | | Continuous auditing with a multi-agent system |
Gerdes | | A participatory data-centric approach to AI ethics by design |
AU2023202418A1 (en) | | IntegTechPLUS - Further extensions for distributed block chain systems enabling systemizing and processing of distributed components including with temporal, spatial and or ordinally displaced inputs including data and stream aggregations in fast and closely coupled architectures (especially but not limited to quantum processors and AI learning & de/constructions) with : process model, standard interface needs for implementing and managing various types of processing pipelines with block chain architecture able to also coordinate authenticable calculations. |
Hird et al. | | New product development resource forecasting |
Balasubramaniam et al. | | Ethical guidelines for solving ethical issues and developing AI systems |
AU2021261831A1 (en) | | H4Z : Systems to identify, model, certify, verify and authentically counter-balance components of enterprises involving Scope 1, 2 and 3 emissions by direct association with products and processes in defined limited scenarios that will absorb or sink equivalent emissions &/or compensate for the negative climate effects of designated emissions. |
Bakhtina et al. | | Tool-supported method for privacy analysis of a business process model |
Ahmad | | AI-Enabled Spatial Intelligence: Revolutionizing Data Management and Decision Making in Geographic Information Systems |
Guédria et al. | | Extending the ontology of enterprise interoperability (OoEI) using enterprise-as-system concepts |
Whittington | | Wiley CPAexcel Exam Review 2015 Study Guide (January): Business Environment and Concepts |
Kaftannikov et al. | | Problems of structuring risks and ensuring legal relations in IoT |
Kamm et al. | | Blueprints for Deploying Privacy Enhancing Technologies in E-Government |
Čyras et al. | | Formulating the enterprise architecture compliance problem |
Bowne et al. | | Implementing Responsible AI: Proposed Framework for Data Licensing |
Moghaddasi | | A Simulation Framework for Identity and Access Management Based on Internet of Things Architecture |
Calvetti et al. | | Human-data interaction in incremental digital twin construction |
Sandström et al. | | Digital Twins for sustainability in building operations - How a large commercial real estate firm can use digital twins to generate sustainable values in the O&M phase |
Moody et al. | | Early Life Cycle Cost Estimation: Fiscal Stewardship with Engineered Resilient Systems |
Tzanakakis et al. | | The Concept of Risk Management |
Hakimi et al. | | Integrating Blockchain Technology for Secure E-Government Services: Opportunities and Challenges |
Krishnamoorthy | | Enhancing Responsible AGI Development: Integrating Human-in-the-loop Approaches with Blockchain-based Smart Contracts |
Chotib | | Electronic Architecture Planning in Indonesian Trade (Inatrade) Portal |
Balali | | System-of-Systems Integration for Civil Infrastructures Resiliency Toward Multi-Hazard Events |
CN118195320A (en) | | Real estate registration information sharing management system and method based on big data technology |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | DA3 | Amendments made section 104 | Free format text: THE NATURE OF THE AMENDMENT IS: AMEND THE DIVISIONAL PRIORITY DETAILS TO READ 2022201926 AND 2022203235 |