CN116579750A - RPA control data processing method and device based on artificial intelligence - Google Patents
RPA control data processing method and device based on artificial intelligence Download PDFInfo
- Publication number: CN116579750A (application CN202310858154.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- target
- target data
- layer
- portrait
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06Q10/10—Office automation; Time management
- G06F16/258—Data format conversion from or to a database
- G06F16/26—Visual data mining; Browsing structured data
- G06F16/483—Retrieval of multimedia data using metadata automatically derived from the content
- G06V10/761—Proximity, similarity or dissimilarity measures in feature spaces
- G06V10/82—Image or video recognition using neural networks
- G06V40/168—Feature extraction; Face representation
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
- G10L15/142—Speech classification or search using Hidden Markov Models [HMMs]
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L21/0208—Noise filtering for speech enhancement
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
The application provides an artificial-intelligence-based RPA control data processing method and device. In the method, a target data processing instruction input by a target user is acquired at an interaction layer. A control layer responds to the target data processing instruction and acquires target data to be processed from a data layer according to a target feature field. The control layer performs data processing on first target data and second target data according to an operation instruction to generate result target data, and the interaction layer displays the result target data. In this way, result target data responding to a voice instruction is acquired from databases storing different data types and displayed automatically, using voice instructions alone.
Description
Technical Field
The application relates to data processing technology, and in particular to an artificial-intelligence-based RPA control data processing method and device.
Background
RPA (Robotic Process Automation) is an automation tool that simulates manual operations to perform repetitive, low-value tasks. Executing these tasks with software robots greatly improves working efficiency, reduces the error rate of repetitive work, and saves time and cost.
In addition, RPA has been widely applied to data management, data detection, data analysis, and related tasks across industries. However, the databases managed by existing RPA systems are typically limited to highly structured data. For application scenarios with heterogeneous data sources, an RPA control data processing method that enables data interaction across those sources is needed.
Disclosure of Invention
The application provides an artificial-intelligence-based RPA control data processing method and device, which address the problem of realizing data interaction through RPA in application scenarios with heterogeneous data sources.
In a first aspect, the present application provides an artificial-intelligence-based RPA control data processing method, applied to a robotic process automation (RPA) device. The RPA device includes an interaction layer, a control layer, and a data layer, where the control layer writes data input and displayed at the interaction layer into the data layer. The method includes:
acquiring a target data processing instruction input by a target user at the interaction layer, wherein the target data processing instruction comprises a target feature field and an operation instruction, and the target feature field is used for representing keywords in a voice instruction input by the target user;
the control layer responds to the target data processing instruction and acquires target data to be processed from the data layer according to the target feature field, wherein the target data to be processed includes first target data and second target data, and the first target data and the second target data are of different data types;

the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, wherein the data processing includes at least one of data merging, data comparison, and data reorganization;
and the interaction layer displays the result target data.
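The three-layer flow of the first aspect can be sketched as follows. This is a minimal illustration: the class and method names are hypothetical and the data-processing step is a placeholder, not the patented implementation.

```python
# Minimal sketch of the interaction / control / data layer flow described above.
# All class, method, and field names are hypothetical illustrations.

class DataLayer:
    """Stores heterogeneous records keyed by feature field."""
    def __init__(self, records):
        self.records = records

    def fetch(self, feature_field):
        return self.records.get(feature_field, [])

class ControlLayer:
    """Responds to instructions and processes data fetched from the data layer."""
    def __init__(self, data_layer):
        self.data_layer = data_layer

    def handle(self, feature_field, operation):
        data = self.data_layer.fetch(feature_field)
        if operation == "merge":
            # Placeholder for the data merging / comparison / reorganization step.
            structured = [d for d in data if d["type"] == "structured"]
            images = [d for d in data if d["type"] == "image"]
            return structured + images
        return data

class InteractionLayer:
    """Accepts the user's instruction and displays the result target data."""
    def __init__(self, control_layer):
        self.control = control_layer

    def process(self, feature_field, operation):
        # In the real device the result would be rendered on screen.
        return self.control.handle(feature_field, operation)

db = DataLayer({"sales": [{"type": "structured", "value": 1},
                          {"type": "image", "value": "scan.png"}]})
ui = InteractionLayer(ControlLayer(db))
result = ui.process("sales", "merge")
```

The point of the sketch is the separation of concerns: the interaction layer never touches storage directly, and the control layer is the only component that combines data of different types.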
Optionally, the acquiring, at the interaction layer, a target data processing instruction input by a target user includes:
acquiring the voice instruction input by the target user at the interaction layer, wherein the voice instruction instructs display of target features of the target user;

preprocessing the voice instruction to form a preprocessed voice signal, wherein the preprocessing includes signal noise reduction based on wavelet transformation and signal enhancement based on frequency-domain filtering;
Inputting the preprocessed voice signals into a preset hidden Markov model to extract an original keyword set in the preprocessed voice signals, wherein the original keyword set comprises a plurality of keywords;
ranking each keyword in the original keyword set according to the original keyword set and a preset keyword list to generate a ranked keyword set, wherein the preset keyword list establishes a mapping between keywords and weight scores;

and determining the target feature field according to the ranked keyword set and a weight-score threshold, wherein the weight-score threshold is associated with the distribution of the weight scores of the keywords in the ranked keyword set.
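The preprocessing described above combines wavelet-based noise reduction with frequency-domain filtering for signal enhancement. The frequency-domain step can be sketched as a band-pass filter built on NumPy's real FFT; the 300-3400 Hz band used here is an illustrative choice (the telephone speech band), not a value specified by the patent.

```python
import numpy as np

def bandpass_filter(signal, sample_rate, low_hz=300.0, high_hz=3400.0):
    """Keep only frequency components in [low_hz, high_hz].
    The band is an arbitrary illustrative choice for speech."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    # Zero everything outside the band, then transform back.
    return np.fft.irfft(spectrum * mask, n=len(signal))

# A 1 kHz tone (in band) contaminated with a 50 Hz hum (out of band):
rate = 16000
t = np.arange(rate) / rate
clean = np.sin(2 * np.pi * 1000 * t)
noisy = clean + 0.5 * np.sin(2 * np.pi * 50 * t)
filtered = bandpass_filter(noisy, rate)
```

After filtering, the out-of-band hum is removed and the filtered signal is much closer to the clean tone; in the patent's pipeline this enhanced signal would then be fed to the hidden Markov model for keyword extraction.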
Optionally, determining the target feature field according to the ranked keyword set and the weight-score threshold includes:

determining the weight-score threshold T according to the ranked keyword set K, the weight-score set W = {w_1, w_2, ..., w_n} composed of the weight scores of the keywords in K, and Equation 1, where n is the number of keywords in the ranked keyword set, w_max and w_min are the maximum and minimum values in W, and w_i is the weight score corresponding to the i-th keyword k_i in the ranked keyword set K;

taking, as the target feature field, the keywords whose weight scores in W are greater than the weight-score threshold T.
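The exact form of Equation 1 is rendered as an image in the source, so the sketch below substitutes a simple hypothetical threshold, the midpoint of the minimum and maximum weight scores, purely to illustrate the select-above-threshold step.

```python
def select_feature_fields(weighted_keywords):
    """weighted_keywords: dict mapping keyword -> weight score.
    Returns the keywords whose score exceeds a threshold T.
    T here is (w_min + w_max) / 2 -- a hypothetical stand-in for the
    patent's Equation 1, which depends on the score distribution."""
    scores = list(weighted_keywords.values())
    threshold = (min(scores) + max(scores)) / 2
    return sorted(k for k, w in weighted_keywords.items() if w > threshold)

fields = select_feature_fields({"sales": 0.9, "report": 0.7,
                                "the": 0.1, "show": 0.2})
# With these scores the midpoint threshold is 0.5, so "sales" and
# "report" survive while the low-weight filler words are dropped.
```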
Optionally, the obtaining the target data to be processed from the data layer according to the target feature field includes:
determining a time range corresponding to the target data to be processed from an index list of the data layer according to the target feature field and target identity information corresponding to the target user;

determining the first target data according to the target feature field and a first time range, wherein the first target data is structured data suitable for direct display at the interaction layer;

determining the second target data according to the target feature field and a second time range, wherein the second target data is retained image data, and the first time range and the second time range together form the time range;
correspondingly, the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, which includes:
Performing image recognition on the second target data to generate second target recognition data, and converting the second target recognition data into the structured data to generate second target conversion data;
and extracting, according to the time-sequence distribution of the first time range and the second time range, the target data corresponding to the target feature field from the first target data and the second target conversion data to generate the result target data.
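The extraction by time-sequence distribution described above amounts to interleaving the structured records with the recognized-and-converted records in timestamp order. A minimal sketch, with hypothetical record shapes:

```python
from datetime import datetime

def merge_by_time(structured, converted):
    """Merge two record lists into one result ordered by timestamp.
    Each record is a (timestamp, payload) tuple; 'converted' stands for
    image data that has already passed through recognition and format
    conversion into structured form."""
    return sorted(structured + converted, key=lambda rec: rec[0])

first_target = [(datetime(2023, 7, 1), "row A"),
                (datetime(2023, 7, 3), "row B")]
second_converted = [(datetime(2023, 7, 2), "recognized row X")]
result = merge_by_time(first_target, second_converted)
```

The merged list preserves the original time order across both sources, which is what lets the interaction layer display one coherent result regardless of whether a record began life as a table row or an image.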
Optionally, the control layer is connected to an external camera; correspondingly, before the interaction layer displays the result target data, the method further includes:
acquiring first portrait data in the first target data and second portrait data in the second target data;
determining a first cosine distance between a first feature vector corresponding to the first portrait data and a second feature vector corresponding to the second portrait data according to a preset convolutional neural network model, wherein the preset convolutional neural network model is a neural network model established based on FaceNet;
if the first cosine distance is smaller than a preset first distance threshold, extracting a first portrait contour from the first portrait data and a second portrait contour from the second portrait data according to a preset deep neural network, wherein the preset deep neural network is a neural network model built on U-Net;

comparing the total number of first pixels corresponding to the first portrait contour with the total number of second pixels corresponding to the second portrait contour, and determining the portrait data corresponding to the contour with the larger total pixel count as the comparison portrait data;
the control layer controls the external camera to acquire target user portrait data, and determines a second cosine distance between a third feature vector corresponding to the comparison portrait data and a fourth feature vector corresponding to the target user portrait data according to the preset convolutional neural network model;
and determining that the second cosine distance is smaller than a preset second distance threshold, wherein the preset second distance threshold is smaller than the preset first distance threshold.
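The two-stage verification above hinges on cosine distances between feature vectors. A NumPy sketch follows; the embeddings and the two thresholds are illustrative values standing in for FaceNet outputs, not the patent's parameters.

```python
import numpy as np

def cosine_distance(a, b):
    """1 - cosine similarity; 0 means identical direction."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Illustrative embeddings standing in for FaceNet feature vectors:
stored = np.array([0.90, 0.10, 0.40])    # portrait from first target data
archived = np.array([0.88, 0.12, 0.41])  # portrait from second target data
live = np.array([0.89, 0.11, 0.40])      # live capture from the camera

FIRST_THRESHOLD = 0.05   # stage 1: same person across stored records
SECOND_THRESHOLD = 0.01  # stage 2: stricter live check (< first threshold)

same_person = cosine_distance(stored, archived) < FIRST_THRESHOLD
verified = cosine_distance(stored, live) < SECOND_THRESHOLD
```

Making the second threshold smaller than the first, as the claim requires, means the live-camera check is stricter than the cross-record check: a looser match suffices to link archives, but a tighter one is demanded before displaying data to the person at the device.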
Optionally, after the interaction layer displays the result target data, the method further includes:
the control layer writes the result target data into the data layer, and marks the result target data in the data layer by utilizing the target characteristic field;
within a preset time range, when the interaction layer acquires another target data processing instruction input by the target user, determining the feature similarity between the target feature field and another target feature field, wherein the another target data processing instruction includes the another target feature field, and the another target feature field characterizes keywords in another voice instruction input by the target user;
And if the feature similarity is greater than a preset similarity threshold, displaying prompt information on the interaction layer, wherein the prompt information is used for indicating the target user to select to directly display previous data or display update data, the previous data comprises the result target data and the generation time corresponding to the result target data, and the update data comprises the update target data.
Optionally, if the target user selects to directly display the previous data, the result target data is displayed at the interaction layer;
if the target user selects to display updated data, the control layer responds to the other target data processing instruction and acquires other target data to be processed from the data layer according to the other target characteristic field, wherein the other target data to be processed comprises third target data and fourth target data, and the data types of the third target data and the fourth target data are different; the control layer performs data processing on the third target data and the fourth target data according to another operation instruction in the another target data processing instruction so as to generate result updating data; and the interaction layer displays the result updating data.
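The feature-similarity check that triggers the prompt above can be sketched as a set similarity between the two keyword fields. Jaccard similarity is a hypothetical choice here; the patent does not specify the measure or the threshold value.

```python
def feature_similarity(fields_a, fields_b):
    """Jaccard similarity between two sets of keyword fields --
    a hypothetical stand-in for the patent's similarity measure."""
    a, b = set(fields_a), set(fields_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

SIMILARITY_THRESHOLD = 0.5  # illustrative value

previous_fields = ["sales", "report", "july"]
new_fields = ["sales", "report", "july", "detail"]

prompt = None
if feature_similarity(previous_fields, new_fields) > SIMILARITY_THRESHOLD:
    # Offer the cached result (with its generation time) or a refresh.
    prompt = "Show previous result or display updated data?"
```

When the new query largely overlaps the previous one, the device can avoid redoing the whole retrieval-and-processing pipeline by offering the cached result first.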
In a second aspect, the present application provides a robotic process automation device, comprising: the system comprises an interaction layer, a control layer and a data layer, wherein the control layer is used for writing data input and displayed on the interaction layer into the data layer;
acquiring a target data processing instruction input by a target user at the interaction layer, wherein the target data processing instruction comprises a target feature field and an operation instruction, and the target feature field is used for representing keywords in a voice instruction input by the target user;
the control layer responds to the target data processing instruction and acquires target data to be processed from the data layer according to the target characteristic field, wherein the target data to be processed comprises first target data and second target data, and the data types of the first target data and the second target data are different;
the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, wherein the data processing comprises at least one of data merging, data comparing and data reorganizing;
and the interaction layer displays the result target data.
Optionally, the voice instruction input by the target user is acquired at the interaction layer, wherein the voice instruction instructs display of target features of the target user;

the interaction layer preprocesses the voice instruction to form a preprocessed voice signal, wherein the preprocessing includes signal noise reduction based on wavelet transformation and signal enhancement based on frequency-domain filtering;

the interaction layer inputs the preprocessed voice signal into a preset hidden Markov model to extract an original keyword set from the preprocessed voice signal, wherein the original keyword set includes a plurality of keywords;

the interaction layer ranks the keywords in the original keyword set according to the original keyword set and a preset keyword list to generate a ranked keyword set, wherein the preset keyword list establishes a mapping between keywords and weight scores;

and the interaction layer determines the target feature field according to the ranked keyword set and a weight-score threshold, wherein the weight-score threshold is associated with the distribution of the weight scores of the keywords in the ranked keyword set.
Optionally, the interaction layer is configured to determine the weight-score threshold T according to the ranked keyword set K, the weight-score set W = {w_1, w_2, ..., w_n} composed of the weight scores of the keywords in K, and Equation 1, where n is the number of keywords in the ranked keyword set, w_max and w_min are the maximum and minimum values in W, and w_i is the weight score corresponding to the i-th keyword k_i in the ranked keyword set K;

the interaction layer takes, as the target feature field, the keywords whose weight scores in W are greater than the weight-score threshold T.
Optionally, the control layer determines a time range corresponding to the target data to be processed from an index list of the data layer according to the target feature field and target identity information corresponding to the target user;
the control layer determines the first target data according to the target feature field and a first time range, wherein the first target data is structured data suitable for direct display of the interaction layer;
the control layer determines the second target data according to the target feature field and a second time range, wherein the second target data is retained image data, and the first time range and the second time range together form the time range;
Correspondingly, the control layer performs image recognition on the second target data to generate second target recognition data, and converts the second target recognition data into the structured data to generate second target conversion data;
and the control layer extracts target data corresponding to the target characteristic field in the first target data and the second target conversion data according to time sequence distribution of the first time range and the second time range, and generates the result target data.
Optionally, the control layer is connected with an external camera, and correspondingly, the control layer acquires first portrait data in the first target data and second portrait data in the second target data;
the control layer determines a first cosine distance between a first feature vector corresponding to the first portrait data and a second feature vector corresponding to the second portrait data according to a preset convolutional neural network model, wherein the preset convolutional neural network model is a neural network model established based on FaceNet;
if the first cosine distance is smaller than a preset first distance threshold, the control layer extracts a first portrait contour from the first portrait data and a second portrait contour from the second portrait data according to a preset deep neural network, wherein the preset deep neural network is a neural network model built on U-Net;

the control layer compares the total number of first pixels corresponding to the first portrait contour with the total number of second pixels corresponding to the second portrait contour, and determines the portrait data corresponding to the contour with the larger total pixel count as the comparison portrait data;
the control layer controls the external camera to acquire target user portrait data, and determines a second cosine distance between a third feature vector corresponding to the comparison portrait data and a fourth feature vector corresponding to the target user portrait data according to the preset convolutional neural network model;
the control layer determines that the second cosine distance is less than a preset second distance threshold, wherein the preset second distance threshold is less than the preset first distance threshold.
Optionally, the control layer writes the result target data into the data layer, and identifies the result target data in the data layer by using the target feature field;
within a preset time range, when the interaction layer acquires another target data processing instruction input by the target user, determining the feature similarity between the target feature field and another target feature field, wherein the another target data processing instruction includes the another target feature field, and the another target feature field characterizes keywords in another voice instruction input by the target user;
And if the feature similarity is greater than a preset similarity threshold, displaying prompt information on the interaction layer, wherein the prompt information is used for indicating the target user to select to directly display previous data or display update data, the previous data comprises the result target data and the generation time corresponding to the result target data, and the update data comprises the update target data.
Optionally, if the target user selects to directly display the previous data, displaying the result target data on an interaction layer;
if the target user selects to display updated data, the control layer responds to the other target data processing instruction and acquires other target data to be processed from the data layer according to the other target characteristic field, wherein the other target data to be processed comprises third target data and fourth target data, and the data types of the third target data and the fourth target data are different; the control layer performs data processing on the third target data and the fourth target data according to another operation instruction in the another target data processing instruction so as to generate result updating data; and the interaction layer displays the result updating data.
In a third aspect, the present application provides an electronic device comprising:
a processor; the method comprises the steps of,
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform any one of the possible methods described in the first aspect via execution of the executable instructions.
In a fourth aspect, the present application provides a computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out any one of the possible methods described in the first aspect.
With the artificial-intelligence-based RPA control data processing method and device provided by the application, a target data processing instruction input by a target user is acquired at the interaction layer; the control layer responds to the target data processing instruction and acquires target data to be processed from the data layer according to the target feature field; the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data; and the interaction layer displays the result target data. Result target data responding to a voice instruction is thus acquired from databases storing different data types and displayed automatically.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application.
FIG. 1 is a flow chart of an artificial intelligence based RPA control data processing method according to an example embodiment of the application;
FIG. 2 is a flow chart of an artificial intelligence based RPA control data processing method according to another example embodiment of the application;
FIG. 3 is a schematic diagram of a robotic process automation device according to an example embodiment of the application;
fig. 4 is a schematic structural view of an electronic device according to an exemplary embodiment of the present application.
Specific embodiments of the present application have been shown by way of the above drawings and will be described in more detail below. The drawings and the written description are not intended to limit the scope of the inventive concepts in any way, but rather to illustrate the inventive concepts to those skilled in the art by reference to the specific embodiments.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of apparatus and methods consistent with aspects of the application as detailed in the accompanying claims.
The application provides an artificial-intelligence-based RPA control data processing method, which can be applied to a robotic process automation (Robotic Process Automation, RPA) device. The RPA device includes an interaction layer, a control layer, and a data layer, where the control layer writes data input and displayed at the interaction layer into the data layer. The layers are independent of one another, and data interaction between layers uses a communication mechanism.
Optionally, the interaction layer corresponds to the external window of the RPA device. In this embodiment, Electron + Vue can be used as the front-end page development framework: HTML and CSS are used to build the static front-end pages, JavaScript implements the dynamic interaction between front-end and back-end data, and the front end and back end communicate via axios requests.
The control layer serves as a bridge between the front end and the service logic layer and is responsible for dispatching service requests. In this embodiment, Spring Boot is used as the back-end development framework, the @RestController annotation declares the controller classes, and the DispatcherServlet responds to the different service requests and dispatches each to its corresponding controller class. Because the RPA device offers multiple services whose processing logic differs, the service logic layer implements the business logic of each functional module of the RPA device, including reading and writing table data, capturing and saving screenshots, screen mirroring for Android devices, and automatic flow execution. The service base layer encapsulates the general-purpose functions of the RPA device, including file upload and download; in this way, coding efficiency is improved, as are the portability and reusability of the code.
The function of the data layer is to access and store data. The data access layer uses the MyBatis framework to implement create, read, update, and delete operations on database table data, and the data storage layer uses MySQL to implement persistent storage of data. The sources of data include MySQL and OS table files.
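The patent's data layer uses MyBatis with MySQL; as a minimal stand-in, the following sketch uses Python's built-in sqlite3 module to show the same add/query persistence pattern (the table name, column names, and sample data are illustrative assumptions, not from the patent):

```python
import sqlite3

def init_db(conn):
    # Create a table analogous to the patent's data-layer storage.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS records ("
        "id INTEGER PRIMARY KEY, feature_field TEXT, payload TEXT)"
    )

def add_record(conn, feature_field, payload):
    # "Add" operation of the data access layer.
    conn.execute(
        "INSERT INTO records (feature_field, payload) VALUES (?, ?)",
        (feature_field, payload),
    )

def query_by_field(conn, feature_field):
    # "Query" operation: fetch payloads matching a target feature field.
    cur = conn.execute(
        "SELECT payload FROM records WHERE feature_field = ?", (feature_field,)
    )
    return [row[0] for row in cur.fetchall()]

conn = sqlite3.connect(":memory:")
init_db(conn)
add_record(conn, "social security", "application record 1")
add_record(conn, "personal information", "profile A")
```

In a deployed system the in-memory database would be replaced by a persistent MySQL connection behind the mapper layer.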
FIG. 1 is a flow chart illustrating an artificial intelligence based RPA control data processing method according to an example embodiment of the application. As shown in fig. 1, the method provided in this embodiment includes:
S101, acquiring a target data processing instruction input by a target user at an interaction layer.
In this step, a target data processing instruction input by a target user is acquired at an interaction layer, where the target data processing instruction includes a target feature field and an operation instruction, and the target feature field is used to characterize a keyword in a voice instruction input by the target user.
The acquiring, at the interaction layer, the target data processing instruction input by the target user may specifically include:
acquiring a voice instruction input by a target user at an interaction layer, wherein the voice instruction is used for indicating to display target characteristics of the target user; preprocessing the voice command to form a preprocessed voice signal, wherein the preprocessing comprises signal noise reduction processing based on wavelet transformation and signal enhancement processing based on frequency domain filtering; inputting the preprocessed voice signals into a preset hidden Markov model to extract an original keyword set in the preprocessed voice signals, wherein the original keyword set comprises a plurality of keywords; sorting all keywords in the original keyword set according to the original keyword set and a preset keyword list to generate a sorted keyword set, wherein the preset keyword list is used for establishing a mapping relation between keywords and weight scores; and determining a target feature field according to the sorted keyword set and a weight division threshold, wherein the weight division threshold is associated with the distribution condition of the weight division of each keyword in the sorted keyword set. It should be appreciated that the above-mentioned preset keyword list may be preset according to an actual scene applied by the method disclosed in the present embodiment, for example, may be applied to a campus student information management field, a labor agent social security information management field, a book information management field, etc., and in different fields, the preset keyword list may be configured individually according to characteristics of the field, so that it may meet actual application requirements under the scene.
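The keyword sorting step described above can be sketched as follows; the preset keyword list, its weight scores, and the sample keywords are illustrative assumptions for a social security scenario, not values from the patent:

```python
# Hypothetical preset keyword list mapping keywords to weight scores.
PRESET_KEYWORD_LIST = {
    "social security": 0.9,
    "application records": 0.8,
    "display": 0.3,
    "please": 0.1,
}

def sort_keywords(original_keywords):
    # Look up each extracted keyword's weight score (unknown keywords
    # score 0.0) and sort in descending order of score.
    scored = [(kw, PRESET_KEYWORD_LIST.get(kw, 0.0)) for kw in original_keywords]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranked = sort_keywords(["please", "display", "social security", "application records"])
# Highest-weighted keywords come first in the sorted keyword set.
```

Per the patent, the preset list would be configured per application field (campus, social security, library, and so on).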
The determining the target feature field according to the sorted keyword set and the weight sub-threshold may specifically include:
According to the weight score set W composed of the weight scores of the respective keywords in the sorted keyword set K, Equation 1 determines the weight score threshold θ. (Equation 1 appears only as an image in the original publication and is not reproduced here.)
In Equation 1, n is the number of keywords in the sorted keyword set, w_max is the maximum value of the weight score set W, w_min is the minimum value of the weight score set W, and w_i is the weight score corresponding to the i-th keyword k_i in the sorted keyword set K. From the weight score set W, the keywords whose weight scores are greater than the weight score threshold θ are determined as the target feature field.
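Since the patent's Equation 1 is shown only as an image, the sketch below uses an assumed stand-in rule (the midpoint of the maximum and minimum weight scores) to illustrate how keywords above a threshold become target feature fields; the actual Equation 1 may differ:

```python
def weight_threshold(scores):
    # Illustrative assumption: the patent's Equation 1 is not reproduced
    # in the text, so a simple midpoint of max and min stands in for it.
    return (max(scores) + min(scores)) / 2

def target_feature_fields(ranked):
    # ranked: list of (keyword, weight score) pairs, sorted by score.
    scores = [score for _, score in ranked]
    theta = weight_threshold(scores)
    # Keep only keywords whose weight score exceeds the threshold.
    return [kw for kw, score in ranked if score > theta]

fields = target_feature_fields([
    ("social security", 0.9),
    ("application records", 0.8),
    ("display", 0.3),
    ("please", 0.1),
])
```

With these sample scores the midpoint threshold is 0.5, so only the two high-weight keywords survive as target feature fields.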
S102, the control layer responds to the target data processing instruction, and obtains target data to be processed from the data layer according to the target feature field.
In this step, the control layer responds to the target data processing instruction, and obtains target data to be processed from the data layer according to the target feature field, where the target data to be processed includes first target data and second target data, and the data types of the first target data and the second target data are different.
It can be understood that the voice command input by the target user at the interaction layer may be "please retrieve and display all new personal information of my school"; the correspondingly determined target feature fields may be "new" and "personal information", and the corresponding operation instruction may be an instruction to reorganize the data of all new personal information according to the school's template. For the correspondingly obtained first target data and second target data below, both are stored new personal information: some of it is structured data obtained through electronic entry, and some is image data uploaded after forms were filled in and scanned.
In addition, the voice command input by the target user at the interaction layer may be "please retrieve and display all of my social-security-related application records"; the correspondingly determined target feature fields may be "social security" and "application records", and the corresponding operation instruction may be an instruction to merge the data of all of the user's social-security-related application records according to the displayed template. For the correspondingly obtained first target data and second target data, some of the data may be structured data obtained through electronic entry when the social-security-related applications were made, and some may be image data uploaded after forms were filled in and scanned.
Specifically, the obtaining the target data to be processed from the data layer according to the target feature field may include: determining a time range corresponding to target data to be processed from an index list of a data layer according to the target characteristic field and target identity information corresponding to a target user; determining first target data according to the target feature field and a first time range, wherein the first target data is structured data suitable for direct display of an interaction layer; and determining second target data according to the target feature field and a second time range, wherein the second target data is reserved image data, and the first time range and the second time range form a time range.
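The retrieval split described above can be sketched as follows; the record layout, field values, and integer timestamps are hypothetical stand-ins for the data layer's index list:

```python
# Hypothetical data-layer contents: structured rows and retained images.
RECORDS = [
    {"t": 1, "type": "structured", "field": "social security", "data": "row1"},
    {"t": 2, "type": "image", "field": "social security", "data": "scan1.png"},
    {"t": 3, "type": "structured", "field": "books", "data": "row2"},
]

def fetch_targets(field, t_start, t_end):
    # Split matching records into first target data (structured, directly
    # displayable) and second target data (retained image data).
    first, second = [], []
    for rec in RECORDS:
        if rec["field"] != field or not (t_start <= rec["t"] <= t_end):
            continue
        (first if rec["type"] == "structured" else second).append(rec)
    return first, second

first_target, second_target = fetch_targets("social security", 1, 2)
```

The first and second time ranges of the patent together cover the overall range queried here.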
S103, the control layer performs data processing on the first target data and the second target data according to the operation instruction so as to generate result target data.
The control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, wherein the data processing comprises at least one of data merging, data comparing and data reorganizing.
Specifically, the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, which may include: performing image recognition on the second target data to generate second target recognition data, and converting the second target recognition data into structured data to generate second target conversion data; and extracting target data corresponding to the target characteristic field in the first target data and the second target conversion data according to time sequence distribution of the first time range and the second time range, and generating result target data.
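The recognize-convert-merge flow above can be sketched as follows; the OCR step is simulated with a placeholder function, since the patent does not specify a recognition engine:

```python
def recognize_image(rec):
    # Hypothetical image-recognition step: the patent converts recognized
    # image content into structured data; here it is simulated by tagging.
    return {"t": rec["t"], "data": "ocr:" + rec["data"]}

def merge_by_time(first_target, second_target):
    # Convert image records, then interleave with structured records
    # according to the time-sequence distribution of the two ranges.
    converted = [recognize_image(r) for r in second_target]
    rows = [{"t": r["t"], "data": r["data"]} for r in first_target] + converted
    return sorted(rows, key=lambda r: r["t"])

result = merge_by_time(
    [{"t": 3, "data": "row2"}, {"t": 1, "data": "row1"}],
    [{"t": 2, "data": "scan1.png"}],
)
```

The merged output is the result target data handed back to the interaction layer for display.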
S104, displaying the result target data by the interaction layer.
After the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, the result target data can be displayed in the interaction layer.
In this embodiment, a target data processing instruction input by a target user is acquired at an interaction layer, a control layer responds to the target data processing instruction and acquires target data to be processed from a data layer according to a target feature field, the control layer performs data processing on first target data and second target data according to an operation instruction to generate result target data, and the interaction layer displays the result target data, so that the result target data responding to the voice instruction is acquired from databases storing different data types in a voice instruction mode, and the result target data is automatically displayed.
FIG. 2 is a flow chart of an artificial intelligence based RPA control data processing method according to another example embodiment of the application. As shown in fig. 2, the method provided in this embodiment includes:
S201, acquiring a target data processing instruction input by a target user at an interaction layer.
In this step, a target data processing instruction input by a target user is acquired at an interaction layer, where the target data processing instruction includes a target feature field and an operation instruction, and the target feature field is used to characterize a keyword in a voice instruction input by the target user.
The acquiring, at the interaction layer, the target data processing instruction input by the target user may specifically include:
acquiring a voice instruction input by a target user at an interaction layer, wherein the voice instruction is used for indicating to display target characteristics of the target user; preprocessing the voice command to form a preprocessed voice signal, wherein the preprocessing comprises signal noise reduction processing based on wavelet transformation and signal enhancement processing based on frequency domain filtering; inputting the preprocessed voice signals into a preset hidden Markov model to extract an original keyword set in the preprocessed voice signals, wherein the original keyword set comprises a plurality of keywords; sorting all keywords in the original keyword set according to the original keyword set and a preset keyword list to generate a sorted keyword set, wherein the preset keyword list is used for establishing a mapping relation between keywords and weight scores; and determining a target feature field according to the sorted keyword set and a weight division threshold, wherein the weight division threshold is associated with the distribution condition of the weight division of each keyword in the sorted keyword set. It should be appreciated that the above-mentioned preset keyword list may be preset according to an actual scene applied by the method disclosed in the present embodiment, for example, may be applied to a campus student information management field, a labor agent social security information management field, a book information management field, etc., and in different fields, the preset keyword list may be configured individually according to characteristics of the field, so that it may meet actual application requirements under the scene.
The determining the target feature field according to the sorted keyword set and the weight sub-threshold may specifically include:
According to the weight score set W composed of the weight scores of the respective keywords in the sorted keyword set K, Equation 1 determines the weight score threshold θ. (Equation 1 appears only as an image in the original publication and is not reproduced here.)
In Equation 1, n is the number of keywords in the sorted keyword set, w_max is the maximum value of the weight score set W, w_min is the minimum value of the weight score set W, and w_i is the weight score corresponding to the i-th keyword k_i in the sorted keyword set K. From the weight score set W, the keywords whose weight scores are greater than the weight score threshold θ are determined as the target feature field.
S202, the control layer responds to the target data processing instruction, and obtains target data to be processed from the data layer according to the target feature field.
In this step, the control layer responds to the target data processing instruction, and obtains target data to be processed from the data layer according to the target feature field, where the target data to be processed includes first target data and second target data, and the data types of the first target data and the second target data are different.
It can be understood that the voice command input by the target user at the interaction layer may be "please retrieve and display all new personal information of my school"; the correspondingly determined target feature fields may be "new" and "personal information", and the corresponding operation instruction may be an instruction to reorganize the data of all new personal information according to the school's template. For the correspondingly obtained first target data and second target data below, both are stored new personal information: some of it is structured data obtained through electronic entry, and some is image data uploaded after forms were filled in and scanned.
In addition, the voice command input by the target user at the interaction layer may be "please retrieve and display all of my social-security-related application records"; the correspondingly determined target feature fields may be "social security" and "application records", and the corresponding operation instruction may be an instruction to merge the data of all of the user's social-security-related application records according to the displayed template. For the correspondingly obtained first target data and second target data, some of the data may be structured data obtained through electronic entry when the social-security-related applications were made, and some may be image data uploaded after forms were filled in and scanned.
Specifically, the obtaining the target data to be processed from the data layer according to the target feature field may include: determining a time range corresponding to target data to be processed from an index list of a data layer according to the target characteristic field and target identity information corresponding to a target user; determining first target data according to the target feature field and a first time range, wherein the first target data is structured data suitable for direct display of an interaction layer; and determining second target data according to the target feature field and a second time range, wherein the second target data is reserved image data, and the first time range and the second time range form a time range.
S203, the control layer performs data processing on the first target data and the second target data according to the operation instruction so as to generate result target data.
The control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, wherein the data processing comprises at least one of data merging, data comparing and data reorganizing.
Specifically, the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, which may include: performing image recognition on the second target data to generate second target recognition data, and converting the second target recognition data into structured data to generate second target conversion data; and extracting target data corresponding to the target characteristic field in the first target data and the second target conversion data according to time sequence distribution of the first time range and the second time range, and generating result target data.
S204, acquiring first portrait data in the first target data and second portrait data in the second target data.
In this step, in order to prevent errors in the stored data (for example, to prevent the social security application records of different employees of the same employer from being mixed together), the first portrait data in the first target data and the second portrait data in the second target data are acquired, so that whether the records belong to the same employee can be determined by comparing the first portrait data with the second portrait data.
S205, determining a first cosine distance between a first feature vector corresponding to the first portrait data and a second feature vector corresponding to the second portrait data according to a preset convolutional neural network model.
And determining a first cosine distance between a first feature vector corresponding to the first portrait data and a second feature vector corresponding to the second portrait data according to a preset convolutional neural network model, wherein the preset convolutional neural network model is a neural network model established based on FaceNet.
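The cosine-distance comparison between the two feature vectors can be sketched in pure Python; the two-dimensional vectors here are toy stand-ins for FaceNet-style face embeddings, which in practice have hundreds of dimensions:

```python
import math

def cosine_distance(u, v):
    # 1 minus cosine similarity; small values mean the two embeddings
    # (e.g. face feature vectors) point in nearly the same direction.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

d_same = cosine_distance([1.0, 0.0], [1.0, 0.0])  # identical vectors
d_diff = cosine_distance([1.0, 0.0], [0.0, 1.0])  # orthogonal vectors
```

A distance below the preset first distance threshold is taken as evidence that the two portraits belong to the same person.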
S206, extracting a first portrait contour from the first portrait data according to a preset deep neural network, and extracting a second portrait contour from the second portrait data according to the preset deep neural network.
If the first cosine distance is smaller than a preset first distance threshold, a first portrait contour is extracted from the first portrait data according to the preset deep neural network, and a second portrait contour is extracted from the second portrait data according to the preset deep neural network, wherein the preset deep neural network is a neural network model established based on U-Net.
S207, comparing the total number of first pixels corresponding to the first portrait contour with the total number of second pixels corresponding to the second portrait contour, and determining the portrait data whose contour contains the greater total number of pixels as the comparison portrait data.
In this step, the total number of first pixels corresponding to the first portrait contour is compared with the total number of second pixels corresponding to the second portrait contour, and the portrait data whose contour contains the greater total number of pixels is determined as the comparison portrait data. It is worth noting that the portrait with the greater pixel count is chosen as the object subsequently compared against the current target user's portrait, which improves the accuracy of the comparison.
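The pixel-count selection can be sketched as follows; each mask is a small 2-D grid of 0/1 values standing in for a U-Net segmentation output:

```python
def pick_comparison_portrait(first_mask, second_mask):
    # Each mask is a 2-D list of 0/1 values (a hypothetical segmentation
    # output); the portrait whose contour covers more pixels is kept as
    # the comparison portrait data.
    def total_pixels(mask):
        return sum(sum(row) for row in mask)
    if total_pixels(first_mask) >= total_pixels(second_mask):
        return "first", first_mask
    return "second", second_mask

choice, mask = pick_comparison_portrait(
    [[1, 1], [1, 0]],  # 3 contour pixels
    [[1, 0], [0, 0]],  # 1 contour pixel
)
```

The larger contour presumably carries more facial detail, which is why it is preferred for the subsequent live comparison.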
S208, the control layer controls the external camera to acquire target user portrait data, and determines a second cosine distance between a third feature vector corresponding to the comparison portrait data and a fourth feature vector corresponding to the target user portrait data according to a preset convolutional neural network model.
The control layer controls the external camera to acquire target user portrait data, and determines a second cosine distance between a third feature vector corresponding to the contrast portrait data and a fourth feature vector corresponding to the target user portrait data according to a preset convolutional neural network model.
S209, determining that the second cosine distance is smaller than a preset second distance threshold.
It is determined that the second cosine distance is smaller than a preset second distance threshold, wherein the preset second distance threshold is smaller than the preset first distance threshold. It can be understood that, since the first distance threshold is only used to judge whether the first target data and the second target data belong to the same target user, while the second distance threshold governs permission to display the data, the preset second distance threshold can be set smaller than the preset first distance threshold. This satisfies the accuracy requirements of the two different judgment scenarios, improves the efficiency of the judgment, and increases the running speed of the device as a whole.
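The two-threshold gating described above can be sketched as follows; the threshold values are illustrative assumptions, with only their ordering (second smaller than first) taken from the patent:

```python
# Assumed example values; the patent fixes only that the second
# threshold is smaller (stricter) than the first.
FIRST_DISTANCE_THRESHOLD = 0.6   # same-person check between stored records
SECOND_DISTANCE_THRESHOLD = 0.3  # stricter display-permission check

def may_display(d_records, d_live):
    # d_records: cosine distance between the two stored portraits.
    # d_live: cosine distance between the comparison portrait and the
    # live camera capture of the target user.
    return (d_records < FIRST_DISTANCE_THRESHOLD
            and d_live < SECOND_DISTANCE_THRESHOLD)

ok = may_display(0.4, 0.2)      # both checks pass
denied = may_display(0.4, 0.5)  # live capture fails the stricter check
```

Result target data is shown only when both checks pass, so a matching live capture acts as the display permission.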
S210, displaying the result target data by the interaction layer.
After the interaction layer displays the result target data, the control layer can also write the result target data into the data layer, and the result target data is identified in the data layer by utilizing the target feature field; in a preset time range, when the interaction layer acquires another target data processing instruction input by the target user, determining the feature similarity of the target feature field and the other target feature field, wherein the target data processing instruction comprises the other target feature field which is used for representing keywords in the other voice instruction input by the target user; if the feature similarity is greater than a preset similarity threshold, prompt information is displayed on an interaction layer, the prompt information is used for indicating a target user to select to directly display previous data or display update data, wherein the previous data comprises result target data and generation time corresponding to the result target data, and the update data comprises update target data.
If the target user selects to directly display the previous data, displaying the result target data on an interaction layer; if the target user selects to display the updated data, the control layer responds to another target data processing instruction, and acquires another target data to be processed from the data layer according to another target characteristic field, wherein the other target data to be processed comprises third target data and fourth target data, and the data types of the third target data and the fourth target data are different; the control layer performs data processing on the third target data and the fourth target data according to another operation instruction in the other target data processing instruction so as to generate result updating data; and the interaction layer displays the result updating data.
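The repeat-request handling above can be sketched as follows; the Jaccard set overlap used for feature similarity is an illustrative assumption, since the patent does not specify the similarity measure:

```python
def jaccard_similarity(fields_a, fields_b):
    # Set-overlap stand-in for the patent's feature similarity between
    # the stored target feature field and the new one.
    a, b = set(fields_a), set(fields_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def handle_repeat_request(prev_fields, new_fields, threshold=0.5):
    # Above the similarity threshold, prompt the user to choose between
    # the previously generated result and freshly updated data;
    # otherwise process the new request directly.
    if jaccard_similarity(prev_fields, new_fields) > threshold:
        return "prompt_user"
    return "process_new"

action = handle_repeat_request(
    ["social security", "application records"],
    ["social security", "application records"],
)
```

Reusing a recent result avoids repeating the retrieval and image-recognition pipeline for near-identical requests within the preset time range.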
Fig. 3 is a schematic structural diagram of a robotic process automation (RPA) device according to an exemplary embodiment of the present application. The device 300 provided in this embodiment includes:
an interaction layer 310, a control layer 320, and a data layer 330, where the control layer 320 is configured to write data input and displayed on the interaction layer 310 into the data layer 330;
acquiring a target data processing instruction input by a target user at the interaction layer 310, wherein the target data processing instruction comprises a target feature field and an operation instruction, and the target feature field is used for representing keywords in a voice instruction input by the target user;
The control layer 320 responds to the target data processing instruction and obtains target data to be processed from the data layer 330 according to the target feature field, wherein the target data to be processed comprises first target data and second target data, and the data types of the first target data and the second target data are different;
the control layer 320 performs data processing on the first target data and the second target data according to the operation instruction to generate resultant target data, where the data processing includes at least one of data merging, data comparing, and data reorganizing;
the interaction layer 310 displays the result target data.
Optionally, the voice instruction input by the target user is obtained at the interaction layer 310, where the voice instruction is used to instruct to display the target feature of the target user;
the interaction layer 310 performs preprocessing on the voice command to form a preprocessed voice signal, wherein the preprocessing comprises signal noise reduction processing based on wavelet transformation and signal enhancement processing based on frequency domain filtering;
the interaction layer 310 inputs the pre-processed voice signal into a preset hidden markov model to extract an original keyword set in the pre-processed voice signal, wherein the original keyword set comprises a plurality of keywords;
The interaction layer 310 sorts the keywords in the original keyword set according to the original keyword set and a preset keyword list, so as to generate a sorted keyword set, wherein the preset keyword list is used for establishing a mapping relation between the keywords and the weight scores;
the interaction layer 310 determines the target feature field according to the ranked keyword set and a weight sub-threshold, where the weight sub-threshold is associated with a distribution of weight components of each keyword in the ranked keyword set.
Optionally, the interaction layer 310 determines the weight score threshold θ according to Equation 1 and the weight score set W composed of the weight scores of the respective keywords in the sorted keyword set K. (Equation 1 appears only as an image in the original publication and is not reproduced here.) In Equation 1, n is the number of keywords in the sorted keyword set, w_max is the maximum value of the weight score set W, w_min is the minimum value of the weight score set W, and w_i is the weight score corresponding to the i-th keyword k_i in the sorted keyword set K. The interaction layer 310 determines, from the weight score set W, the keywords whose weight scores are greater than the weight score threshold θ as the target feature field.
Optionally, the control layer 320 determines, from the index list of the data layer 330, a time range corresponding to the target data to be processed according to the target feature field and target identity information corresponding to the target user;
the control layer 320 determines the first target data according to the target feature field and a first time range, where the first target data is structured data suitable for the interaction layer 310 to directly display;
the control layer 320 determines the second target data according to the target feature field and a second time range, where the second target data is retention image data, and the first time range and the second time range form the time range;
correspondingly, the control layer 320 performs image recognition on the second target data to generate second target recognition data, and converts the second target recognition data into the structured data to generate second target conversion data;
the control layer 320 extracts the target data corresponding to the target feature field in the first target data and the second target conversion data according to the time sequence distribution of the first time range and the second time range, and generates the result target data.
Optionally, the control layer 320 is connected to an external camera, and correspondingly, the control layer 320 obtains first portrait data in the first target data and second portrait data in the second target data;
the control layer 320 determines a first cosine distance between a first feature vector corresponding to the first portrait data and a second feature vector corresponding to the second portrait data according to a preset convolutional neural network model, where the preset convolutional neural network model is a neural network model built based on FaceNet;
if the first cosine distance is smaller than a preset first distance threshold, the control layer 320 extracts a first portrait contour from the first portrait data according to a preset depth neural network, and extracts a second portrait contour from the second portrait data according to the preset depth neural network, wherein the preset depth neural network is a neural network model established based on U-Net;
the control layer 320 compares the total number of the first pixels corresponding to the first portrait outline with the total number of the second pixels corresponding to the second portrait outline, and determines the portrait data corresponding to the portrait outline with more total number of pixels as the comparison portrait data;
The control layer 320 controls the external camera to acquire target user portrait data, and determines a second cosine distance between a third feature vector corresponding to the comparison portrait data and a fourth feature vector corresponding to the target user portrait data according to the preset convolutional neural network model;
the control layer 320 determines that the second cosine distance is less than a preset second distance threshold, wherein the preset second distance threshold is less than the preset first distance threshold.
Optionally, the control layer 320 writes the result target data into the data layer 330, and identifies the result target data in the data layer 330 by using the target feature field;
determining the feature similarity between the target feature field and another target feature field when the interaction layer 310 acquires another target data processing instruction input by the target user within a preset time range, where the target data processing instruction includes the another target feature field, and the another target feature field is used for characterizing a keyword in another voice instruction input by the target user;
if the feature similarity is greater than a preset similarity threshold, a prompt message is displayed on the interaction layer 310, where the prompt message is used to instruct the target user to select to directly display previous data or display update data, where the previous data includes the result target data and a generation time corresponding to the result target data, and the update data includes update target data.
Optionally, if the target user selects to directly display previous data, the result target data is displayed at the interaction layer 310;
if the target user selects to display updated data, the control layer 320 responds to the other target data processing instruction and obtains another target data to be processed from the data layer 330 according to the other target feature field, where the other target data to be processed includes third target data and fourth target data, and the data types of the third target data and the fourth target data are different; the control layer 320 performs data processing on the third target data and the fourth target data according to another operation instruction in the another target data processing instruction, so as to generate result update data; the interaction layer 310 displays the result update data.
Fig. 4 is a schematic structural view of an electronic device according to an exemplary embodiment of the present application. As shown in fig. 4, an electronic device 400 provided in this embodiment includes: a processor 401 and a memory 402; wherein:
a memory 402 for storing a computer program, which memory may also be a flash memory.
A processor 401 for executing the instructions stored in the memory to implement the steps of the above method. Reference may be made in particular to the description of the foregoing method embodiments.
Optionally, the memory 402 may be separate from, or integrated with, the processor 401.
When the memory 402 is a device separate from the processor 401, the electronic apparatus 400 may further include:
a bus 403 for connecting the memory 402 and the processor 401.
The present embodiment also provides a readable storage medium having a computer program stored therein, which when executed by at least one processor of an electronic device, performs the methods provided by the various embodiments described above.
The present embodiment also provides a program product comprising a computer program stored in a readable storage medium. The computer program may be read from a readable storage medium by at least one processor of an electronic device, and executed by the at least one processor, causes the electronic device to implement the methods provided by the various embodiments described above.
Other embodiments of the application will be apparent to those skilled in the art from consideration of the specification and practice of the application disclosed herein. This application is intended to cover any variations, uses, or adaptations of the application following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the application pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It is to be understood that the application is not limited to the precise arrangements and instrumentalities shown in the drawings, which have been described above, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the application is limited only by the appended claims.
Claims (10)
1. An artificial intelligence-based RPA control data processing method, characterized by being applied to a robotic process automation (RPA) device, wherein the RPA device comprises an interaction layer, a control layer and a data layer, the control layer is used for writing data input and displayed at the interaction layer into the data layer, and the method comprises the following steps:
acquiring a target data processing instruction input by a target user at the interaction layer, wherein the target data processing instruction comprises a target feature field and an operation instruction, and the target feature field is used for representing keywords in a voice instruction input by the target user;
the control layer responds to the target data processing instruction and acquires target data to be processed from the data layer according to the target characteristic field, wherein the target data to be processed comprises first target data and second target data, and the data types of the first target data and the second target data are different;
the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, wherein the data processing comprises at least one of data merging, data comparing and data reorganizing;
and the interaction layer displays the result target data.
2. The RPA control data processing method based on artificial intelligence according to claim 1, wherein the obtaining, at the interaction layer, a target data processing instruction input by a target user includes:
acquiring the voice instruction input by the target user at the interaction layer, wherein the voice instruction is used for indicating to display target characteristics of the target user;
preprocessing the voice instruction to form a preprocessed voice signal, wherein the preprocessing comprises signal noise reduction processing based on wavelet transformation and signal enhancement processing based on frequency domain filtering;
inputting the preprocessed voice signals into a preset hidden Markov model to extract an original keyword set in the preprocessed voice signals, wherein the original keyword set comprises a plurality of keywords;
ranking each keyword in the original keyword set according to the original keyword set and a preset keyword list to generate a ranked keyword set, wherein the preset keyword list is used for establishing a mapping relation between keywords and weight scores;
and determining the target feature field according to the sorted keyword set and a weight score threshold, wherein the weight score threshold is associated with the distribution of the weight scores of the keywords in the sorted keyword set.
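The ranking step of claim 2 amounts to ordering the extracted keywords by the weight scores in the preset keyword list. A minimal sketch, assuming the preset list is a plain keyword-to-score mapping and that keywords absent from it default to a score of 0 (both assumptions, since the disclosure does not fix the data structure):

```python
def rank_keywords(original: list[str], weight_list: dict[str, float]) -> list[str]:
    """Order keywords by their weight score, highest first.

    `weight_list` plays the role of the preset keyword list that
    maps keywords to weight scores; unknown keywords score 0."""
    return sorted(original, key=lambda kw: weight_list.get(kw, 0.0), reverse=True)

# Illustrative scores for keywords recovered from a voice instruction
weights = {"balance": 0.9, "display": 0.2, "account": 0.7}
print(rank_keywords(["display", "account", "balance"], weights))
# → ['balance', 'account', 'display']
```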
3. The artificial intelligence-based RPA control data processing method of claim 2, wherein the determining the target feature field according to the sorted keyword set and a weight score threshold comprises:
determining the weight score threshold T according to Equation 1 and the weight score set W = {w_1, w_2, ..., w_n} composed of the weight scores of the respective keywords in the sorted keyword set K, where Equation 1 is:
T = (1/n) * Σ_{i=1}^{n} w_i + (w_max − w_min) / n
wherein n is the number of keywords in the sorted keyword set, w_max is the maximum value in the weight score set W, w_min is the minimum value in the weight score set W, and w_i is the weight score corresponding to the i-th keyword in the sorted keyword set K;
and taking, from the weight score set W, the keywords corresponding to the elements whose weight scores are greater than the weight score threshold T as the target feature field.
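A sketch of the threshold selection in claim 3. The original rendering of Equation 1 is garbled in this text, so the mean-plus-range form below is one plausible reading built only from the terms the claim defines (n, w_max, w_min, w_i), not the patented formula itself:

```python
def weight_score_threshold(scores: list[float]) -> float:
    """Threshold T built from the terms claim 3 defines: the mean
    weight score plus the score range divided by n. The exact form
    of Equation 1 is not preserved in the text, so treat this as
    one plausible reading."""
    n = len(scores)
    return sum(scores) / n + (max(scores) - min(scores)) / n

def select_target_fields(keyword_scores: dict[str, float]) -> list[str]:
    """Keep only keywords whose score exceeds the threshold."""
    t = weight_score_threshold(list(keyword_scores.values()))
    return [kw for kw, w in keyword_scores.items() if w > t]

scores = {"balance": 0.9, "account": 0.7, "display": 0.2}
print(select_target_fields(scores))  # → ['balance']
```

With these illustrative scores the threshold evaluates to 0.6 + 0.7/3 ≈ 0.83, so only the top-weighted keyword survives as the target feature field.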
4. An artificial intelligence based RPA control data processing method according to any one of claims 1-3, wherein said obtaining target data to be processed from said data layer according to said target feature field comprises:
determining a time range corresponding to the target data to be processed from an index list of the data layer according to the target feature field and target identity information corresponding to the target user;
determining the first target data according to the target feature field and a first time range, wherein the first target data is structured data suitable for direct display of the interaction layer;
determining the second target data according to the target feature field and a second time range, wherein the second target data is reserved image data, and the first time range and the second time range form the time range;
correspondingly, the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, which includes:
performing image recognition on the second target data to generate second target recognition data, and converting the second target recognition data into the structured data to generate second target conversion data;
and extracting target data corresponding to the target feature field in the first target data and the second target conversion data according to time sequence distribution of the first time range and the second time range, and generating the result target data.
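The time-ordered merge of claim 4 (structured records from the first time range plus records recovered from retained images and already converted to structured form) can be sketched as a sort over a shared timestamp. `Record` and its fields are illustrative assumptions; the disclosure does not fix a record layout:

```python
from dataclasses import dataclass

@dataclass
class Record:
    timestamp: float  # seconds since epoch, an assumed representation
    value: str

def merge_by_time(structured: list[Record], converted: list[Record]) -> list[Record]:
    """Merge first-range structured records with second-range records
    recovered from retained image data, ordered by timestamp as the
    claim's 'time sequence distribution'."""
    return sorted(structured + converted, key=lambda r: r.timestamp)

recent = [Record(200.0, "row-b"), Record(300.0, "row-c")]   # structured data
archived = [Record(100.0, "row-a")]                         # from image conversion
print([r.value for r in merge_by_time(recent, archived)])
# → ['row-a', 'row-b', 'row-c']
```

The image-recognition and conversion step that produces the second list is upstream of this merge and is not sketched here.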
5. The RPA control data processing method based on artificial intelligence according to claim 4, wherein the control layer is connected with an external camera, and correspondingly, before the interaction layer displays the result target data, the method further comprises:
acquiring first portrait data in the first target data and second portrait data in the second target data;
determining a first cosine distance between a first feature vector corresponding to the first portrait data and a second feature vector corresponding to the second portrait data according to a preset convolutional neural network model, wherein the preset convolutional neural network model is a neural network model established based on FaceNet;
if the first cosine distance is smaller than a preset first distance threshold, extracting a first portrait contour from the first portrait data and a second portrait contour from the second portrait data according to a preset deep neural network, wherein the preset deep neural network is a neural network model established based on U-Net;
comparing the total number of first pixels corresponding to the first portrait contour with the total number of second pixels corresponding to the second portrait contour, and determining the portrait data corresponding to the portrait contour with the larger total number of pixels as comparison portrait data;
The control layer controls the external camera to acquire target user portrait data, and determines a second cosine distance between a third feature vector corresponding to the comparison portrait data and a fourth feature vector corresponding to the target user portrait data according to the preset convolutional neural network model;
and determining that the second cosine distance is smaller than a preset second distance threshold, wherein the preset second distance threshold is smaller than the preset first distance threshold.
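The comparisons in claim 5 reduce to a cosine distance between face embeddings plus a pixel-count comparison between contours. A minimal sketch, assuming plain Python lists as embeddings (a FaceNet-style model would supply e.g. 128-dimensional vectors; the model itself is out of scope here):

```python
import math

def cosine_distance(u: list[float], v: list[float]) -> float:
    """1 minus the cosine similarity between two face embeddings."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

def pick_comparison_portrait(pixels_first: int, pixels_second: int) -> str:
    """The claim keeps the portrait whose contour covers more pixels,
    on the assumption that it carries more facial detail."""
    return "first" if pixels_first >= pixels_second else "second"

# Identical embeddings give distance 0; orthogonal ones give distance 1
print(cosine_distance([1.0, 0.0], [1.0, 0.0]))  # → 0.0
print(cosine_distance([1.0, 0.0], [0.0, 1.0]))  # → 1.0
```

In the claimed flow, the first distance gates the contour comparison, and the second distance (against the live camera capture) must fall under a stricter threshold before the result target data is displayed.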
6. The artificial intelligence based RPA control data processing method according to claim 5, further comprising, after the interaction layer displays the result target data:
the control layer writes the result target data into the data layer, and marks the result target data in the data layer by utilizing the target characteristic field;
in a preset time range, when the interaction layer acquires another target data processing instruction input by the target user, determining the feature similarity between the target feature field and another target feature field, wherein the other target data processing instruction comprises the other target feature field, and the other target feature field is used for representing a keyword in another voice instruction input by the target user;
and if the feature similarity is greater than a preset similarity threshold, displaying prompt information on the interaction layer, wherein the prompt information is used for instructing the target user to select to directly display previous data or display updated data, the previous data comprises the result target data and the generation time corresponding to the result target data, and the updated data comprises updated target data.
7. The RPA control data processing method based on artificial intelligence of claim 6, wherein if the target user selects to directly display the previous data, the result target data is displayed at the interaction layer;
if the target user selects to display the updated data, the control layer responds to the other target data processing instruction and acquires other target data to be processed from the data layer according to the other target feature field, wherein the other target data to be processed comprises third target data and fourth target data, and the data types of the third target data and the fourth target data are different; the control layer performs data processing on the third target data and the fourth target data according to another operation instruction in the other target data processing instruction, so as to generate result update data; and the interaction layer displays the result update data.
8. A robotic process automation device, characterized by comprising: an interaction layer, a control layer and a data layer, wherein the control layer is used for writing data input and displayed at the interaction layer into the data layer, and wherein:
acquiring a target data processing instruction input by a target user at the interaction layer, wherein the target data processing instruction comprises a target feature field and an operation instruction, and the target feature field is used for representing keywords in a voice instruction input by the target user;
the control layer responds to the target data processing instruction and acquires target data to be processed from the data layer according to the target characteristic field, wherein the target data to be processed comprises first target data and second target data, and the data types of the first target data and the second target data are different;
the control layer performs data processing on the first target data and the second target data according to the operation instruction to generate result target data, wherein the data processing comprises at least one of data merging, data comparing and data reorganizing;
and the interaction layer displays the result target data.
9. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 7 via execution of the executable instructions.
10. A computer readable storage medium having stored therein computer executable instructions which when executed by a processor are adapted to carry out the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310858154.3A CN116579750B (en) | 2023-07-13 | 2023-07-13 | RPA control data processing method and device based on artificial intelligence |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116579750A true CN116579750A (en) | 2023-08-11 |
CN116579750B CN116579750B (en) | 2023-09-12 |
Family
ID=87536405
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310858154.3A Active CN116579750B (en) | 2023-07-13 | 2023-07-13 | RPA control data processing method and device based on artificial intelligence |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116579750B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112035630A (en) * | 2020-03-27 | 2020-12-04 | 北京来也网络科技有限公司 | Dialogue interaction method, device, equipment and storage medium combining RPA and AI |
CN113034095A (en) * | 2021-01-29 | 2021-06-25 | 北京来也网络科技有限公司 | Man-machine interaction method and device combining RPA and AI, storage medium and electronic equipment |
CN113591489A (en) * | 2021-07-30 | 2021-11-02 | 中国平安人寿保险股份有限公司 | Voice interaction method and device and related equipment |
CN114723551A (en) * | 2022-04-29 | 2022-07-08 | 中国建设银行股份有限公司 | Data processing method, device and equipment based on multiple data sources and storage medium |
CN115002099A (en) * | 2022-05-27 | 2022-09-02 | 北京来也网络科技有限公司 | Man-machine interactive file processing method and device for realizing IA (Internet of things) based on RPA (resilient packet Access) and AI (Artificial Intelligence) |
CN115454559A (en) * | 2022-10-17 | 2022-12-09 | 中银金融科技(苏州)有限公司 | RPA flow generation method, device, server and medium |
CN115794486A (en) * | 2022-11-14 | 2023-03-14 | 上海擎朗智能科技有限公司 | Robot information acquisition method, system, device and readable medium |
Also Published As
Publication number | Publication date |
---|---|
CN116579750B (en) | 2023-09-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109034069B (en) | Method and apparatus for generating information | |
CN114155543A (en) | Neural network training method, document image understanding method, device and equipment | |
CN112507806B (en) | Intelligent classroom information interaction method and device and electronic equipment | |
CN111199054B (en) | Data desensitization method and device and data desensitization equipment | |
US11238050B2 (en) | Method and apparatus for determining response for user input data, and medium | |
CN110807472B (en) | Image recognition method and device, electronic equipment and storage medium | |
CN112016502B (en) | Safety belt detection method, safety belt detection device, computer equipment and storage medium | |
CN111444313B (en) | Knowledge graph-based question and answer method, knowledge graph-based question and answer device, computer equipment and storage medium | |
CN112486338A (en) | Medical information processing method and device and electronic equipment | |
US20200241900A1 (en) | Automation tool | |
CN113591884B (en) | Method, device, equipment and storage medium for determining character recognition model | |
CN116579750B (en) | RPA control data processing method and device based on artificial intelligence | |
CN115221037A (en) | Interactive page testing method and device, computer equipment and program product | |
CN116383787A (en) | Page creation method, page creation device, computer equipment and storage medium | |
CN116957006A (en) | Training method, device, equipment, medium and program product of prediction model | |
CN111368889B (en) | Image processing method and device | |
CN113515280A (en) | Page code generation method and device | |
CN110955755A (en) | Method and system for determining target standard information | |
CN110688511A (en) | Fine-grained image retrieval method and device, computer equipment and storage medium | |
CN116309274B (en) | Method and device for detecting small target in image, computer equipment and storage medium | |
CN114490986B (en) | Computer-implemented data mining method, device, electronic equipment and storage medium | |
CN114969385B (en) | Knowledge graph optimization method and device based on document attribute assignment entity weight | |
CN114840700B (en) | Image retrieval method and device for realizing IA by combining RPA and AI and electronic equipment | |
CN112347738B (en) | Bidirectional encoder characterization quantity model optimization method and device based on referee document | |
CN115525804A (en) | Information query method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||