CN114783420A - Data processing method and system - Google Patents

Data processing method and system

Info

Publication number
CN114783420A
CN114783420A (application CN202210709607.1A)
Authority
CN
China
Prior art keywords
data
voice
processed
target
information management
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210709607.1A
Other languages
Chinese (zh)
Inventor
何超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Bodian Technology Co ltd
Original Assignee
Chengdu Bodian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Bodian Technology Co ltd filed Critical Chengdu Bodian Technology Co ltd
Priority to CN202210709607.1A
Publication of CN114783420A
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/02 Feature extraction for speech recognition; Selection of recognition unit
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33 Querying
    • G06F16/3331 Query processing
    • G06F16/334 Query execution
    • G06F16/3343 Query execution using phonetics
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/194 Calculation of difference between files
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/26 Speech to text systems
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L17/00 Speaker identification or verification
    • G10L17/02 Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/60 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for measuring the quality of voice signals

Abstract

The invention discloses a data processing method and system. The data processing system comprises: an information management unit, a data analysis unit and a work card. The work card obtains voice data to be processed and sends the voice data to be processed to the information management unit; the information management unit determines voice feature data extracted from the voice data to be processed as target voice feature data, finds a target object identifier matched with the target voice feature data, and sends the data to be evaluated, comprising the voice data to be processed and the target object identifier, to the data analysis unit; and the data analysis unit determines an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode. The invention can effectively reduce the number of work cards that must be manufactured when the number of workers increases, thereby effectively reducing the consumption of manufacturing resources and improving the utilization efficiency of the work cards.

Description

Data processing method and system
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method and system.
Background
A work card is a device worn on the chest by workers such as sales or customer service staff to display information such as the worker's name. Enterprises often require workers to wear work cards, record the conversations between workers and clients through the work cards, and evaluate and manage the workers based on the recorded voices.
Currently, workers may be placed in one-to-one correspondence with the work cards they wear. Specifically, before a worker wears a work card for service, the worker's identity information can be bound to that work card.
However, when the number of workers is large, the number of work cards that must be made also increases, which results in a large resource consumption for making the work cards.
Disclosure of Invention
In view of the above problems, the present invention provides a data processing method and system that overcome, or at least partially solve, the above problems. The technical solution is as follows:
A data processing method is applied to a data processing system, and the data processing system comprises: an information management unit, a data analysis unit and a work card; the data processing method comprises the following steps:
the work card obtains voice data to be processed and sends the voice data to be processed to the information management unit;
the information management unit determines voice feature data extracted from the voice data to be processed as target voice feature data, finds out a target object identifier matched with the target voice feature data, and sends the data to be evaluated comprising the voice data to be processed and the target object identifier to the data analysis unit; the information management unit stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly comprises voice characteristic data and an object identifier;
and the data analysis unit determines an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode.
Optionally, when the to-be-processed voice data includes voice data input by a plurality of objects, the target voice feature data includes voice feature data of each object; the finding out the target object identifier matched with the target voice feature data comprises:
the information management unit finds out target voiceprint information containing voice characteristic data of the object from each voiceprint information;
the information management unit determines an object identifier in the target voiceprint information as the target object identifier.
Optionally, the sending of the to-be-evaluated data including the to-be-processed voice data and the target object identifier to the data analysis unit comprises:
the information management unit extracts target voice data matched with the voice feature data in the target voiceprint information from the voice data to be processed;
and the information management unit determines the target voice data and the target object identification as the data to be evaluated and sends the data to the data analysis unit.
Optionally, the data analysis unit includes: a voice recognition unit, a keyword matching unit, a dialect matching unit and a scoring unit; the data analysis unit determining an evaluation score corresponding to the data to be evaluated according to a predefined scoring mode comprises the following steps:
the voice recognition unit converts the voice data to be processed into corresponding texts;
the keyword matching unit extracts predefined keywords from the text;
the dialect matching unit carries out similarity matching on the text and a predefined dialect text to obtain a first score;
the scoring unit determines the evaluation score based on the keyword and the first score.
Optionally, the work card comprises: a recording unit and a communication unit; the work card obtaining the voice data to be processed and sending the voice data to be processed to the information management unit includes:
the recording unit obtains the voice data to be processed and sends the voice data to the communication unit;
and the communication unit sends the voice data to be processed to the information management unit.
Optionally, the data processing method further includes:
and the communication unit acquires the state data of the work cards and sends the state data to the information management unit.
Optionally, the data processing method further includes:
the workcards acquire workcard control instructions issued by the information management unit and carry out corresponding control based on the workcard control instructions.
A data processing system, said data processing system comprising: an information management unit, a data analysis unit and a work card; wherein:
the worker card obtains voice data to be processed and sends the voice data to be processed to the information management unit;
the information management unit determines voice feature data extracted from the voice data to be processed as target voice feature data, finds out a target object identifier matched with the target voice feature data, and sends the data to be evaluated comprising the voice data to be processed and the target object identifier to the data analysis unit; the information management unit stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly comprises voice feature data and an object identifier;
and the data analysis unit determines an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode.
Optionally, when the to-be-processed voice data includes voice data input by a plurality of objects, the target voice feature data includes voice feature data of each object; the finding out of the target object identifier matched with the target voice feature data is configured as:
the information management unit finds out target voiceprint information containing voice characteristic data of the object from each voiceprint information;
the information management unit determines an object identifier in the target voiceprint information as the target object identifier.
Optionally, the sending of the to-be-evaluated data including the to-be-processed voice data and the target object identifier to the data analysis unit is configured to:
the information management unit extracts target voice data matched with the voice feature data in the target voiceprint information from the voice data to be processed;
and the information management unit determines the target voice data and the target object identification as the data to be evaluated and sends the data to the data analysis unit.
Optionally, the data analysis unit includes: a voice recognition unit, a keyword matching unit, a dialect matching unit and a scoring unit; the data analysis unit determining an evaluation score corresponding to the data to be evaluated according to a predefined scoring mode is configured as:
the voice recognition unit converts the voice data to be processed into corresponding texts;
the keyword matching unit extracts predefined keywords from the text;
the dialect matching unit is used for carrying out similarity matching on the text and a predefined dialect text to obtain a first score;
the scoring unit determines the evaluation score based on the keyword and the first score.
Optionally, the work card comprises: a recording unit and a communication unit; the work card obtaining voice data to be processed and sending the voice data to be processed to the information management unit is configured as:
the recording unit obtains the voice data to be processed and sends the voice data to the communication unit;
and the communication unit sends the voice data to be processed to the information management unit.
Optionally, the communication unit obtains the status data of the work card and sends the status data to the information management unit.
Optionally, the work card obtains the work card control instruction issued by the information management unit, and performs corresponding control based on the work card control instruction.
The invention provides a data processing method and system, wherein the data processing system comprises: an information management unit, a data analysis unit and a work card. The work card obtains voice data to be processed and sends the voice data to be processed to the information management unit; the information management unit determines voice feature data extracted from the voice data to be processed as target voice feature data, finds a target object identifier matched with the target voice feature data, and sends the data to be evaluated, comprising the voice data to be processed and the target object identifier, to the data analysis unit; the information management unit stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly comprises voice feature data and an object identifier; and the data analysis unit determines an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode. The invention can evaluate workers' voice data without binding work cards to workers in a one-to-one correspondence, and when there are many workers, different workers can use the same work card at different times. The invention can therefore effectively reduce the number of work cards that must be manufactured when the number of workers increases, thereby effectively reducing the consumption of manufacturing resources and improving the utilization efficiency of the work cards.
The above description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood, and to make the above and other objects, features, and advantages of the present invention more apparent.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only embodiments of the present application, and that those skilled in the art can obtain other drawings from the provided drawings without creative effort.
Fig. 1 is a flowchart illustrating a first data processing method according to an embodiment of the present invention;
Fig. 2 is a flowchart illustrating a third data processing method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a data processing system according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
As shown in fig. 1, the present embodiment proposes a first data processing method, which can be applied to a data processing system comprising: an information management unit, a data analysis unit and a work card; the data processing method may include the following steps:
s101, the worker card obtains voice data to be processed and sends the voice data to be processed to an information management unit;
it should be noted that the worker cards of the present invention do not need to correspond to the workers one by one, i.e., do not need to bind the worker cards with the identity information of a certain worker.
The voice data to be processed may be voice data of a worker.
Specifically, the worker card can record the sound emitted by the worker, obtain the voice data to be processed, and upload the obtained voice data to be processed to the information management unit.
The information management unit can be a server or a cloud server. The present invention is not limited to a specific device type of the information management unit.
S102, an information management unit determines voice feature data extracted from voice data to be processed as target voice feature data;
specifically, after receiving the voice data to be processed, the information management unit may extract voice feature data from the voice data to be processed, and determine the extracted voice feature data as target voice feature data. It can be understood that the target voice feature data extracted by the information management unit may be voiceprint information of the staff.
S103, the information management unit finds out a target object identifier matched with the target voice characteristic data; the voice recognition method comprises the following steps that an information management unit stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly comprises voice feature data and an object identifier;
the target object identifier may be an identifier of an object matching the target voice feature data. It should be noted that the object may be a person or a device that emits sound, for example, the object may be a worker; as another example, the object may be an electronic device.
When the object is a person, the identification can be name, identity card number, post and other identity information; when the object is an electronic device, its identification may be a machine number or the like.
The information management unit may be configured in advance with the correspondence between a plurality of different objects and their voice feature data. Specifically, for any object, the information management unit may obtain an identifier of the object and a segment of voice data produced by the object, extract corresponding voice feature data from that segment of voice data, and store the extracted voice feature data in association with the identifier of the object. Of course, the information management unit may also directly obtain the identifier of a certain object and its voice feature data as input by other devices or units, without extracting the voice feature data itself.
Specifically, after obtaining the target voice feature data, the information management unit may find out voiceprint information including the target voice feature data from each piece of stored voiceprint information, and determine an object identifier in the found voiceprint information as the target object identifier.
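For concreteness, the following is a minimal sketch of how the stored voiceprint information and the matching in steps S102 to S103 could be organized. It assumes, purely for illustration, that voice feature data is a fixed-length embedding vector and that matching means cosine similarity above a threshold; the names VoiceprintStore, enroll and find_object_id are hypothetical and not part of the patent.
```python
# Minimal sketch of the voiceprint store held by the information management unit
# (steps S102-S103). Assumptions not specified in the patent: voice feature data is
# a fixed-length embedding vector, and "matching" means cosine similarity above a
# threshold. VoiceprintStore, enroll and find_object_id are illustrative names.
import math
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VoiceprintInfo:
    object_id: str           # e.g. a worker's name, ID number or a machine number
    features: List[float]    # voice feature data (embedding)

def cosine_similarity(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class VoiceprintStore:
    """Stores at least one piece of voiceprint information."""

    def __init__(self, threshold: float = 0.8):
        self.entries: List[VoiceprintInfo] = []
        self.threshold = threshold

    def enroll(self, object_id: str, features: List[float]) -> None:
        # Pre-configure the correspondence between an object and its voice feature data.
        self.entries.append(VoiceprintInfo(object_id, features))

    def find_object_id(self, target_features: List[float]) -> Optional[str]:
        # Find the stored voiceprint whose feature data best matches the target
        # voice feature data; return its object identifier, or None if no match.
        best_id, best_sim = None, 0.0
        for entry in self.entries:
            sim = cosine_similarity(entry.features, target_features)
            if sim > best_sim:
                best_id, best_sim = entry.object_id, sim
        return best_id if best_sim >= self.threshold else None
```
Under these assumptions, step S103 reduces to calling find_object_id with the target voice feature data extracted in step S102.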
S104, the information management unit sends the data to be evaluated comprising the voice data to be processed and the target object identification to the data analysis unit;
specifically, after determining the target object identifier, the information management unit may send the data to be evaluated, which includes the voice data to be processed and the target object identifier, to the data analysis unit.
S105, the data analysis unit determines an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode.
Optionally, the scoring mode may first determine the emotion and attitude type corresponding to the voice data to be processed and then determine the corresponding evaluation score according to that emotion and attitude type;
optionally, the scoring mode may first determine the text corresponding to the voice data to be processed and then determine the corresponding evaluation score according to that text. Of course, the scoring mode may also determine the corresponding evaluation score based on both the emotion and attitude and the text corresponding to the voice data to be processed.
Specifically, the data analysis unit may determine the corresponding evaluation score based on the data to be evaluated after obtaining the data to be evaluated.
It can be understood that a manager can evaluate and manage the staff according to the evaluation score determined by the data analysis unit, which improves management efficiency.
It should be noted that, by configuring a worker's voice feature data and person identifier in advance, when voice data input by any worker is obtained, the corresponding person identifier can be found from the voice data and the voice data can be scored, so as to determine the evaluation score corresponding to that worker's voice data, that is, the evaluation score for the worker. It can be understood that the work cards in the invention can support the evaluation of workers' voice data without being bound to specific workers, and when there are many workers, different workers can use the same work card at different times. Therefore, the invention can effectively reduce the number of work cards that must be manufactured when the number of workers increases, thereby effectively reducing the consumption of manufacturing resources and improving the utilization efficiency of the work cards.
The data processing method provided by the invention can be applied to a data processing system, and the data processing system comprises: an information management unit, a data analysis unit and a work card; the data processing method may include: the work card obtains voice data to be processed and sends the voice data to be processed to the information management unit; the information management unit determines voice feature data extracted from the voice data to be processed as target voice feature data, finds a target object identifier matched with the target voice feature data, and sends the data to be evaluated, comprising the voice data to be processed and the target object identifier, to the data analysis unit; the information management unit stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly comprises voice feature data and an object identifier; and the data analysis unit determines an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode. The invention can evaluate workers' voice data without binding work cards to workers in a one-to-one correspondence, and when there are many workers, different workers can use the same work card at different times. The invention can effectively reduce the number of work cards that must be manufactured when the number of workers increases, thereby effectively reducing the consumption of manufacturing resources and improving the utilization efficiency of the work cards.
Based on fig. 1, the present invention proposes a second data processing method. In the method, when the voice data to be processed comprises the voice data input by a plurality of objects, the target voice characteristic data comprises the voice characteristic data of each object; at this time, step S103 may include:
the information management unit finds out target voiceprint information containing voice characteristic data of an object from all voiceprint information;
the information management unit determines an object identifier in the target voiceprint information as a target object identifier.
It should be noted that when a worker wears the work card to communicate with a customer, the work card may record voice data that includes the worker, the customer and surrounding personnel. In this case, the to-be-processed voice data obtained by the work card may include voice data of a plurality of objects, and the target voice feature data extracted from the voice data to be processed by the information management unit may include voice feature data of a plurality of objects.
The target voiceprint information may be voiceprint information including voice feature data of an object in the target voice feature data. It can be understood that the target voiceprint information is the voiceprint information of the worker.
Specifically, the information management unit may search for the target voiceprint information in each saved voiceprint information after the target voice feature data is extracted. Then, the information management unit may determine the object identifier in the target voiceprint information as the target object identifier. It is understood that the target object identification may be a person identification of the staff member.
Optionally, in the second data processing method, the step S104 may include:
the information management unit extracts target voice data matched with the voice feature data in the target voiceprint information from the voice data to be processed;
and the information management unit determines the target voice data and the target object identification as data to be evaluated and sends the data to the data analysis unit.
It should be noted that the voice feature data in the target voice feature data and the voice data in the voice data to be processed may be in one-to-one correspondence. Therefore, based on the voice feature data in the target voiceprint information determined from the target voice feature data, that is, the worker's voice feature data, the worker's voice data can be determined from the voice data to be processed.
The target voice data can be the voice data of the staff in the voice data to be processed.
Specifically, after the target voice data is determined, the information management unit may send the target voice data and the target object identifier to the data analysis unit, and the data analysis unit performs corresponding scoring.
It should be noted that, in the second data processing method, when the work card obtains voice data containing the worker and other objects, the worker's voice data can be extracted and analyzed to obtain the corresponding evaluation score, so that the service evaluation of the worker is realized.
It can be understood that, when the to-be-processed voice data obtained by the work card includes voice data of a plurality of workers and other objects, the target voice feature data extracted from the to-be-processed voice data by the present invention may include voice feature data of a plurality of workers and other objects, at this time, the present invention may find corresponding voiceprint information from the stored voiceprint information according to each voice feature data, and at this time, the target voiceprint information may include voiceprint information of a plurality of workers. Then, the information management unit may determine the object identifier of each voiceprint information in the target voiceprint information as the target object identifier, determine the voice data corresponding to each voiceprint information in the target voiceprint information from the voice data to be processed, determine each determined voice data as the target voice data, and send the target voice data and the target object identifier to the data analysis unit. At this time, the data analysis unit may evaluate the voice data corresponding to each object identifier in the target object identifiers, respectively, so as to evaluate the voice data of each worker.
It can be understood that, when the voice data to be processed includes voice data of a plurality of objects, the method and the device can also evaluate the voice data of the staff, and can further ensure the realization of the evaluation of the voice data.
The data processing method provided by the invention can further ensure the realization of the voice data evaluation.
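As an illustration of this multi-object case, the sketch below assumes the voice data to be processed has already been split into per-speaker segments (for example by a diarization step, which the patent does not specify) and that each segment carries its own voice feature data; only segments matching a stored worker voiceprint become target voice data. It reuses the hypothetical VoiceprintStore sketched earlier.
```python
# Illustrative sketch of the multi-speaker case: the voice data to be processed is
# assumed to be already split into per-speaker segments, each carrying its own
# voice feature data. Only segments matching a stored worker voiceprint become
# target voice data; VoiceprintStore is the hypothetical store sketched earlier.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Segment:
    audio: bytes             # raw voice data for one speaker turn
    features: List[float]    # voice feature data extracted from this segment

def build_data_to_evaluate(segments: List[Segment],
                           store: "VoiceprintStore") -> Dict[str, List[bytes]]:
    """Return {target object identifier: [target voice data, ...]} for matched workers."""
    to_evaluate: Dict[str, List[bytes]] = {}
    for seg in segments:
        object_id = store.find_object_id(seg.features)
        if object_id is None:
            continue   # customer or bystander: no stored voiceprint, so skip it
        to_evaluate.setdefault(object_id, []).append(seg.audio)
    return to_evaluate
```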
Based on fig. 1, the present embodiment proposes a third data processing method as shown in fig. 2. In the method, the data analysis unit comprises: the system comprises a voice recognition unit, a keyword matching unit, a dialect matching unit and a scoring unit; at this time, step S105 may include the steps of:
s201, a voice recognition unit converts voice data to be processed into corresponding texts;
specifically, the speech recognition unit may convert the speech data to be processed by using a speech recognition technology to obtain a corresponding text.
It should be noted that, after the text is obtained, the data of the text in multiple dimensions can be obtained, and then the final evaluation is performed based on the data in multiple dimensions.
S202, extracting predefined keywords from the text by a keyword matching unit;
the keywords can be sales vocabularies, forbidden words and the like. It should be noted that the keyword may be set by a technician according to actual situations, and the present invention is not limited to this.
S203, the speech matching unit performs similarity matching on the text and a predefined speech text to obtain a first score;
it is understood that the higher the similarity match between the text and the spoken text, the higher the score of the first score may be.
S204, the scoring unit determines an evaluation score based on the keyword and the first score.
Specifically, the scoring unit may compute a composite score based on the keyword and the first score.
It should be noted that, in the scoring unit, corresponding scores and/or weights may be set for different dimensions, so that when the scoring unit obtains data of different dimensions, scores corresponding to different dimensions may be determined, and then weighted summation is performed to obtain a final evaluation score.
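A hedged sketch of such a scoring mode is given below. The keyword lists, the difflib-based similarity and the specific weights are illustrative assumptions; the patent only requires that the predefined scoring mode combine the extracted keywords with the first score obtained from similarity matching against the dialect text.
```python
# Hedged sketch of the third method's scoring (S202-S204): keyword hits and the
# first score from similarity matching against the predefined dialect (script) text
# are combined by a weighted sum. The keyword lists, the difflib-based similarity
# and the weights are illustrative assumptions only.
import difflib
from typing import Set

SALES_KEYWORDS: Set[str] = {"discount", "warranty", "thank you"}   # example predefined keywords
FORBIDDEN_WORDS: Set[str] = {"guaranteed return"}                  # example forbidden words

def first_score(text: str, script_text: str) -> float:
    # Similarity matching between the recognized text and the predefined dialect
    # text, scaled to 0-100.
    return 100.0 * difflib.SequenceMatcher(None, text, script_text).ratio()

def evaluation_score(text: str, script_text: str,
                     w_script: float = 0.6, w_keyword: float = 0.4) -> float:
    hits = sum(1 for kw in SALES_KEYWORDS if kw in text)
    violations = sum(1 for fw in FORBIDDEN_WORDS if fw in text)
    keyword_score = max(0.0, min(100.0, 20.0 * hits - 50.0 * violations))
    # Weighted summation over the two dimensions to obtain the final evaluation score.
    return w_script * first_score(text, script_text) + w_keyword * keyword_score
```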
Specifically, the invention can obtain the related data on a plurality of dimensions from the converted text, and then comprehensively evaluate the voice data by using the related data on the plurality of dimensions, thereby effectively enriching the evaluation dimensions and improving the evaluation accuracy.
According to the data processing method provided by the embodiment, the related data in multiple dimensions can be obtained from the converted text, and then the related data in multiple dimensions is utilized to perform comprehensive evaluation on the voice data, so that the evaluation dimensions can be effectively enriched, and the evaluation accuracy is improved.
Based on fig. 1, the present embodiment proposes a fourth data processing method. In this method, the work card comprises: a recording unit and a communication unit; in this case, the work card obtaining the voice data to be processed and sending the voice data to be processed to the information management unit includes:
the recording unit obtains voice data to be processed and sends the voice data to the communication unit;
the communication unit sends the voice data to be processed to the information management unit.
Optionally, the communication unit may be a Wi-Fi module. A corresponding Wi-Fi name and password can be stored in the work card in advance, so that the work card can join the network over Wi-Fi and communicate with the information management unit.
Optionally, the card may further include: the device comprises an indicator light, a rechargeable battery, a USB interface, a display screen and a storage unit;
the indicator light can be used for displaying the state of the work card, including whether the work card is in failure, whether networking is performed, whether recording is performed and the like.
The USB interface can be used for charging the work card and reading and writing data.
The storage unit can be used for temporarily storing the recorded sound data and necessary system data.
Optionally, the method may further include:
the communication unit obtains the state data of the work card and sends the state data to the information management unit.
The state data may include an operating state, power information, storage occupancy information, and the like.
Optionally, the method may further include:
the worker card obtaining information management unit sends a worker card control command to the worker card obtaining information management unit, and corresponding control is carried out based on the worker card control command.
Optionally, the card control command may include a recording start command, a recording end command, and the like.
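To make the work-card side concrete, the following rough sketch shows how a card with a Wi-Fi communication unit might upload recordings, report status data and poll for work card control instructions. The HTTP endpoints, JSON fields and the use of the requests library are assumptions for illustration only; the patent does not prescribe any particular transport or message format.
```python
# Rough sketch of the work-card side: a recording unit hands audio to a communication
# unit (here a Wi-Fi module) that uploads it, reports status data and polls for work
# card control instructions. Endpoints and fields are hypothetical.
from typing import Optional
import requests

SERVER = "http://info-management.example/api"   # hypothetical information management unit

def upload_voice_data(card_id: str, wav_bytes: bytes) -> None:
    # The communication unit sends the voice data to be processed to the
    # information management unit.
    requests.post(f"{SERVER}/voice", params={"card_id": card_id},
                  data=wav_bytes, timeout=30)

def report_status(card_id: str, battery_percent: int,
                  storage_used_mb: int, recording: bool) -> None:
    # Status data: operating state, power information, storage occupancy information.
    requests.post(f"{SERVER}/status", json={
        "card_id": card_id,
        "operating_state": "recording" if recording else "idle",
        "battery_percent": battery_percent,
        "storage_used_mb": storage_used_mb,
    }, timeout=5)

def poll_control_instruction(card_id: str) -> Optional[str]:
    # Fetch the latest work card control instruction, e.g. "start_recording".
    resp = requests.get(f"{SERVER}/control", params={"card_id": card_id}, timeout=5)
    return resp.json().get("command") if resp.ok else None
```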
It should be noted that, compared with the prior art, the worker card of the invention has more abundant functions, effectively ensures the realization of voice data evaluation, and enhances the practicability of the worker card.
The data processing method provided by the embodiment can effectively guarantee the realization of voice data evaluation and enhance the practicability of the card.
Based on fig. 1, the present embodiment proposes a fifth data processing method. In this method, the data processing system may further include a background management unit. A technician may configure information in the background management unit, and the background management unit may synchronously update the configured information to the information management unit.
Specifically, a technician can add managers in the background management unit. Only after a manager's identity authority has been authenticated by the background management unit does the background management unit accept configuration information and editing instructions input by that manager, for example, adding, deleting or modifying information about workers (such as a professional consultant), or adding and deleting voiceprint information.
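The following minimal sketch illustrates such a gate: a configuration edit is applied and synchronized to the information management unit only if the manager's identity authority is authenticated. The token-based check, the edit format and the reuse of the earlier VoiceprintStore sketch are all hypothetical.
```python
# Minimal sketch of the background management unit's gate: configuration edits are
# applied and synchronized to the information management unit only after the
# manager's identity authority is authenticated. The credential check, the edit
# format and the reuse of the hypothetical VoiceprintStore are assumptions.
from typing import Dict

AUTHORIZED_MANAGERS: Dict[str, str] = {"mgr-001": "secret-token"}   # example credential store

def apply_config_edit(manager_id: str, token: str, edit: dict,
                      store: "VoiceprintStore") -> bool:
    if AUTHORIZED_MANAGERS.get(manager_id) != token:
        return False                                  # authentication failed: reject
    if edit["op"] == "add_voiceprint":
        store.enroll(edit["object_id"], edit["features"])
    elif edit["op"] == "delete_voiceprint":
        store.entries = [e for e in store.entries
                         if e.object_id != edit["object_id"]]
    return True                                       # edit synchronized to the store
```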
It should be noted that, by setting the background management unit, the invention can effectively implement control over the relevant information in the information management unit, further ensure the implementation of voice data evaluation, and improve data processing efficiency.
The data processing method provided by the embodiment can effectively realize control of relevant information in the information management unit through the arrangement of the background management unit, further guarantee the realization of voice data evaluation, and improve the data processing efficiency.
Corresponding to the method shown in fig. 1, the present embodiment provides a data processing system as shown in fig. 3. The data processing system includes: an information management unit 101, a data analysis unit 102, and a work card 103; wherein:
the worker card 103 obtains voice data to be processed and sends the voice data to be processed to the information management unit 101;
the information management unit 101 determines voice feature data extracted from the voice data to be processed as target voice feature data, finds a target object identifier matched with the target voice feature data, and sends to-be-evaluated data including the voice data to be processed and the target object identifier to the data analysis unit 102; the information management unit 101 stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly includes voice feature data and an object identifier;
the data analysis unit 102 determines an evaluation score corresponding to the data to be evaluated according to a predefined scoring mode.
It should be noted that, for specific processing procedures of the work card 103, the information management unit 101, and the data analysis unit 102 and technical effects brought by the processing procedures, reference may be made to steps S101, S102, S103, S104, and S105 in fig. 1, and details of related descriptions are not repeated.
Optionally, when the to-be-processed voice data includes voice data input by a plurality of objects, the target voice feature data includes voice feature data of each object; the target object identification matched with the target voice characteristic data is found out and is set as follows:
the information management unit 101 finds target voiceprint information containing voice feature data of one object from each piece of voiceprint information;
the information management unit 101 determines an object identifier in the target voiceprint information as the target object identifier.
Optionally, the sending of the to-be-evaluated data including the to-be-processed voice data and the target object identifier to the data analysis unit is configured to:
the information management unit 101 extracts target voice data matched with the voice feature data in the target voiceprint information from the voice data to be processed;
the information management unit 101 determines the target voice data and the target object identifier as the data to be evaluated, and sends the data to the data analysis unit 102.
Optionally, the data analysis unit 102 includes: a voice recognition unit, a keyword matching unit, a dialect matching unit and a scoring unit; the data analysis unit determining an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode is configured as:
the voice recognition unit converts the voice data to be processed into corresponding texts;
the keyword matching unit extracts predefined keywords from the text;
the dialect matching unit is used for carrying out similarity matching on the text and a predefined dialect text to obtain a first score;
the scoring unit determines the evaluation score based on the keyword and the first score.
Optionally, the work card 103 includes: a recording unit and a communication unit; the work card 103 obtaining voice data to be processed and sending the voice data to be processed to the information management unit is configured as:
the recording unit obtains the voice data to be processed and sends the voice data to the communication unit;
and the communication unit sends the voice data to be processed to the information management unit.
Optionally, the communication unit obtains the status data of the work card and sends the status data to the information management unit.
Optionally, the work card obtains the work card control instruction issued by the information management unit, and performs corresponding control based on the work card control instruction.
The data processing system provided by this embodiment can comprise an information management unit 101, a data analysis unit 102 and a work card 103. The work card 103 obtains voice data to be processed and sends the voice data to be processed to the information management unit 101; the information management unit 101 determines voice feature data extracted from the voice data to be processed as target voice feature data, finds a target object identifier matched with the target voice feature data, and sends the data to be evaluated, including the voice data to be processed and the target object identifier, to the data analysis unit 102; the information management unit 101 stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly comprises voice feature data and an object identifier; and the data analysis unit 102 determines an evaluation score corresponding to the data to be evaluated according to a predefined scoring mode. The invention can evaluate workers' voice data without binding work cards 103 to workers in a one-to-one correspondence, and when there are many workers, different workers can use the same work card at different times. The invention can effectively reduce the number of work cards 103 that must be manufactured when the number of workers increases, thereby effectively reducing the consumption of manufacturing resources and improving the utilization efficiency of the work cards.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above are merely examples of the present application and are not intended to limit the present application. Various modifications and changes may occur to those skilled in the art to which the present application pertains. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (10)

1. A data processing method is applied to a data processing system, and the data processing system comprises: an information management unit, a data analysis unit and a work card; the data processing method comprises the following steps:
the work card obtains voice data to be processed and sends the voice data to be processed to the information management unit;
the information management unit determines voice feature data extracted from the voice data to be processed as target voice feature data, searches for a target object identifier matched with the target voice feature data, and sends to-be-evaluated data comprising the voice data to be processed and the target object identifier to the data analysis unit; the information management unit stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly comprises voice feature data and an object identifier;
and the data analysis unit determines an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode.
2. The data processing method according to claim 1, wherein when voice data input by a plurality of objects is included in the voice data to be processed, voice feature data of each object is included in the target voice feature data; the finding out of the target object identifier matched with the target voice feature data comprises:
the information management unit finds out target voiceprint information containing voice characteristic data of the object from each voiceprint information;
the information management unit determines an object identifier in the target voiceprint information as the target object identifier.
3. The data processing method according to claim 2, wherein the sending of the data to be evaluated including the voice data to be processed and the target object identifier to the data analysis unit includes:
the information management unit extracts target voice data matched with the voice feature data in the target voiceprint information from the voice data to be processed;
and the information management unit determines the target voice data and the target object identification as the data to be evaluated and sends the data to the data analysis unit.
4. The data processing method of claim 1, wherein the data analysis unit comprises: a voice recognition unit, a keyword matching unit, a dialect matching unit and a scoring unit; the data analysis unit determining an evaluation score corresponding to the data to be evaluated according to a predefined scoring mode comprises the following steps:
the voice recognition unit converts the voice data to be processed into corresponding texts;
the keyword matching unit extracts predefined keywords from the text;
the dialect matching unit is used for carrying out similarity matching on the text and a predefined dialect text to obtain a first score;
the scoring unit determines the evaluation score based on the keyword and the first score.
5. The data processing method of claim 1, wherein the workcard comprises: a recording unit and a communication unit; the worker card obtains voice data to be processed, and sends the voice data to be processed to the information management unit, and the method comprises the following steps:
the recording unit obtains the voice data to be processed and sends the voice data to the communication unit;
and the communication unit sends the voice data to be processed to the information management unit.
6. The data processing method of claim 5, further comprising:
the communication unit obtains the state data of the work card and sends the state data to the information management unit.
7. The data processing method of claim 1, further comprising:
the work card obtains the work card control command issued by the information management unit, and corresponding control is carried out based on the work card control command.
8. A data processing system, comprising: an information management unit, a data analysis unit and a work card; wherein:
the work card obtains voice data to be processed and sends the voice data to be processed to the information management unit;
the information management unit determines voice feature data extracted from the voice data to be processed as target voice feature data, finds out a target object identifier matched with the target voice feature data, and sends the data to be evaluated comprising the voice data to be processed and the target object identifier to the data analysis unit; the information management unit stores at least one piece of voiceprint information, and each piece of voiceprint information correspondingly comprises voice feature data and an object identifier;
and the data analysis unit determines an evaluation score corresponding to the data to be evaluated according to a predefined evaluation mode.
9. The data processing system according to claim 8, wherein when the voice data to be processed includes voice data input by a plurality of objects, voice feature data of each of the objects is included in the target voice feature data; the finding out of the target object identifier matched with the target voice feature data is configured as:
the information management unit finds out target voiceprint information containing voice characteristic data of the object from each voiceprint information;
the information management unit determines an object identifier in the target voiceprint information as the target object identifier.
10. The data processing system of claim 9, wherein the sending of the data to be evaluated including the voice data to be processed and the target object identifier to the data analysis unit is configured to:
the information management unit extracts target voice data matched with the voice feature data in the target voiceprint information from the voice data to be processed;
and the information management unit determines the target voice data and the target object identification as the data to be evaluated and sends the data to the data analysis unit.
CN202210709607.1A 2022-06-22 2022-06-22 Data processing method and system Pending CN114783420A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210709607.1A CN114783420A (en) 2022-06-22 2022-06-22 Data processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210709607.1A CN114783420A (en) 2022-06-22 2022-06-22 Data processing method and system

Publications (1)

Publication Number Publication Date
CN114783420A true CN114783420A (en) 2022-07-22

Family

ID=82422413

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210709607.1A Pending CN114783420A (en) 2022-06-22 2022-06-22 Data processing method and system

Country Status (1)

Country Link
CN (1) CN114783420A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107204195A (en) * 2017-05-19 2017-09-26 四川新网银行股份有限公司 A kind of intelligent quality detecting method analyzed based on mood
CN109151218A (en) * 2018-08-21 2019-01-04 平安科技(深圳)有限公司 Call voice quality detecting method, device, computer equipment and storage medium
CN111199158A (en) * 2019-12-30 2020-05-26 沈阳民航东北凯亚有限公司 Method and device for scoring civil aviation customer service
CN111223487A (en) * 2019-12-31 2020-06-02 联想(北京)有限公司 Information processing method and electronic equipment
CN111311327A (en) * 2020-02-19 2020-06-19 平安科技(深圳)有限公司 Service evaluation method, device, equipment and storage medium based on artificial intelligence
CN113257253A (en) * 2021-06-29 2021-08-13 明品云(北京)数据科技有限公司 Text extraction method, system, device and medium
CN113554334A (en) * 2021-08-02 2021-10-26 上海明略人工智能(集团)有限公司 Method, system, device, server and storage medium for evaluating user recording behaviors
CN113571068A (en) * 2021-07-27 2021-10-29 上海明略人工智能(集团)有限公司 Method and device for voice data encryption, electronic equipment and readable storage medium
CN114157758A (en) * 2021-09-25 2022-03-08 南方电网数字电网研究院有限公司 Voice intelligent customer service character quality inspection method and system
CN114420130A (en) * 2022-01-26 2022-04-29 广东电网有限责任公司 Telephone voice interaction method, device, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220722