CN114819474A - Physician evaluation method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114819474A
CN114819474A (application CN202210222588.XA)
Authority
CN
China
Prior art keywords
evaluation
physician
text
video segment
pet
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210222588.XA
Other languages
Chinese (zh)
Inventor
彭永鹤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New Ruipeng Pet Healthcare Group Co Ltd
Original Assignee
New Ruipeng Pet Healthcare Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New Ruipeng Pet Healthcare Group Co Ltd filed Critical New Ruipeng Pet Healthcare Group Co Ltd
Priority to CN202210222588.XA priority Critical patent/CN114819474A/en
Publication of CN114819474A publication Critical patent/CN114819474A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/06Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/063Operations research, analysis or management
    • G06Q10/0639Performance analysis of employees; Performance analysis of enterprise or organisation operations
    • G06Q10/06393Score-carding, benchmarking or key performance indicator [KPI] analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/33Querying
    • G06F16/3331Query processing
    • G06F16/334Query execution
    • G06F16/3344Query execution using natural language analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • G06Q50/205Education administration or guidance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/26Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Human Resources & Organizations (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Administration (AREA)
  • Economics (AREA)
  • General Health & Medical Sciences (AREA)
  • Tourism & Hospitality (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Development Economics (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Marketing (AREA)
  • Educational Technology (AREA)
  • General Business, Economics & Management (AREA)
  • Quality & Reliability (AREA)
  • Data Mining & Analysis (AREA)
  • Operations Research (AREA)
  • Primary Health Care (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Game Theory and Decision Science (AREA)
  • Databases & Information Systems (AREA)
  • Acoustics & Sound (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The embodiments of the present application disclose a physician evaluation method and apparatus, an electronic device, and a storage medium. The method comprises the following steps: acquiring an evaluation video of a pet physician; determining a first video segment and a second video segment from the evaluation video, wherein the first video segment is a video segment of the pet physician's theoretical evaluation and the second video segment is a video segment of the pet physician's clinical operation evaluation; performing voice analysis on the first video segment to obtain a first evaluation score; extracting multiple frames of images to be identified from the second video segment, wherein the multiple frames of images to be identified record the operation process of one clinical evaluation item performed by the pet physician; performing action analysis on the multiple frames of images to be identified to obtain a second evaluation score; and determining the evaluation result of the pet physician according to the first evaluation score and the second evaluation score, thereby simplifying the evaluation process, saving resources, and improving the user experience.

Description

Physician evaluation method and device, electronic equipment and storage medium
Technical Field
The application relates to the technical field of video processing, in particular to a physician evaluation method, a physician evaluation device, electronic equipment and a storage medium.
Background
At present, the examinations of pet physicians are all conducted offline, and a pet physician must travel to a designated place to be evaluated. This requires a large investment of manpower and material resources, makes the evaluation inefficient, and gives users a poor evaluation experience.
Disclosure of Invention
The embodiments of the present application provide a physician evaluation method and apparatus, an electronic device, and a storage medium, which simplify the evaluation process and improve the user's evaluation experience through video-based evaluation.
In a first aspect, the present application provides a physician evaluation method, including:
acquiring an evaluation video of a pet physician;
determining a first video segment and a second video segment from the evaluation video, wherein the first video segment is a video segment for theoretical evaluation of a pet physician, and the second video segment is a video segment for clinical operation evaluation of the pet physician;
carrying out voice analysis on the first video segment to obtain a first evaluation score;
extracting the second video segment to obtain multiple frames of images to be identified, wherein the multiple frames of images to be identified record the operation process of one clinical evaluation item of a pet physician;
performing action analysis on multiple frames of images to be identified to obtain a second evaluation score;
and determining the evaluation result of the pet physician according to the first evaluation score and the second evaluation score.
In a second aspect, the present application provides a physician evaluation apparatus, including: an acquisition unit and a processing unit;
the acquisition unit is used for acquiring an evaluation video of a pet doctor;
the processing unit is configured to determine a first video segment and a second video segment from the evaluation video, wherein the first video segment is a video segment of the pet physician's theoretical evaluation, and the second video segment is a video segment of the pet physician's clinical operation evaluation;
carrying out voice analysis on the first video segment to obtain a first evaluation score;
extracting the second video segment to obtain multiple frames of images to be identified, wherein the multiple frames of images to be identified record the operation process of one clinical evaluation item of a pet physician;
performing action analysis on multiple frames of images to be identified to obtain a second evaluation score;
and determining the evaluation result of the pet physician according to the first evaluation score and the second evaluation score.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor and a memory, the processor being coupled to the memory, wherein the memory is configured to store a computer program and the processor is configured to execute the computer program stored in the memory, so as to cause the electronic device to perform the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, where a computer program is stored, and the computer program causes a computer to execute the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform the method of the first aspect.
The embodiment of the application has the following beneficial effects:
It can be seen that, in the embodiment of the present application, the evaluation video of the pet physician is acquired; a first video segment and a second video segment are determined from the evaluation video, wherein the first video segment is a video segment of the pet physician's theoretical evaluation and the second video segment is a video segment of the pet physician's clinical operation evaluation; voice analysis is then performed on the first video segment to obtain a first evaluation score; multiple frames of images to be identified are extracted from the second video segment, the multiple frames of images to be identified recording the operation process of one clinical evaluation item performed by the pet physician; action analysis is then performed on the multiple frames of images to be identified to obtain a second evaluation score; and finally, the evaluation result of the pet physician is obtained according to the first evaluation score and the second evaluation score. In this video-based evaluation mode, the pet physician's evaluation video is processed to obtain the corresponding evaluation result, so that the evaluation process is simplified, resources are saved, and the user experience is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present application, and that those skilled in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1 is a schematic diagram of a physician evaluation system provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a physician evaluation method provided in an embodiment of the present application;
fig. 3 is a flowchart illustrating a method for obtaining similarity according to first text information and a first standard answer text according to an embodiment of the present disclosure;
FIG. 4a is a schematic diagram of a first candidate image according to an embodiment of the present disclosure;
fig. 4b is a schematic diagram of an outline image of an operation portion according to an embodiment of the present application;
FIG. 4c is a schematic diagram of a first location point according to an embodiment of the present disclosure;
FIG. 5 is a block diagram illustrating functional units of a physician evaluation apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are they separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by those skilled in the art that the embodiments described herein can be combined with other embodiments.
Referring to fig. 1, fig. 1 is a schematic diagram of a physician evaluation system according to an embodiment of the present application. The physician evaluation system comprises a pet hospital terminal 10 and a physician evaluation device 20.
Illustratively, the physician evaluation apparatus 20 may obtain an evaluation video of a pet physician from the pet hospital terminal 10, where the evaluation video covers the theoretical evaluation stage and the clinical operation evaluation stage of the pet physician. The evaluation video may be shot by the pet physician and then uploaded to the pet hospital terminal 10, or may be collected by the pet hospital using unified camera equipment, which is not limited herein.
Correspondingly, the physician evaluation apparatus 20 performs segmentation processing on the evaluation video to obtain a first video segment and a second video segment, wherein the first video segment is used for theoretical evaluation, that is, the first video segment is formed by a theoretical evaluation phase of a pet physician, and the second video segment is used for clinical operation evaluation, that is, the second video segment is formed by a clinical operation evaluation phase of the pet physician.
Then, the physician evaluation apparatus 20 performs voice analysis on the first video segment to obtain a first evaluation score, and extracts multiple frames of images to be identified from the second video segment, wherein the multiple frames of images to be identified record the operation process of one clinical evaluation item performed by the pet physician. It should be noted that the voice analysis of the first video segment and the extraction of the second video segment may be performed simultaneously; alternatively, the voice analysis of the first video segment may be performed first and then the extraction of the second video segment, or the extraction of the second video segment may be performed first and then the voice analysis of the first video segment; the execution order is not limited in the present application. Then, action analysis is performed on the multiple frames of images to be identified to obtain a second evaluation score; finally, the evaluation result of the pet physician is determined according to the first evaluation score and the second evaluation score.
It can be seen that, in the embodiment of the present application, the evaluation video of the pet physician is acquired; a first video segment and a second video segment are determined from the evaluation video, wherein the first video segment is a video segment of the pet physician's theoretical evaluation and the second video segment is a video segment of the pet physician's clinical operation evaluation; voice analysis is then performed on the first video segment to obtain a first evaluation score; multiple frames of images to be identified are extracted from the second video segment, the multiple frames of images to be identified recording the operation process of one clinical evaluation item performed by the pet physician; action analysis is then performed on the multiple frames of images to be identified to obtain a second evaluation score; and finally, the evaluation result of the pet physician is obtained according to the first evaluation score and the second evaluation score. In this video-based evaluation mode, the pet physician's evaluation video is processed to obtain the corresponding evaluation result, so that the evaluation process is simplified, resources are saved, and the user experience is improved.
Referring to fig. 2, fig. 2 is a schematic flowchart of a physician evaluation method according to an embodiment of the present application. The method is applied to the above-described physician evaluation apparatus 20. The method includes, but is not limited to, steps 201-206:
201: and obtaining an evaluation video of a pet doctor.
The evaluation video is obtained by shooting the theoretical evaluation stage and the clinical operation evaluation stage of the pet physician, so the evaluation video includes both the theoretical evaluation stage and the clinical operation evaluation stage of the pet physician.
202: and determining a first video segment and a second video segment from the evaluation video.
The first video segment is used for theoretical evaluation, namely the first video segment is obtained by shooting a theoretical evaluation stage of a pet physician, and the second video segment is used for clinical operation evaluation, namely the second video segment is obtained by shooting a clinical operation evaluation stage of the pet physician.
203: and carrying out voice analysis on the first video segment to obtain a first evaluation score.
In an embodiment of the present application, performing voice analysis on the first video segment to obtain the first evaluation score specifically includes: performing voice recognition on the first video segment to obtain voice stream data; dividing the voice stream data according to the preset answer time of each question in the theoretical evaluation to obtain at least one sub voice stream, wherein each sub voice stream corresponds to one question in the theoretical evaluation; converting the at least one sub voice stream to obtain at least one piece of text information, the pieces of text information corresponding one-to-one to the sub voice streams; obtaining a similarity according to first text information and a first standard answer text, wherein the first text information is any one of the at least one piece of text information, and the first standard answer text is the correct answer to the question corresponding to the sub voice stream corresponding to the first text information; and determining the first evaluation score according to the similarity corresponding to the first text information, that is, determining the pet physician's first evaluation score for the question.
Illustratively, the first video segment is input into a speech recognition model, and a speech data stream is extracted from the first video segment; the speech data stream may also be understood as audio data. Then, the speech data stream is divided according to the preset answer time of each question in the theoretical evaluation to obtain at least one sub voice stream.
For example, suppose 15 questions are set in the theoretical evaluation: 7 multiple-choice questions, 5 true/false questions, 2 short-answer questions, and 1 case-analysis question, in that order. An answer time is then preset for each question of each type, and this answer time may include the time taken to play the question; for example, the answer time of a multiple-choice question may be set to 2 minutes, that of a true/false question to 2 minutes, that of a short-answer question to 10 minutes, and that of a case-analysis question to 15 minutes. The answer times of the 15 questions can then be summarized as [2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 10, 10, 15], and the voice data stream is divided according to this set of answer times to obtain 15 sub voice streams.
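The per-question segmentation described above can be sketched as follows. This is an illustrative helper (not part of the patent) that turns the preset answer times into the cut points, in seconds, at which the recognized voice stream would be divided into sub voice streams:

```python
def answer_time_cut_points(answer_minutes):
    """Return one (start_sec, end_sec) pair per question, in order."""
    segments, start = [], 0
    for minutes in answer_minutes:
        end = start + minutes * 60
        segments.append((start, end))
        start = end
    return segments

# 7 multiple-choice + 5 true/false questions at 2 min each,
# 2 short-answer questions at 10 min, 1 case-analysis question at 15 min
times = [2] * 12 + [10, 10, 15]
cuts = answer_time_cut_points(times)  # 15 (start, end) ranges
```

An actual implementation would then slice the audio data between each pair of cut points and feed every slice to speech-to-text separately.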
Further, the 15 sub voice streams are converted to obtain 15 pieces of text information. Taking the first sub voice stream as an example, it is converted to obtain its text information; natural language processing technology may be used to recognize the first sub voice stream and obtain the text information. The remaining sub voice streams are converted in the same way, yielding 15 pieces of text information in one-to-one correspondence with the 15 sub voice streams.
After the text information is obtained, the embodiment of the present application provides a method for obtaining the similarity according to the first text information and the first standard answer text. As shown in fig. 3, the method includes steps 301-309:
301: and analyzing the first text information to obtain a first question text and a first answering text.
The first text information is any one of the at least one piece of text information, and the first answer text is the pet physician's answer content for the corresponding question. Continuing with the example of the 15 pieces of text information, the first text information is any one of them, and it is parsed to obtain a first topic text and a first answer text. If the question corresponding to the first text information is multiple-choice question 2, the first answer text is the pet physician's answer content for multiple-choice question 2. In addition, since the preset answer time includes the time taken to play the question, that playing time can also be preset; for example, the answer time of a multiple-choice question is preset to 2 minutes, of which 1 minute is taken to play the question. The first text information is then parsed, and the text before the end time of playing the question is taken as the played question text, that is, the first topic text.
302: determining whether the type of the first topic text is a subjective topic or an objective topic.
Illustratively, step 303 is performed when the first topic text is an objective topic, and step 304 is performed when the first topic text is a subjective topic.
For example, the type of the first topic text is determined, that is, whether the question corresponding to the first topic text is an objective question or a subjective question. Continuing the above example, the objective questions may include the 7 multiple-choice questions and the 5 true/false questions, and the subjective questions may include the 2 short-answer questions and the 1 case-analysis question.
303: if the first answer text is the same as the first standard answer text, determining that the similarity is 1; and if the first answer text is not the same as the first standard answer text, determining that the similarity is 0.
The first standard answer text is the correct answer to the question corresponding to the sub voice stream corresponding to the first text information; for example, if the question corresponding to the first text information is multiple-choice question 2, the first standard answer text is the correct answer to multiple-choice question 2.
Illustratively, when the first topic text is an objective question, for example multiple-choice question 1, the recognized first answer text is [A]. The correct answer to multiple-choice question 1 is then obtained; if that correct answer, that is, the first standard answer text, is [A], the similarity is determined to be 1; if the correct answer to multiple-choice question 1 is not [A], the similarity is determined to be 0.
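As a minimal sketch of this objective-question rule (step 303), the comparison can be written as an exact match; the whitespace stripping and upper-casing of option letters are robustness assumptions added here, not requirements stated in the application:

```python
def objective_similarity(answer_text, standard_text):
    """Similarity is 1 on an exact match with the standard answer, else 0."""
    return 1 if answer_text.strip().upper() == standard_text.strip().upper() else 0
```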
304: and extracting keywords from the first answer text to obtain a plurality of semantic keywords and a plurality of professional term keywords.
In the embodiment of the present application, keyword extraction may be performed on the first answer text based on the TF-IDF algorithm and the TextRank algorithm in the jieba word segmentation toolkit, which is not limited in this application.
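The application names jieba's TF-IDF and TextRank extractors; as a self-contained illustration of the TF-IDF idea only (not the jieba implementation, and using whitespace tokenization instead of Chinese word segmentation), one can rank an answer's tokens by term frequency weighted against how common each token is in a reference corpus:

```python
import math
from collections import Counter

def tfidf_keywords(doc, corpus, top_k=3):
    """Rank the tokens of `doc` by TF-IDF against `corpus`; return the top_k."""
    tokens = doc.split()
    tf = Counter(tokens)
    n_docs = len(corpus)

    def idf(term):
        # Smoothed inverse document frequency over the reference corpus
        df = sum(1 for d in corpus if term in d.split())
        return math.log((1 + n_docs) / (1 + df)) + 1

    scores = {t: (c / len(tokens)) * idf(t) for t, c in tf.items()}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]]

corpus = ["the pancreas is an organ", "the dog ate food", "the cat slept"]
keywords = tfidf_keywords("the pancreatitis caused vomiting", corpus)
```

Rare, content-bearing tokens such as "pancreatitis" outrank ubiquitous ones such as "the", which is the property the keyword-extraction step relies on.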
Illustratively, when the first topic text is a subjective question, for example short-answer question 1, the first answer text is recognized as: "first / the pancreas has both exocrine and endocrine functions / because pancreatic function is specific and complex / pancreatic disease influences digestive and metabolic functions / pancreatitis is one of the digestive system diseases that is especially common in dogs and cats / it is mainly divided into acute and chronic types / then / the common causes of acute pancreatitis are / overeating / ingesting too much greasy food / spoiled food / a sudden change of food / drug action / the clinical symptoms of acute pancreatitis mainly manifest as / anorexia / epigastric pain / nausea / vomiting / etc. / the treatment measures for acute pancreatitis can be / supportive therapy / inhibition of exocrine pancreatic secretion / improvement of pancreatic blood circulation". Keyword extraction is then performed on the first answer text to obtain a plurality of semantic keywords: "pancreas / function / disease / acute / chronic / cause / clinical / symptom / treatment / measure", and a plurality of professional term keywords: "acute pancreatitis / overeating / drug action / anorexia / nausea / vomiting / supportive therapy / inhibition of exocrine pancreatic secretion / improvement of pancreatic blood circulation".
305: and extracting keywords from the first standard answer text to obtain a standard semantic keyword library and a standard professional term keyword library.
The standard semantic keyword library is composed of a plurality of standard semantic keywords in a first standard answer text, and the standard professional term keyword library is composed of a plurality of standard professional term keywords in the first standard answer text.
Illustratively, according to the method for extracting keywords from the first answer text to obtain a plurality of semantic keywords and a plurality of professional term keywords, keyword extraction is performed on the first standard answer text to obtain a standard semantic keyword library composed of a plurality of standard semantic keywords: "pancreas / acute / cause / symptom / diagnosis / treatment / measure", and a standard professional term keyword library composed of a plurality of standard professional term keywords: "overeating / eating too much greasy food / frozen food / spoiled food / biliary tract disease / surgery / drug action / metabolic disorder / anorexia / loss of appetite / epigastric pain / tense abdominal wall / sensitivity / nausea / vomiting / gastrointestinal bleeding / jaundice / enteroparalysis / intestinal infarction / peritonitis / blood test / enzyme assay / imaging test / supportive therapy / inhibition of exocrine pancreatic secretion / inhibition of pancreatic enzyme activity / improvement of pancreatic blood circulation / control of infection".
306: and screening the plurality of semantic keywords in a standard semantic keyword library to obtain at least one candidate semantic keyword.
In the embodiment of the present application, each of the plurality of semantic keywords is screened against the standard semantic keyword library to obtain at least one candidate semantic keyword; that is, if a semantic keyword can be found in the standard semantic keyword library, it is determined to be a candidate semantic keyword. For example, the obtained candidate semantic keywords are "pancreas / acute / cause / symptom / treatment / measure", that is, 6 candidate semantic keywords.
307: and screening the plurality of professional term keywords in a standard professional term keyword library to obtain at least one candidate professional term keyword.
Illustratively, each of the plurality of professional term keywords is screened against the standard professional term keyword library to obtain at least one candidate professional term keyword; that is, if a professional term keyword can be found in the standard professional term keyword library, it is determined to be a candidate professional term keyword. For example, the obtained candidate professional term keywords are "acute pancreatitis / overeating / drug action / anorexia / nausea / vomiting / supportive therapy / inhibition of exocrine pancreatic secretion / improvement of pancreatic blood circulation", that is, 9 candidate professional term keywords.
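Steps 306 and 307 can be sketched with one screening helper; modeling each standard keyword library as a set is an assumption of this sketch (the application only requires that the keyword can be found in the library):

```python
def screen_keywords(extracted, standard_library):
    """Keep only the extracted keywords that appear in the standard library."""
    library = set(standard_library)
    return [kw for kw in extracted if kw in library]

semantic = ["pancreas", "function", "acute", "cause", "symptom"]
standard_semantic = ["pancreas", "acute", "cause", "symptom",
                     "diagnosis", "treatment", "measure"]
candidate_semantic = screen_keywords(semantic, standard_semantic)
# "function" is filtered out because it is absent from the standard library
```

The same helper applies unchanged to the professional term keywords and the standard professional term keyword library.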
308: a first ratio of the number of candidate semantic keywords to the number of standard semantic keywords and a second ratio of the number of candidate term keywords to the number of standard term keywords are obtained.
309: and determining the similarity according to the first ratio and the second ratio.
Wherein, the similarity can be obtained by formula (1):
K = P × M + Q × N        (1)
wherein, K is the similarity, M is the first ratio, N is the second ratio, P is the first weight coefficient, and Q is the second weight coefficient.
It should be noted that the value interval of the similarity obtained according to formula (1) is [0, 1]: the larger the number of candidate semantic keywords, the larger the first ratio, and the larger the number of candidate professional term keywords, the larger the second ratio. The closer the answer content is to the standard answer, the larger the value of the similarity.
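Reading formula (1) as the weighted sum K = P·M + Q·N, with P + Q = 1 so that K stays in [0, 1] as stated above (the specific weight values below are an assumption of this sketch):

```python
def similarity(m, n, p=0.5, q=0.5):
    """K = P*M + Q*N; with P + Q = 1 and M, N in [0, 1], K lies in [0, 1]."""
    return p * m + q * n

# Illustrative ratios from the worked example: M = 6/7 candidate semantic
# keywords; N taken as 9 candidate term keywords over an assumed library size
k = similarity(6 / 7, 9 / 28)
```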
Based on this, in the embodiment of the present application, there is provided a method for determining a first evaluation score according to a similarity corresponding to first text information, the method including steps S1-S3:
S1: and acquiring a first topic score corresponding to the similarity.
Illustratively, if the question corresponding to the first question text is the 1st choice question, a first topic score of the 1st choice question is obtained, for example, 2 points; if the question corresponding to the first question text is the 1st brief-answer question, a first topic score of the 1st brief-answer question is obtained, for example, 8 points.
S2: and taking the product of the similarity and the score of the first topic as a first score corresponding to the first text information.
Illustratively, if the question corresponding to the first question text is the 1st choice question and the first answer text is the same as the first standard answer text, the similarity is determined to be 1, and the product of the similarity and the first topic score, namely "1 × 2", is taken as the first score. If the question corresponding to the first question text is the 1st brief-answer question and the obtained similarity is K1, the first score is "8 × K1".
S3: and summing a plurality of first scores corresponding to the plurality of text messages to obtain a first evaluation score.
Exemplarily, the method of obtaining a first score from the similarity corresponding to the first text information and the first topic score is applied to the plurality of similarities and the plurality of topic scores corresponding to the plurality of pieces of text information, so as to obtain a plurality of first scores; the plurality of first scores are then summed to serve as the first evaluation score.
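Steps S1-S3 reduce to a per-question product followed by a sum. A minimal sketch, using the illustrative scores and similarities from the text:

```python
def first_evaluation_score(similarities, topic_scores):
    """Steps S1-S3: each question's first score is similarity x topic score;
    the first evaluation score is the sum over all questions."""
    return sum(k * s for k, s in zip(similarities, topic_scores))

# One choice question answered correctly (K = 1, worth 2 points) and one
# brief-answer question with similarity 0.96 (worth 8 points); the values
# are illustrative.
print(first_evaluation_score([1, 0.96], [2, 8]))  # 2 + 7.68 = 9.68
```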
204: and extracting the second video segment to obtain multiple frames of images to be identified.
Wherein, the multiple frames of images to be identified record the operation process of one clinical evaluation item performed by the pet physician.
Illustratively, the plurality of frames of images to be identified records the operation of the pet physician closing the pet wound.
It should be noted that the present application takes as an example the case where the pet physician is evaluated on only one clinical evaluation item in an evaluation process. In practical applications, the pet physician may be evaluated on a plurality of clinical evaluation items; in that case, a plurality of second video segments may be identified, each second video segment recording the operation process of one clinical evaluation item. The processing of each second video segment is similar to that in step 204 and will not be repeated.
205: and performing action analysis on the multiple frames of images to be recognized to obtain a second evaluation score.
In an embodiment of the present application, a method for performing motion analysis on multiple frames of images to be recognized to obtain a second evaluation score is provided, and the method specifically includes steps A1-A3:
A1: and performing three-dimensional reconstruction on the multiple frames of images to be identified to obtain a three-dimensional target image.
Wherein the three-dimensional target image includes the operation part in the clinical evaluation item. In an embodiment of the present application, three-dimensional reconstruction is performed on the multiple frames of images to be identified to obtain a three-dimensional target image, and the method specifically includes:
screening the multiple frames of images to be identified to obtain at least one candidate image, wherein a candidate image is an image containing the whole operation part; performing contour scanning on a first candidate image to obtain an operation part contour image corresponding to the first candidate image, wherein the first candidate image is any one of the at least one candidate image; determining a first position point from the operation part contour image, wherein the first position point is a contact point between the pet physician's hand or hand-held instrument and the operation part in the clinical evaluation item; determining at least one first position point from the at least one operation part contour image corresponding to the at least one candidate image, the at least one first position point corresponding one-to-one to the at least one candidate image; sequentially connecting the at least one first position point according to the time sequence of the at least one candidate image, and determining the depth information of the operation part; and obtaining the three-dimensional target image according to the depth information and the operation part contour image.
For ease of understanding, the present application is described by taking the suturing of a pet's abdomen as an example. Because the multiple frames of images to be identified record the operation process of one clinical evaluation item of the pet physician, that is, the whole process of the pet physician suturing the pet's abdominal wound, the multiple frames of images to be identified need to be screened to obtain at least one candidate image containing all the features of the operation part, i.e., containing the pet's abdominal wound; for example, a first candidate image as shown in fig. 4a may be screened out. Contour scanning is then performed on the first candidate image to obtain a contour image of the operation part, namely a contour image of the wound site on the pet's abdomen, as shown in fig. 4b. Next, a first position point, which is the contact point between the suturing instrument held by the pet physician and the wound site, is determined from the contour image of the wound site, as shown in fig. 4c. At least one first position point is then determined from the at least one wound site contour image corresponding to the at least one candidate image, following the method of determining a first position point from a wound site contour image; the at least one first position point is then connected sequentially according to the time sequence of the at least one candidate image to determine the depth information of the wound site. Finally, the three-dimensional target image is constructed according to the depth of the wound site and the wound site contour image.
A2: and marking the three-dimensional target image to obtain a plurality of key points aiming at the operation part.
In the embodiment of the application, the contact points between the pet physician's hand or hand-held instrument and the operation part in the clinical evaluation item are marked in the three-dimensional target image, and the plurality of marked contact points are determined as the plurality of key points for the operation part.
Illustratively, a plurality of contact points of a suturing implement held by a pet physician and the abdomen of the pet in the three-dimensional target image are identified, and the contact points are determined as a plurality of key points.
A3: and obtaining a second evaluation score according to the plurality of key points.
Based on this, in the embodiments of the present application, there is provided a method for obtaining a second evaluation score according to a plurality of key points, the method specifically includes steps B1-B6:
B1: and establishing a spatial rectangular coordinate system by taking the center of the operation part as the origin of coordinates.
For example, the center of the wound site may be found according to the depth of the wound site and the wound site contour image; a spatial rectangular coordinate system is then established with the center of the wound site as the coordinate origin, the direction pointing straight ahead from the center parallel to the ground as the X-axis, the direction pointing horizontally to the right parallel to the ground as the Y-axis, and the direction pointing straight up perpendicular to the ground as the Z-axis.
B2: and acquiring the coordinates of the first key point in the rectangular space coordinate system and the coordinates of the reference key point.
The first key point is any one of the plurality of key points, and the reference key point is the reference position at the operation part corresponding to the first key point;
illustratively, in the process of suturing the pet's abdominal wound, in order to achieve a better wound healing effect in practical application, there are reference positions for the suture starting point, the suture ending point and each suture point, that is, reference positions for the plurality of key points. Therefore, the coordinates of the first key point and the coordinates of the reference key point are obtained in the spatial rectangular coordinate system, the coordinates of the reference key point being the preset reference position at the operation part corresponding to the first key point.
B3: and calculating the difference distance between the first key point and the reference key point according to the coordinates of the first key point and the coordinates of the reference key point.
B4: at least one outlier keypoint is obtained from the plurality of keypoints.
And the difference distance corresponding to any abnormal key point in the at least one abnormal key point is greater than or equal to a threshold value.
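Steps B2-B4 can be sketched as follows. The application does not name the distance metric, so Euclidean distance in the spatial rectangular coordinate system is assumed here:

```python
import math

def difference_distance(keypoint, reference):
    """Steps B2-B3: distance between a key point and its reference position,
    assumed Euclidean, computed from their coordinates (x, y, z)."""
    return math.dist(keypoint, reference)

def abnormal_keypoints(keypoints, references, threshold):
    """Step B4: a key point is abnormal if its difference distance from the
    corresponding reference key point is greater than or equal to the threshold."""
    return [kp for kp, ref in zip(keypoints, references)
            if difference_distance(kp, ref) >= threshold]

# Hypothetical suture points: the first is within tolerance, the second is not.
keypoints = [(0.0, 0.1, 0.0), (1.0, 0.0, 0.5)]
references = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(abnormal_keypoints(keypoints, references, threshold=0.3))  # [(1.0, 0.0, 0.5)]
```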
B5: and calculating the completion degree of the pet physician according to the number of the abnormal key points and the first time.
The completion degree represents how well the pet physician completed the clinical evaluation item, and the first time is the time difference between the first moment corresponding to the last frame of the images to be identified and the second moment corresponding to the first frame of the images to be identified.
Wherein, the completeness can be obtained by formula (2):
W = i × c^x + j × d^t (2)
wherein W is the completion degree, 0 < c < 1, 0 < d < 1, x is the number of abnormal key points, t is the first time in seconds, i is the third weight coefficient, and j is the fourth weight coefficient.
It should be noted that the value interval of the completion degree obtained according to formula (2) is [0,1]; the larger the number of abnormal key points and the longer the first time, the smaller the value of the completion degree obtained by the pet physician.
B6: and taking the product of the completion degree and the second topic score as a second evaluation score.
The second topic score is the score corresponding to the clinical evaluation item. Illustratively, if the score of the clinical evaluation item of suturing a pet's abdominal wound is 30, i.e., the second topic score is 30, the second evaluation score is "W × 30".
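Steps B5-B6 can be sketched as below. The exponential-decay form of the completion degree and all parameter values are assumptions chosen to satisfy the stated constraints (value interval [0,1], decreasing in both the number of abnormal key points and the first time); they are not fixed by the application:

```python
def completeness(x, t, c=0.9, d=0.99, i=0.5, j=0.5):
    """Assumed reading of formula (2): W = i*c**x + j*d**t.

    With i + j = 1 and 0 < c, d < 1, W stays in [0, 1] and decreases as the
    number of abnormal key points x grows or the first time t (seconds) grows."""
    return i * c ** x + j * d ** t

def second_evaluation_score(w, topic_score=30):
    """Step B6: the second evaluation score is completion degree x topic score."""
    return w * topic_score

# 2 abnormal key points, 120 seconds of operation (illustrative values).
w = completeness(x=2, t=120)
print(round(second_evaluation_score(w), 2))
```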
206: and determining the evaluation result of the pet physician according to the first evaluation score and the second evaluation score.
In an embodiment of the application, the first evaluation score and the second evaluation score are summed to obtain a target evaluation score; and then determining the evaluation result of the pet physician according to the target evaluation score. For example, if the target evaluation score is in the first interval, the evaluation result of the pet physician is excellent; if the target evaluation score is in the second interval, the evaluation result of the pet physician is good; if the target evaluation score is in the third interval, the evaluation result of the pet physician is qualified; and if the target evaluation score is in the fourth interval, the evaluation result of the pet physician is unqualified, namely the evaluation is not passed.
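The mapping from target evaluation score to evaluation result in step 206 can be sketched as an interval lookup. The interval boundaries below are illustrative assumptions; the application leaves the four intervals unspecified:

```python
def evaluation_result(first_score, second_score,
                      bands=((90, "excellent"), (75, "good"), (60, "qualified"))):
    """Step 206: sum the two evaluation scores, then map the target
    evaluation score to a result by interval (boundaries assumed)."""
    total = first_score + second_score
    for floor, grade in bands:
        if total >= floor:
            return grade
    return "unqualified"  # fourth interval: the evaluation is not passed

print(evaluation_result(70, 25))  # 95 falls in the first interval: "excellent"
```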
It can be seen that, in the embodiment of the application, the evaluation video of the pet physician is acquired; a first video segment and a second video segment are determined from the evaluation video, the first video segment being a video segment of the pet physician's theoretical evaluation and the second video segment being a video segment of the pet physician's clinical operation evaluation; voice analysis is then performed on the first video segment to obtain a first evaluation score; the second video segment is extracted to obtain multiple frames of images to be identified, which record the operation process of one clinical evaluation item of the pet physician; motion analysis is then performed on the multiple frames of images to be identified to obtain a second evaluation score; and finally the evaluation result of the pet physician is obtained according to the first evaluation score and the second evaluation score. In this video-based evaluation mode, the evaluation video of the pet physician is processed to obtain the corresponding evaluation result, which simplifies the evaluation process, saves resources, and improves the user experience.
Referring to fig. 5, fig. 5 is a block diagram illustrating functional units of a physician evaluation apparatus according to an embodiment of the present disclosure. The physician evaluation apparatus 500 includes: an acquisition unit 501 and a processing unit 502;
the acquisition unit 501 is used for acquiring an evaluation video of a pet physician;
the pet evaluation device comprises a processing unit 502, a storage unit and a display unit, wherein the processing unit is used for determining a first video segment and a second video segment from an evaluation video, the first video segment is a video segment for theoretical evaluation of a pet physician, and the second video segment is a video segment for clinical operation evaluation of the pet physician;
carrying out voice analysis on the first video segment to obtain a first evaluation score;
extracting the second video segment to obtain multiple frames of images to be identified, wherein the multiple frames of images to be identified record the operation process of one clinical evaluation item of a pet physician;
performing action analysis on multiple frames of images to be identified to obtain a second evaluation score;
and determining the evaluation result of the pet physician according to the first evaluation score and the second evaluation score.
In an embodiment of the application, in terms of performing a speech analysis on the first video segment to obtain the first evaluation score, the processing unit 502 is specifically configured to:
performing voice recognition on the first video segment to obtain a voice data stream;
dividing the voice data stream according to preset answering time of each question in the theoretical evaluation to obtain at least one sub-voice stream, wherein each sub-voice stream in the at least one sub-voice stream corresponds to each question in the theoretical evaluation one by one;
converting at least one sub voice stream to obtain at least one text message, wherein the at least one text message corresponds to the at least one sub voice stream one to one;
obtaining similarity according to first text information and a first standard answer text, wherein the first text information is any one of at least one text information, and the first standard answer text is a correct answer of a question corresponding to the first text information;
and determining a first evaluation score according to the similarity corresponding to the first text information.
In an embodiment of the application, in terms of obtaining the similarity according to the first text information and the first standard answer text, the processing unit 502 is specifically configured to:
analyzing the first text information to obtain a first question text and a first answering text;
when the first question text is an objective question and the first answer text is the same as the first standard answer text, determining that the similarity is 1; and if the first answer text is not the same as the first standard answer text, determining that the similarity is 0.
In an embodiment of the application, in terms of obtaining the similarity according to the first text information and the first standard answer text, the processing unit 502 is specifically configured to:
analyzing the first text information to obtain a first question text and a first answering text;
when the first question text is a subjective question, extracting keywords from the first answer text to obtain a plurality of semantic keywords and a plurality of professional term keywords;
extracting keywords from the first standard answer text to obtain a standard semantic keyword library and a standard professional term keyword library, wherein the standard semantic keyword library is composed of a plurality of standard semantic keywords in the first standard answer text, and the standard professional term keyword library is composed of a plurality of standard professional term keywords in the first standard answer text;
screening a plurality of semantic keywords in a standard semantic keyword library to obtain at least one candidate semantic keyword;
screening a plurality of professional term keywords in a standard professional term keyword library to obtain at least one candidate professional term keyword;
acquiring a first ratio of the number of candidate semantic keywords to the number of standard semantic keywords and a second ratio of the number of candidate professional term keywords to the number of standard professional term keywords;
determining similarity according to the first ratio and the second ratio;
wherein, the similarity can be obtained by formula (3):
K = P × M + Q × N (3)
wherein, K is the similarity, M is the first ratio, N is the second ratio, P is the first weight coefficient, and Q is the second weight coefficient.
In an embodiment of the application, in determining the first evaluation score according to the similarity corresponding to the first text information, the processing unit 502 is specifically configured to:
acquiring a first topic score corresponding to the similarity;
taking the product of the similarity and the first topic score as a first score corresponding to the first text information;
and summing a plurality of first scores corresponding to the plurality of text messages to obtain a first evaluation score.
In an embodiment of the present application, in terms of performing motion analysis on multiple frames of images to be recognized to obtain a second evaluation score, the processing unit 502 is specifically configured to:
performing three-dimensional reconstruction on the multiple frames of images to be identified to obtain a three-dimensional target image, wherein the three-dimensional target image comprises the operation part in the clinical evaluation item;
marking the three-dimensional target image to obtain a plurality of key points aiming at the operation part;
and obtaining a second evaluation score according to the plurality of key points.
In an embodiment of the application, in obtaining the second evaluation score according to a plurality of key points, the processing unit 502 is specifically configured to:
establishing a space rectangular coordinate system by taking the center of the operation part as the origin of coordinates;
acquiring coordinates of a first key point in a rectangular spatial coordinate system and coordinates of a reference key point, wherein the first key point is any one of a plurality of key points, and the reference key point is a reference position corresponding to the first key point at an operation part;
calculating a difference distance between the first key point and the reference key point according to the coordinates of the first key point and the coordinates of the reference key point;
acquiring at least one abnormal key point from the plurality of key points, wherein the difference distance corresponding to any one abnormal key point in the at least one abnormal key point is greater than or equal to a threshold value;
according to the number of the abnormal key points and the first time, the completion degree of the pet physician is calculated, the completion degree represents the completion condition of the pet physician for the clinical evaluation item, and the first time is the time difference between the first time corresponding to the last frame of image to be identified and the second time corresponding to the first frame of image to be identified in the multiple frames of images to be identified;
taking the product of the completion degree and the second question score as a second appraisal score, wherein the second question score is a score corresponding to a clinical appraisal project;
wherein, the completeness can be obtained by formula (4):
W = i × c^x + j × d^t (4)
wherein W is the completion degree, 0 < c < 1, 0 < d < 1, x is the number of abnormal key points, t is the first time in seconds, i is the third weight coefficient, and j is the fourth weight coefficient.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. As shown in fig. 6, the electronic device 600 includes a transceiver 601, a processor 602, and a memory 603, which are connected to each other by a bus 604. The memory 603 is used to store computer programs and data, and can transfer the data stored in the memory 603 to the processor 602.
The processor 602 is configured to read the computer program in the memory 603 and perform the following operations:
the control transceiver 601 acquires an evaluation video of a pet physician;
determining a first video segment and a second video segment from the evaluation video, wherein the first video segment is a video segment for theoretical evaluation of a pet physician, and the second video segment is a video segment for clinical operation evaluation of the pet physician;
carrying out voice analysis on the first video segment to obtain a first evaluation score;
extracting the second video segment to obtain multiple frames of images to be identified, wherein the multiple frames of images to be identified record the operation process of one clinical evaluation item of a pet physician;
performing action analysis on multiple frames of images to be identified to obtain a second evaluation score;
and determining the evaluation result of the pet physician according to the first evaluation score and the second evaluation score.
In an embodiment of the application, the processor 602 is specifically configured to perform the following steps in terms of performing a speech analysis on the first video segment to obtain the first evaluation score:
performing voice recognition on the first video segment to obtain a voice data stream;
dividing the voice data stream according to preset answering time of each question in the theoretical evaluation to obtain at least one sub-voice stream, wherein each sub-voice stream in the at least one sub-voice stream corresponds to each question in the theoretical evaluation one by one;
converting at least one sub voice stream to obtain at least one text message, wherein the at least one text message corresponds to the at least one sub voice stream one to one;
obtaining similarity according to the first text information and a first standard answer text, wherein the first text information is any one of at least one text information, and the first standard answer text is a correct answer of a question corresponding to the first text information;
and determining a first evaluation score according to the similarity corresponding to the first text information.
In an embodiment of the application, in obtaining the similarity according to the first text information and the first standard answer text, the processor 602 is specifically configured to perform the following steps:
analyzing the first text information to obtain a first question text and a first answering text;
when the first question text is an objective question and the first answer text is the same as the first standard answer text, determining that the similarity is 1; and if the first answer text is not the same as the first standard answer text, determining that the similarity is 0.
In an embodiment of the application, in obtaining the similarity according to the first text information and the first standard answer text, the processor 602 is specifically configured to perform the following steps:
analyzing the first text information to obtain a first question text and a first answering text;
when the first question text is a subjective question, extracting keywords from the first answer text to obtain a plurality of semantic keywords and a plurality of professional term keywords;
extracting keywords from the first standard answer text to obtain a standard semantic keyword library and a standard professional term keyword library, wherein the standard semantic keyword library is composed of a plurality of standard semantic keywords in the first standard answer text, and the standard professional term keyword library is composed of a plurality of standard professional term keywords in the first standard answer text;
screening a plurality of semantic keywords in a standard semantic keyword library to obtain at least one candidate semantic keyword;
screening a plurality of professional term keywords in a standard professional term keyword library to obtain at least one candidate professional term keyword;
acquiring a first ratio of the number of candidate semantic keywords to the number of standard semantic keywords and a second ratio of the number of candidate professional term keywords to the number of standard professional term keywords;
determining the similarity according to the first ratio and the second ratio, that is, according to the number of the candidate semantic keywords and the number of the candidate professional term keywords;
wherein, the similarity can be obtained by formula (5):
K = P × M + Q × N (5)
wherein, K is the similarity, M is the first ratio, N is the second ratio, P is the first weight coefficient, Q is the second weight coefficient.
In an embodiment of the application, in determining the first evaluation score according to the similarity corresponding to the first text information, the processor 602 is specifically configured to perform the following steps:
acquiring a first topic score corresponding to the similarity;
taking the product of the similarity and the first topic score as a first score corresponding to the first text information;
and summing a plurality of first scores corresponding to the plurality of text messages to obtain a first evaluation score.
In an embodiment of the present application, in terms of performing motion analysis on multiple frames of images to be recognized to obtain a second evaluation score, the processor 602 is specifically configured to perform the following steps:
performing three-dimensional reconstruction on the multiple frames of images to be identified to obtain a three-dimensional target image, wherein the three-dimensional target image comprises the operation part in the clinical evaluation item;
marking the three-dimensional target image to obtain a plurality of key points aiming at the operation part;
and obtaining a second evaluation score according to the plurality of key points.
In one embodiment of the present application, in obtaining the second evaluation score according to a plurality of key points, the processor 602 is specifically configured to perform the following steps:
establishing a space rectangular coordinate system by taking the center of the operation part as the origin of coordinates;
acquiring coordinates of a first key point in a space rectangular coordinate system and coordinates of a reference key point, wherein the first key point is any one of a plurality of key points, and the reference key point is a reference position corresponding to the first key point at an operation part;
calculating a difference distance between the first key point and the reference key point according to the coordinates of the first key point and the coordinates of the reference key point;
acquiring at least one abnormal key point from the plurality of key points, wherein the difference distance corresponding to any one abnormal key point in the at least one abnormal key point is greater than or equal to a threshold value;
according to the number of the abnormal key points and the first time, the completion degree of the pet physician is calculated, the completion degree represents the completion condition of the pet physician for the clinical evaluation item, and the first time is the time difference between the first time corresponding to the last frame of image to be identified and the second time corresponding to the first frame of image to be identified in the multiple frames of images to be identified;
taking the product of the completion degree and the second topic score as a second evaluation score, wherein the second topic score is a score corresponding to a clinical evaluation item;
wherein, the completeness can be obtained by formula (6):
W = i × c^x + j × d^t (6)
wherein W is the completion degree, 0 < c < 1, 0 < d < 1, x is the number of abnormal key points, t is the first time in seconds, i is the third weight coefficient, and j is the fourth weight coefficient.
Specifically, the transceiver 601 may be the obtaining unit 501 of the physician evaluation apparatus 500 of the embodiment of fig. 5, and the processor 602 may be the processing unit 502 of the physician evaluation apparatus 500 of the embodiment of fig. 5.
It should be understood that the electronic device in the present application may include a smart phone (e.g., an Android phone, an iOS phone, or a Windows phone), a tablet computer, a palm computer, a notebook computer, a Mobile Internet Device (MID), a wearable device, or the like. The above-mentioned electronic devices are only examples, not an exhaustive list. In practical applications, the electronic device may further include an intelligent vehicle-mounted terminal, computer equipment, and the like.
Embodiments of the present application also provide a computer-readable storage medium, which stores a computer program, where the computer program is executed by a processor to implement part or all of the steps of any one of the physician evaluation methods as described in the above method embodiments.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the physician assessment methods as described in the above method embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division into units is only a logical division, and an actual implementation may divide them differently; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical or take other forms.
Units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network nodes. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software program module.
If the integrated unit is implemented in the form of a software program module and sold or used as a stand-alone product, it may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware; the program may be stored in a computer-readable memory, which may include a flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and the like.
The foregoing embodiments have been described in detail, and specific examples are used herein to explain the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method of the present application and its core ideas. Meanwhile, a person skilled in the art may, following the idea of the present application, make changes to the specific embodiments and the scope of application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A physician assessment method, comprising:
acquiring an evaluation video of a pet physician;
determining a first video segment and a second video segment from the evaluation video, wherein the first video segment is a video segment of theoretical evaluation of the pet physician, and the second video segment is a video segment of clinical operation evaluation of the pet physician;
performing speech analysis on the first video segment to obtain a first evaluation score;
performing frame extraction on the second video segment to obtain multiple frames of images to be identified, wherein the multiple frames of images to be identified record the operation process of one clinical evaluation item of the pet physician;
performing action analysis on the multiple frames of images to be identified to obtain a second evaluation score;
and determining the evaluation result of the pet physician according to the first evaluation score and the second evaluation score.
2. The method of claim 1, wherein performing speech analysis on the first video segment to obtain the first evaluation score comprises:
performing voice recognition on the first video segment to obtain a voice data stream;
dividing the voice data stream according to a preset answering time of each question in the theoretical evaluation to obtain at least one sub-voice stream, wherein each sub-voice stream in the at least one sub-voice stream corresponds one-to-one to a question in the theoretical evaluation;
converting the at least one sub-voice stream to obtain at least one piece of text information, wherein the at least one piece of text information is in one-to-one correspondence with the at least one sub-voice stream;
obtaining a similarity according to first text information and a first standard answer text, wherein the first text information is any one of the at least one piece of text information, and the first standard answer text is the correct answer to the question corresponding to the first text information;
and determining the first evaluation score according to the similarity corresponding to the first text information.
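The segmentation step of claim 2 can be sketched as follows. This is a minimal illustration with toy data: it assumes the speech has already been recognized into timestamped words, and the function name `split_by_answer_times` is hypothetical, not part of the patent text.

```python
# Sketch of claim 2's division of a recognized voice stream by the preset
# answering time of each question. Input is assumed to be (timestamp, word)
# pairs produced by an upstream speech recognizer.

def split_by_answer_times(words, answer_times):
    """Partition (seconds, word) pairs into one answer text per question.

    answer_times[i] is the preset end time (in seconds) of question i.
    """
    texts, idx = [], 0
    for end in answer_times:
        chunk = []
        while idx < len(words) and words[idx][0] <= end:
            chunk.append(words[idx][1])
            idx += 1
        texts.append(" ".join(chunk))
    return texts

# Toy transcript for two questions ending at 30 s and 60 s.
words = [(5, "rabies"), (12, "vaccine"), (35, "fracture"), (50, "splint")]
print(split_by_answer_times(words, [30, 60]))
# → ['rabies vaccine', 'fracture splint']
```

Each resulting text corresponds one-to-one to a question, matching the claimed correspondence between sub-voice streams and questions.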
3. The method according to claim 2, wherein obtaining the similarity according to the first text information and the first standard answer text comprises:
analyzing the first text information to obtain a first question text and a first answer text;
when the first question text is an objective question, determining that the similarity is 1 if the first answer text is the same as the first standard answer text, and determining that the similarity is 0 if the first answer text is different from the first standard answer text.
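Claim 3's objective-question rule reduces to an exact-match check. A minimal sketch (the whitespace and case normalization is an added assumption, not stated in the claim):

```python
def objective_similarity(answer_text, standard_text):
    """Claim 3: similarity is 1 on an exact match, 0 otherwise.

    Normalizing whitespace and case before comparing is an illustrative
    assumption layered on top of the claim's "same/not the same" test.
    """
    return 1 if answer_text.strip().lower() == standard_text.strip().lower() else 0

print(objective_similarity(" B ", "b"))  # → 1
print(objective_similarity("A", "B"))    # → 0
```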
4. The method according to claim 2, wherein obtaining the similarity according to the first text information and the first standard answer text comprises:
analyzing the first text information to obtain a first question text and a first answering text;
when the first question text is a subjective question, extracting keywords from the first answer text to obtain a plurality of semantic keywords and a plurality of professional term keywords;
extracting keywords from the first standard answer text to obtain a standard semantic keyword library and a standard professional term keyword library, wherein the standard semantic keyword library is composed of a plurality of standard semantic keywords in the first standard answer text, and the standard professional term keyword library is composed of a plurality of standard professional term keywords in the first standard answer text;
screening the plurality of semantic keywords against the standard semantic keyword library to obtain at least one candidate semantic keyword;
screening the plurality of professional term keywords against the standard professional term keyword library to obtain at least one candidate professional term keyword;
acquiring a first ratio of the number of the candidate semantic keywords to the number of the standard semantic keywords and a second ratio of the number of the candidate professional term keywords to the number of the standard professional term keywords;
determining the similarity according to the first ratio and the second ratio;
wherein the similarity satisfies the following formula:
K = P·M + Q·N
wherein K is the similarity, M is the first ratio, N is the second ratio, P is a first weight coefficient, and Q is a second weight coefficient.
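Claim 4's subjective-question similarity can be sketched as below. The screening steps are simplified to set intersection over pre-extracted keyword lists, and the formula behind the image placeholder is assumed to be the weighted combination K = P·M + Q·N, consistent with the claim's variable definitions; the weights 0.4 and 0.6 are illustrative only.

```python
# Sketch of claim 4: keyword-overlap similarity for a subjective question,
# assuming K = P*M + Q*N (the claimed formula is an image placeholder, so
# this exact form is an assumption based on the variable definitions).

def subjective_similarity(semantic_kw, term_kw, std_semantic, std_terms,
                          P=0.4, Q=0.6):
    # Screen the answer's keywords against the standard keyword libraries.
    candidate_sem = set(semantic_kw) & set(std_semantic)
    candidate_term = set(term_kw) & set(std_terms)
    M = len(candidate_sem) / len(std_semantic)   # first ratio
    N = len(candidate_term) / len(std_terms)     # second ratio
    return P * M + Q * N                         # assumed weighted combination

k = subjective_similarity(["fever", "appetite"], ["parvovirus"],
                          ["fever", "appetite", "vomiting"],
                          ["parvovirus", "ELISA"])
# M = 2/3, N = 1/2, so K = 0.4*(2/3) + 0.6*(1/2)
```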
5. The method according to any one of claims 2-4, wherein determining the first evaluation score according to the similarity corresponding to the first text information comprises:
acquiring a first topic score corresponding to the similarity;
taking the product of the similarity and the first topic score as a first score corresponding to the first text information;
and summing a plurality of first scores corresponding to the plurality of text messages to obtain the first evaluation score.
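Claim 5's aggregation is a weighted sum: each question contributes similarity × topic score, and the per-question scores are summed. A minimal sketch:

```python
def first_evaluation_score(similarities, topic_scores):
    """Claim 5: sum of (similarity × topic score) over all questions."""
    return sum(s * p for s, p in zip(similarities, topic_scores))

# Two questions worth 10 points each, with similarities 1.0 and 0.5.
print(first_evaluation_score([1.0, 0.5], [10, 10]))  # → 15.0
```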
6. The method according to any one of claims 1 to 5, wherein performing action analysis on the multiple frames of images to be identified to obtain the second evaluation score comprises:
performing three-dimensional reconstruction on the multiple frames of images to be identified to obtain a three-dimensional target image, wherein the three-dimensional target image comprises an operation part involved in the clinical evaluation item;
marking the three-dimensional target image to obtain a plurality of key points aiming at the operation part;
and obtaining the second evaluation score according to the plurality of key points.
7. The method of claim 6, wherein obtaining the second evaluation score according to the plurality of key points comprises:
establishing a spatial rectangular coordinate system with the center of the operation part as the origin of coordinates;
acquiring coordinates of a first key point in the rectangular spatial coordinate system and coordinates of a reference key point, wherein the first key point is any one of the key points, and the reference key point is a reference position corresponding to the first key point at the operation part;
calculating a difference distance between the first key point and the reference key point according to the coordinates of the first key point and the coordinates of the reference key point;
acquiring at least one abnormal key point from the plurality of key points, wherein the difference distance corresponding to any one abnormal key point in the at least one abnormal key point is greater than or equal to a threshold value;
calculating a completion degree of the pet physician according to the number of the abnormal key points and a first time, wherein the completion degree characterizes how well the pet physician completed the clinical evaluation item, and the first time is the time difference between a first moment corresponding to the last frame and a second moment corresponding to the first frame of the multiple frames of images to be identified;
taking the product of the completion degree and a second topic score as the second evaluation score, wherein the second topic score is the score corresponding to the clinical evaluation item;
wherein the completion satisfies the following formula:
W = i·c^x + j·d^t
wherein W is the completion degree, 0 < c < 1, 0 < d < 1, x is the number of the abnormal key points, t is the first time in seconds, i is a third weight coefficient, and j is a fourth weight coefficient.
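Claims 6–7 can be sketched as below, assuming the key points and their reference positions are already available as 3D coordinates. The completion formula behind the image placeholder is assumed to take the form W = i·c^x + j·d^t, which is consistent with the variable definitions (0 < c < 1, 0 < d < 1, x the abnormal-key-point count, t the elapsed seconds); all default parameter values here are illustrative.

```python
# Sketch of claim 7: count key points whose distance to their reference
# position meets the threshold, then score via an assumed completion formula
# W = i*c**x + j*d**t (the claimed formula is an image placeholder).
import math

def second_evaluation_score(keypoints, references, threshold, t_seconds,
                            topic_score, c=0.9, d=0.99, i=0.7, j=0.3):
    # A key point is abnormal when its Euclidean distance to the
    # corresponding reference position is >= the threshold.
    x = sum(1 for p, q in zip(keypoints, references)
            if math.dist(p, q) >= threshold)
    completion = i * c ** x + j * d ** t_seconds
    return completion * topic_score

# Perfect operation: every key point coincides with its reference, t = 0,
# so completion = i + j = 1.0 and the full topic score is awarded.
print(second_evaluation_score([(0, 0, 0)], [(0, 0, 0)], 1.0, 0, 20))  # → 20.0
```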
8. A physician assessment apparatus, said apparatus comprising: an acquisition unit and a processing unit;
the acquisition unit is used for acquiring an evaluation video of a pet physician;
the processing unit is used for determining a first video segment and a second video segment from the evaluation video, wherein the first video segment is a video segment of theoretical evaluation of the pet physician, and the second video segment is a video segment of clinical operation evaluation of the pet physician;
performing speech analysis on the first video segment to obtain a first evaluation score;
performing frame extraction on the second video segment to obtain multiple frames of images to be identified, wherein the multiple frames of images to be identified record the operation process of one clinical evaluation item of the pet physician;
performing action analysis on the multiple frames of images to be identified to obtain a second evaluation score;
and determining the evaluation result of the pet physician according to the first evaluation score and the second evaluation score.
9. An electronic device, comprising: a memory for storing a computer program, and a processor coupled to the memory, the processor being configured to execute the computer program stored in the memory to cause the electronic device to perform the method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method according to any one of claims 1-7.
CN202210222588.XA 2022-03-07 2022-03-07 Physician evaluation method and device, electronic equipment and storage medium Pending CN114819474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210222588.XA CN114819474A (en) 2022-03-07 2022-03-07 Physician evaluation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114819474A true CN114819474A (en) 2022-07-29

Family

ID=82528492

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210222588.XA Pending CN114819474A (en) 2022-03-07 2022-03-07 Physician evaluation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114819474A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109948438A (en) * 2019-02-12 2019-06-28 平安科技(深圳)有限公司 Automatic interview methods of marking, device, system, computer equipment and storage medium
CN111626137A (en) * 2020-04-29 2020-09-04 平安国际智慧城市科技股份有限公司 Video-based motion evaluation method and device, computer equipment and storage medium
CN112233516A (en) * 2020-10-12 2021-01-15 萱闱(北京)生物科技有限公司 Grading method and system for physician CPR examination training and examination
CN112418113A (en) * 2020-11-26 2021-02-26 中国人民解放军陆军军医大学第一附属医院 Medical skill examination system
CN112601048A (en) * 2020-12-04 2021-04-02 抖动科技(深圳)有限公司 Online examination monitoring method, electronic device and storage medium

Similar Documents

Publication Publication Date Title
Alzubaidi et al. DFU_QUTNet: diabetic foot ulcer classification using novel deep convolutional neural network
CN108491486B (en) Method, device, terminal equipment and storage medium for simulating patient inquiry dialogue
Grasseni Video and ethnographic knowledge
JP2002511159A (en) Computerized medical diagnosis system using list-based processing
Udelson et al. Return to play for athletes after COVID-19 infection: the fog begins to clear
CN113505662B (en) Body-building guiding method, device and storage medium
CN109935317A (en) Artificial intelligence health cloud platform
Ribeiro et al. Lifelog retrieval from daily digital data: narrative review
WO2024092955A1 (en) Medical training assessment evaluation method and apparatus, and electronic device and storage medium
CN117036126B (en) College student comprehensive quality management system and method based on data analysis
van Biemen et al. Into the eyes of the referee: A comparison of elite and sub-elite football referees’ on-field visual search behaviour when making foul judgements
CN112149602A (en) Action counting method and device, electronic equipment and storage medium
CN113517064A (en) Depression degree evaluation method, system, device and storage medium
CN109003492B (en) Topic selection device and terminal equipment
Blix et al. Digitalization and health care
CN111063455A (en) Human-computer interaction method and device for telemedicine
CN114819474A (en) Physician evaluation method and device, electronic equipment and storage medium
WO2018000248A1 (en) Multi-user management method and system for smart body building apparatus
CN115886833A (en) Electrocardiosignal classification method and device, computer readable medium and electronic equipment
CN113656638B (en) User information processing method, device and equipment for watching live broadcast
CN113782146B (en) Artificial intelligence-based general medicine recommendation method, device, equipment and medium
Viviers et al. The diagnostic utility of computer-assisted auscultation for the early detection of cardiac murmurs of structural origin in the periodic health evaluation
CN115455439A (en) Block chain-based training data processing method, device and equipment
CN103136277A (en) Multimedia file playing method and electronic device
CN112182282A (en) Music recommendation method and device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination