CN115798040A - Automatic segmentation system for cardio-pulmonary resuscitation AI - Google Patents

Automatic segmentation system for cardio-pulmonary resuscitation AI

Info

Publication number
CN115798040A
Authority
CN
China
Prior art keywords
video
segmentation
action
node
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211475077.5A
Other languages
Chinese (zh)
Other versions
CN115798040B (en)
Inventor
舒华俊 (Shu Huajun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Ruixing Information Technology Co., Ltd.
Original Assignee
Guangzhou Ruixing Information Technology Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Ruixing Information Technology Co., Ltd.
Priority to CN202211475077.5A
Publication of CN115798040A
Application granted
Publication of CN115798040B
Legal status: Active
Anticipated expiration

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A — TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/10Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a cardiopulmonary resuscitation AI automatic segmentation system, comprising: a motion recognition module for determining the ending gesture of each operation in the cardiopulmonary resuscitation and determining the ending frame of each operation based on the ending gesture, to obtain a first segmentation node; a semantic detection module for extracting the video semantics of the original video to obtain audio features, and checking the first segmentation node based on the audio features to obtain a detection result; and a video segmentation module for segmenting the original video according to the detection result and automatically archiving the video segmentation information. The invention automatically segments cardiopulmonary resuscitation teaching videos and exercise videos: segmenting a teaching video lets students repeatedly watch a particular cardiopulmonary resuscitation operation for targeted learning according to their own needs, while segmenting student exercise videos makes them convenient for teachers to review and grade, helping teachers find problems in students' cardiopulmonary resuscitation actions in time and offer suggestions for improvement.

Description

Automatic segmentation system for cardio-pulmonary resuscitation AI
Technical Field
The invention relates to the technical field of information, in particular to an automatic AI (artificial intelligence) segmentation system for cardio-pulmonary resuscitation.
Background
Clinically, sudden cardiac arrest is a critical condition: if the patient is not treated promptly, the patient's life is in immediate danger. Clinical studies confirm that if accurate cardiopulmonary resuscitation is actively performed within 4 minutes, more than 50% of patients can be rescued successfully; if effective cardiopulmonary resuscitation has still not been performed after 10 minutes, the probability of survival is very poor, falling below 5%. Effective cardiopulmonary resuscitation can therefore buy valuable rescue time for the patient. Cardiopulmonary resuscitation is thus an important and time-critical first-aid measure, and the accuracy of the operating postures must be strictly ensured during teaching. However, owing to limits on class time and class size, a teacher cannot truly give one-to-one guidance to every student, and the invention therefore provides a cardiopulmonary resuscitation AI automatic segmentation system to solve this problem.
Disclosure of Invention
The invention provides a cardiopulmonary resuscitation AI automatic segmentation system that automatically segments cardiopulmonary resuscitation teaching videos and practice videos. Segmenting a teaching video lets students repeatedly watch a particular cardiopulmonary resuscitation operation for targeted learning according to their own needs. Segmenting student practice videos makes them convenient for teachers to review and grade, helps teachers find problems in students' operating actions in time and offer suggestions for improvement, ensures that students' cardiopulmonary resuscitation actions are accurate and qualified, and helps improve the success rate of cardiopulmonary resuscitation. Segmentation of practice videos also lets students play back targeted parts after finishing an exercise, improving learning efficiency.
The invention provides a cardio-pulmonary resuscitation AI automatic segmentation system, comprising:
the motion recognition module is used for determining the ending gesture of each operation in the cardiopulmonary resuscitation original video, and determining the ending frame of each operation based on the ending gesture to obtain a first segmentation node;
the semantic detection module is used for extracting video semantics of an original video, obtaining audio features, and detecting the first segmentation nodes based on the audio features to obtain detection results;
and the video segmentation module is used for segmenting the original video and automatically archiving video segmentation information according to the detection result.
Preferably, the action recognition module includes:
the video processing unit is used for performing framing processing on an original video to obtain a plurality of video frames, and performing key highlighting processing on each video frame to obtain a video frame to be processed;
the action marking unit is used for marking key points of each gesture on the video frame to be processed to obtain a marked video frame and adding an operation name to the marked video frame based on a preset cardio-pulmonary resuscitation operation name;
and the action segmentation unit is used for establishing an action decomposition set based on the operation name and determining a first segmentation node corresponding to the original video image according to the division result of the action decomposition set.
Preferably, the video processing unit includes:
the characteristic acquisition subunit is used for acquiring a plurality of video frames, respectively extracting first posture characteristics of a demonstrator on the plurality of video frames, and simultaneously acquiring second posture characteristics of all postures corresponding to the cardio-pulmonary resuscitation operation based on a standard action view library;
the screening subunit is used for screening the plurality of video frames based on the first posture feature and the second posture feature to obtain valid video frames and invalid video frames, and hiding the images corresponding to the invalid video frames after determining the first time nodes corresponding to the invalid video frames;
and the preprocessing subunit is used for performing key highlighting processing on the effective video frame to obtain a video frame to be processed.
Preferably, the action marking unit includes:
the action authentication subunit is used for marking key points of the current posture of the demonstrator on the video frame to be processed based on the preset skeleton positioning, and connecting the key points to obtain a first action view;
comparing the first action view with a standard view in a standard action view library to obtain a similar gesture, and when the similar gesture is unique, judging that the current gesture is the similar gesture;
otherwise, acquiring a second action view corresponding to the adjacent video frame to be processed of the video frame to be processed, comparing the first action view with the second action view, determining the position change characteristic of the current posture, screening similar postures based on the position change characteristic to obtain a target posture, and judging that the current posture is the target posture.
Preferably, the action marking unit further includes:
the system comprises a standard action view library, an ending frame selection subunit and a third action view selection subunit, wherein the standard action view library is used for acquiring an ending gesture of each complete action in the cardiopulmonary resuscitation and a third action view corresponding to the ending gesture, and determining an ending frame of each segmental action of the cardiopulmonary resuscitation according to the third action view;
and the result verification subunit is used for acquiring the next marked video frame corresponding to each end frame, performing action continuous demonstration on each end frame and the next marked video frame, and determining the correlation between the first operation corresponding to the current end frame and the second operation corresponding to the next marked video frame according to the demonstration result.
Preferably, the result verification subunit further includes:
a verification judgment subunit, configured to judge that the first operation is not related to the second operation when the demonstration action is discontinuous;
otherwise, acquiring a target ending frame corresponding to the next marked video frame, judging that the first operation and the second operation are the same action continuous operation when the target ending frame is the same as the action view of the current ending frame, and adding a sequence label to an operation name corresponding to the first operation and the second operation;
and when the action views corresponding to the target ending frame and the current ending frame are different, judging that the first operation and the second operation are irrelevant.
Preferably, the semantic detection module includes:
the semantic recognition unit is used for performing semantic recognition on the audio of the original video based on the video semantic understanding model and recognizing semantic keywords according to the name of a preset cardio-pulmonary resuscitation action;
the semantic segmentation unit is used for determining a second segmentation node of the original video audio based on the semantic keywords;
the checking unit is used for comparing the second segmentation node with the first segmentation node and judging the first segmentation node as a correct segmentation node when the second segmentation node is consistent with the first segmentation node;
otherwise, the first segmentation node is a to-be-processed segmentation node.
Preferably, the checking unit includes:
the first processing subunit is used for acquiring video semantics corresponding to the segmentation nodes to be processed, judging whether the operation corresponding to the segmentation nodes to be processed is repeated for multiple times or not based on the video semantics, and if so, judging the segmentation nodes to be processed to be correct segmentation nodes;
otherwise, acquiring a first video segment corresponding to the segmentation node to be processed and a second video segment corresponding to a second segmentation node corresponding to the segmentation node to be processed;
and when the time of the first video segment is within the time interval corresponding to the second video segment, taking the second segmentation node as the correct segmentation node.
Preferably, the checking unit further includes:
the second processing subunit is used for acquiring the missing video segment when the time of the first video segment is not within the time interval corresponding to the second video segment, acquiring the starting time corresponding to the second video segment and the starting time corresponding to the first video segment to calculate a time error value when the video frames corresponding to the missing video segment are all invalid videos, correcting the time corresponding to the first segmentation node based on the time error value, and determining the time corresponding to the correct segmentation node;
when valid video exists among the video frames corresponding to the missing video segment, the first segmentation node is judged to be a correct segmentation node and the original video is judged to have an explanation lag; after a lag identifier is added, the explanation content of the video segment corresponding to the second segmentation node is obtained, its speech rate is adjusted, and it is matched with the first video segment.
Preferably, the video segmentation module includes:
the segmentation unit is used for determining a time node corresponding to the correct segmentation node according to the detection result, and performing segmentation processing on the original video based on the time node to obtain a final video segment;
the tag adding unit is used for adding operation names to the final video segments based on the cardio-pulmonary resuscitation operation flow, generating a segment directory according to the video time axis and obtaining segment results;
and the archiving unit is used for compressing and storing the original video and the segmentation result.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a schematic structural diagram of an AI automatic segmentation system for cardiopulmonary resuscitation according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an action recognition module of an AI automatic segmentation system according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a semantic detection module of an AI automatic segmentation system according to an embodiment of the invention;
fig. 4 is a schematic structural diagram of a video segmentation module of an automatic segmentation system for cardiopulmonary resuscitation AI according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
the invention provides an automatic segmentation system for cardio-pulmonary resuscitation (AI), as shown in fig. 1, comprising:
the motion recognition module is used for determining the ending gesture of each operation in the cardiopulmonary resuscitation original video, and determining the ending frame of each operation based on the ending gesture to obtain a first segmentation node;
the semantic detection module is used for extracting video semantics of an original video to obtain audio features, and detecting the first segmentation node based on the audio features to obtain a detection result;
and the video segmentation module is used for segmenting the original video and automatically archiving video segmentation information according to the detection result.
In this example, the correct procedure for cardiopulmonary resuscitation is as follows: 1. Determine whether the surrounding environment is safe. 2. Assess whether the patient is conscious. 3. Judge respiration: observe for 5 seconds whether the patient's chest rises and falls and whether the nostrils flare. 4. Judge whether there is carotid pulsation: slide the index and middle fingers of the right hand from the middle of the trachea to the depression at the anterior border of the sternocleidomastoid muscle to confirm the presence or absence of pulsation. 5. Perform cardiopulmonary resuscitation: the external chest compression site is at the midpoint of the line connecting the two nipples, at the junction of the middle and lower thirds of the sternum; place the heel of the palm tightly against the patient's chest, overlap the two hands with the fingers interlocked and lifted, keep the elbow joints straight, and press vertically 30 times using the weight of the upper body. 6. Open the airway: place the patient supine, check for dentures, turn the head to one side and clear oral and nasal secretions, return the head to the midline, perform the head-tilt/chin-lift maneuver to open the airway, and give artificial ventilation. Key points of the CPR operation: the ratio of compressions to artificial ventilation is 30:2, for 5 cycles, about 2 minutes. Signs that cardiopulmonary resuscitation is effective: the carotid pulse becomes palpable, systolic pressure rises above 60 mmHg, the dilated pupils constrict and the light reflex recovers, the lips and nail beds turn from cyanotic to red, and spontaneous respiration resumes; the patient is then settled and vital signs are closely monitored.
In this embodiment, the first segmentation node is an action segmentation point determined according to an original video image;
in this embodiment, the audio feature refers to a semantic feature of an audio expression corresponding to an original video.
In this embodiment, the original video refers to a cardiopulmonary resuscitation video that is not subjected to segmentation processing, and includes a teacher's teaching video and a student's practice video.
The beneficial effects of the above embodiment are as follows: the motion recognition module determines the ending gesture of each cardiopulmonary resuscitation operation performed by the demonstrator in the original video and, based on the ending gesture, determines the ending frame of each operation, yielding the demonstrator's operation division points; the semantic detection module then extracts the video semantics of the original video to obtain audio features and checks the first segmentation nodes against these audio features to obtain a detection result; finally, the video segmentation module segments the original video according to the detection result and automatically archives the video segmentation information. The invention automatically segments cardiopulmonary resuscitation teaching videos and exercise videos: segmenting a teaching video lets students repeatedly watch a particular operation for targeted learning according to their own needs, while segmenting student exercise videos makes them convenient for teachers to review and grade, helping teachers find problems in students' operating actions in time and offer suggestions for improvement, ensuring that the students' actions are accurate and qualified, and helping improve the success rate of cardiopulmonary resuscitation. Segmentation of exercise videos also lets students play back targeted parts after finishing an exercise, improving learning efficiency.
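For illustration, the cooperation of the three modules can be sketched in a few lines of Python on simplified inputs; the per-frame operation labels, the keyword timestamps, and the 1-second matching tolerance below are illustrative assumptions, not part of the patent:

```python
def first_nodes(frame_labels, fps=25.0):
    """Motion recognition module: place a first segmentation node at the
    ending frame of each run of one operation label."""
    return [(i - 1) / fps for i in range(1, len(frame_labels))
            if frame_labels[i] != frame_labels[i - 1]]

def second_nodes(keyword_times):
    """Semantic detection module: the time of each spoken operation name."""
    return sorted(keyword_times)

def check_nodes(first, second, tol=1.0):
    """A first node is 'correct' if some second node agrees within tol
    seconds; otherwise it stays 'pending' for further processing."""
    return [(t, "correct" if any(abs(t - s) <= tol for s in second)
                else "pending") for t in first]

if __name__ == "__main__":
    labels = ["assess"] * 50 + ["compress"] * 100 + ["ventilate"] * 40
    # Two announced operations at 2.1 s and 6.0 s confirm both cut points.
    print(check_nodes(first_nodes(labels), second_nodes([2.1, 6.0])))
```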
Example 2:
on the basis of embodiment 1, the action recognition module, as shown in fig. 2, includes:
the video processing unit is used for performing framing processing on an original video to obtain a plurality of video frames, and performing key highlighting processing on each video frame to obtain a video frame to be processed;
the action marking unit is used for marking key points of each gesture on the video frame to be processed to obtain a marked video frame and adding an operation name to the marked video frame based on a preset cardio-pulmonary resuscitation operation name;
and the action segmentation unit is used for establishing an action decomposition set based on the operation name and determining a first segmentation node corresponding to the original video image according to the division result of the action decomposition set.
In this embodiment, the key highlighting processing means blurring the background in the image corresponding to each video frame, so as to reduce environmental interference and highlight the demonstrator's operating posture.
In this embodiment, the video frame to be processed refers to a video frame that has undergone the key highlighting processing.
In this embodiment, a marked video frame refers to a video frame to be processed on which the demonstrator's operation gesture has been marked.
In this embodiment, the preset names of the cardiopulmonary resuscitation operations include determining whether the surrounding environment is safe, determining consciousness, determining respiration, determining whether carotid pulsation exists, calling for help on site, performing chest compression, opening an airway, and performing artificial ventilation.
In this embodiment, the operation name added to a marked video frame is the operation name carried by the target posture, or by the unique similar posture, determined for the current posture on that marked video frame.
In this embodiment, the action decomposition set refers to the set constructed from all the marked video frames corresponding to one operation. The same operation may occur multiple times in a complete cardiopulmonary resuscitation process — for example, chest compression and artificial ventilation alternate — and the marked video frames of each completed instance form one action decomposition set.
The beneficial effects of the above technical scheme are as follows: the invention performs framing processing on the original video to obtain a plurality of video frames and performs key highlighting processing on each video frame to obtain the video frames to be processed, which reduces environmental interference and highlights the demonstrator's operating posture, improving the accuracy with which the action marking unit selects posture key points; adding operation names to the marked video frames based on the preset cardiopulmonary resuscitation operation names and establishing the action decomposition set helps determine the first segmentation nodes quickly.
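For illustration, a minimal sketch of the framing and key highlighting steps using OpenCV; the fixed demonstrator bounding box stands in for a real person detector and is an assumption:

```python
import cv2

def frames_to_process(path, box=(200, 100, 800, 600)):
    """Split the original video into frames and blur everything outside
    `box`, reducing environmental interference around the demonstrator."""
    x0, y0, x1, y1 = box
    cap = cv2.VideoCapture(path)
    processed = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        blurred = cv2.GaussianBlur(frame, (51, 51), 0)
        blurred[y0:y1, x0:x1] = frame[y0:y1, x0:x1]  # keep demonstrator sharp
        processed.append(blurred)
    cap.release()
    return processed
```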
Example 3:
on the basis of embodiment 2, a video processing unit includes:
the characteristic acquisition subunit is used for acquiring a plurality of video frames, respectively extracting first posture characteristics of a demonstrator on the plurality of video frames, and simultaneously acquiring second posture characteristics of all postures corresponding to the cardio-pulmonary resuscitation operation based on a standard action view library;
the screening subunit is used for screening the plurality of video frames based on the first posture characteristic and the second posture characteristic to obtain effective video frames and invalid video frames, and hiding the images corresponding to the invalid video frames after determining the first time nodes corresponding to the invalid video frames;
and the preprocessing subunit is used for performing key highlighting processing on the valid video frames to obtain the video frames to be processed.
In this embodiment, the demonstrators include teachers teaching cardiopulmonary resuscitation and students learning or practicing cardiopulmonary resuscitation.
In this embodiment, the first pose characteristic is a characteristic of a presentation pose corresponding to each video frame after the original video is subjected to framing processing.
In this embodiment, the second posture characteristic is a standard posture characteristic of all postures in the cardiopulmonary resuscitation process.
In this embodiment, a valid video frame refers to a video frame corresponding to an operation action related to cardiopulmonary resuscitation in the original video; an invalid video frame refers to a video frame corresponding to content unrelated to the cardiopulmonary resuscitation operations, for example, the pause between the demonstrator's judging whether the surrounding environment is safe and judging consciousness.
In this embodiment, the first time node refers to a time point of an invalid video frame in an original video.
The beneficial effects of the above technical scheme are as follows: the invention determines valid and invalid video frames by comparing the features of the operating postures in the video frames of the original video with the standard posture features in the standard action view library, hides the images corresponding to the invalid video frames, and thereby eliminates invalid operations, simplifies the content of the original video, reduces interference in the video segmentation process, and helps speed up the determination of the first segmentation nodes of the video.
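For illustration, the screening subunit's valid/invalid decision can be sketched as a nearest-posture test; the plain vector features and the 0.8 threshold are assumptions:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def screen_frames(first_features, standard_features, thresh=0.8):
    """Split frame indices into valid / invalid: a frame is valid when its
    first posture feature is close to some standard CPR posture feature;
    invalid indices (their first time nodes) are then hidden."""
    valid, invalid = [], []
    for i, f in enumerate(first_features):
        best = max(cosine(f, s) for s in standard_features)
        (valid if best >= thresh else invalid).append(i)
    return valid, invalid
```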
Example 4:
on the basis of embodiment 2, the action marking unit includes:
the action authentication subunit is used for marking key points of the current posture of the demonstrator on the video frame to be processed based on the preset skeleton positioning, and connecting the key points to obtain a first action view;
comparing the first action view with a standard view in a standard action view library to obtain a similar gesture, and when the similar gesture is unique, judging that the current gesture is the similar gesture;
otherwise, acquiring a second action view corresponding to the adjacent video frame to be processed of the video frame to be processed, comparing the first action view with the second action view, determining the position change characteristic of the current posture, screening similar postures based on the position change characteristic to obtain a target posture, and judging that the current posture is the target posture.
In this embodiment, the preset skeleton positioning refers to the positioning positions used for the cardiopulmonary resuscitation actions demonstrated by the demonstrator and the distribution of positioning points at each position, where the positioning positions include located joint positions and non-joint positions.
In this embodiment, the current gesture refers to an operation gesture on a currently positioned video frame to be processed.
In this embodiment, the first action view refers to an action visualization image corresponding to the current gesture represented by the key points and the connecting lines between the key points.
In this embodiment, the standard action view library refers to a database in which the system stores standard action views of each operation posture of the cardiopulmonary resuscitation in advance.
In this embodiment, the second action view refers to the action visualization image corresponding to a video frame to be processed that is adjacent to the current video frame to be processed.
In this embodiment, the position change feature describes how the key points of the current posture have moved relative to the posture in the adjacent video frame to be processed.
In this embodiment, the target posture refers to the standard posture, selected from among the similar postures, that corresponds to the current posture.
The beneficial effects of the above technical scheme are as follows: the invention authenticates the current posture and determines the posture corresponding to it in the standard action view library, thereby providing a basis for adding the operation name to the marked video frame.
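For illustration, a sketch of disambiguating among several similar postures using the position change feature; the dictionary view representation and the "dynamic" tag on standard postures are assumptions:

```python
import numpy as np

def position_change(view_now, view_prev):
    """Mean key-point displacement between two action views, where a view
    maps key-point names to (x, y) coordinates."""
    common = view_now.keys() & view_prev.keys()
    if not common:
        return 0.0
    return float(np.mean([np.linalg.norm(np.subtract(view_now[k], view_prev[k]))
                          for k in common]))

def pick_target(similar_postures, change, moving_thresh=5.0):
    """Among similar standard postures, prefer one tagged as dynamic (e.g.
    chest compression) when the key points moved, else a static one."""
    moving = [p for p in similar_postures if p.get("dynamic")]
    static = [p for p in similar_postures if not p.get("dynamic")]
    pool = moving if change >= moving_thresh else static
    return (pool or similar_postures)[0]
```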
Example 5:
on the basis of embodiment 4, the action authentication subunit includes:
the comparison subunit is used for comparing the first action view with the standard views in the standard action view library to obtain similar gestures;
the screening subunit is used for calculating the operating angle of each positioned joint of the demonstrator from the coordinates determined in the positioning coordinate system established from the first action view:

θ_i = arccos[ ((x_{i,a} − x_i)(x_{i,b} − x_i) + (y_{i,a} − y_i)(y_{i,b} − y_i)) / ( √((x_{i,a} − x_i)² + (y_{i,a} − y_i)²) · √((x_{i,b} − x_i)² + (y_{i,b} − y_i)²) ) ] + ε

wherein θ_i represents the operating angle of the ith positioned joint of the demonstrator; (x_i, y_i) represents the coordinates of the ith positioned joint in the positioning coordinate system; (x_{i,a}, y_{i,a}) represents the coordinates of one key point connected to the positioning point of the ith positioned joint by a key-point connecting line in the first action view; (x_{i,b}, y_{i,b}) represents the coordinates of the other key point so connected; ε represents the coordinate positioning error, taking a value in (0, 0.05);
acquiring the operating angles of all positioned joints, and calculating the similarity between the first action view and each standard view in the standard action view library according to the following formula:

S_j = 1 − (1/m) Σ_{i=1}^{m} [ ω_i · |θ_i − μ_{j,i}| / (max(θ_i, μ_{j,i}) + γ) ],  j = 1, …, v

wherein S_j represents the similarity between the first action view and the jth standard view in the standard action view library; v represents the total number of standard views in the standard action view library; m represents the total number of located joints in the first action view, including the finger joints, elbow joint, shoulder joint, neck and head; μ_{j,i} represents the operating angle of the ith positioned joint in the jth standard view in the standard action view library; ω_i represents the weight of the ith positioned joint in the first action view, taking a value in (0, 1); γ represents a preset shooting error factor, taking a value in (0, 0.25);
and when the similarity between the first action view and the jth standard view in the standard action view library is greater than a preset value, judging that the gesture corresponding to the jth standard view is a similar gesture of the current gesture.
The beneficial effects of the above embodiment are as follows: the first action view is compared with the standard views in the standard action view library to obtain similar gestures, which provides a basis for action authentication and a basis for selecting the operation names added to the marked video frames.
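For illustration, a minimal Python transcription of the two formulas given above; the clamping of the cosine and the example parameter values are assumptions:

```python
import math

def joint_angle(j, a, b, eps=0.03):
    """Operating angle at joint j between rays j->a and j->b; eps is the
    coordinate positioning error in (0, 0.05)."""
    vax, vay = a[0] - j[0], a[1] - j[1]
    vbx, vby = b[0] - j[0], b[1] - j[1]
    cos_t = ((vax * vbx + vay * vby) /
             (math.hypot(vax, vay) * math.hypot(vbx, vby)))
    return math.acos(max(-1.0, min(1.0, cos_t))) + eps

def similarity(theta, mu_j, omega, gamma=0.1):
    """S_j: 1 minus the weighted normalized angle deviation from the jth
    standard view, damped by the shooting error factor gamma in (0, 0.25)."""
    m = len(theta)
    dev = sum(w * abs(t - u) / (max(t, u) + gamma)
              for t, u, w in zip(theta, mu_j, omega)) / m
    return 1.0 - dev
```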
Example 6:
in embodiment 4, the action marking unit further includes:
the system comprises a standard action view library, an ending frame selection subunit and a third action view selection subunit, wherein the standard action view library is used for acquiring an ending gesture of each complete action in the cardiopulmonary resuscitation and a third action view corresponding to the ending gesture, and determining an ending frame of each segmental action of the cardiopulmonary resuscitation according to the third action view;
and the result verification subunit is used for acquiring the next marked video frame corresponding to each end frame, performing action continuous demonstration on each end frame and the next marked video frame, and determining the correlation between the first operation corresponding to the current end frame and the second operation corresponding to the next marked video frame according to the demonstration result.
In this embodiment, the third action view refers to an action visualization image corresponding to an ending gesture of each complete action in the cardiopulmonary resuscitation.
In this embodiment, the first operation refers to an operation corresponding to the current end frame, for example, the current end frame belongs to chest compression; the second operation is an operation corresponding to a next marked video frame corresponding to the current ending frame.
The beneficial effects of the above technical scheme are as follows: based on the standard action view library, the invention acquires the ending gesture of each complete action in the cardiopulmonary resuscitation and the third action view corresponding to the ending gesture, determines the ending frame of each segmental action according to the third action view, acquires the next marked video frame corresponding to each ending frame, plays each ending frame and its next marked video frame continuously, and determines from the result the correlation between the first operation corresponding to the current ending frame and the second operation corresponding to the next marked video frame. This ensures that operation names are added to the marked video frames accurately and avoids the situation in which consecutive repetitions of the same action are merged into a single action decomposition set merely because they share an operation name, which would make the first segmentation nodes wrong and hide the number of consecutive repetitions of the action.
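For illustration, the continuity judgment of the result verification subunit reduces to a small decision function; the boolean continuity input is assumed to come from the concealment of invalid frames described above:

```python
def relation(continuous, end_view, next_end_view):
    """Correlation between the first operation (current ending frame) and
    the second operation (next marked video frame)."""
    if not continuous:               # separated by hidden invalid frames
        return "unrelated"
    if end_view == next_end_view:    # same ending posture repeats
        return "same-action continuous operation"
    return "unrelated"
```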
Example 7:
on the basis of embodiment 6, the result verification subunit further includes:
the verification judging subunit is used for judging that the first operation is irrelevant to the second operation when the demonstration action is discontinuous;
otherwise, acquiring a target end frame corresponding to the next marked video frame, judging that the first operation and the second operation are the same action continuous operation when the action views of the target end frame and the current end frame are the same, and adding a sequence label to the operation name corresponding to the first operation and the second operation;
and when the action views corresponding to the target ending frame and the current ending frame are different, judging that the first operation and the second operation are irrelevant.
In this embodiment, a discontinuous demonstration action means that the first operation and the second operation are judged not to be consecutive because hidden invalid video frames lie between them, indicating that the first operation and the second operation are two different types of operations.
In this embodiment, the target end frame refers to an end frame of a next marked video frame corresponding to the current marked video frame.
In this embodiment, the sequence tag is a tag indicating the order of two operations, added according to the video time axis when the first operation and the second operation are the same-action continuous operations; it prevents the two identical operations from being merged into one set when the action decomposition set is subsequently established.
The beneficial effects of the above technical scheme are as follows: the invention judges the correlation between adjacent operations, which avoids errors in selecting the first segmentation nodes caused by merging same-action continuous operations into one action decomposition set, and allows consecutive repetitions of the same action to be divided by the number of operations. This makes it easy for teachers and students to judge quickly whether the cardiopulmonary resuscitation is standard: for example, in single-rescuer cardiopulmonary resuscitation the ratio of chest compressions to artificial respiration is 30:2, i.e., 2 artificial respirations are given after every 30 chest compressions, and dividing same-action continuous operations makes it quick to determine the number of consecutive chest compressions and artificial respirations, as the sketch below illustrates. This helps improve the efficiency with which teachers evaluate students' cardiopulmonary resuscitation operations and helps students check their own operations.
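A sketch of sequence labelling and of reading off the 30:2 rhythm from the labelled runs; the operation-name strings are assumptions:

```python
from itertools import groupby

def add_sequence_tags(operations):
    """Turn ['compress', 'compress', ..., 'ventilate', ...] into tagged
    names like 'compress #1', 'compress #2', ... so that consecutive
    repetitions are counted instead of being merged into one set."""
    tagged = []
    for name, run in groupby(operations):
        tagged.extend(f"{name} #{k}" for k in range(1, len(list(run)) + 1))
    return tagged

def ratio_ok(operations, compress="chest compression",
             ventilate="artificial ventilation"):
    """Check the single-rescuer rhythm: 30 compressions then 2 breaths."""
    runs = [(name, len(list(run))) for name, run in groupby(operations)]
    return all(c == 30 and v == 2
               for (n1, c), (n2, v) in zip(runs, runs[1:])
               if n1 == compress and n2 == ventilate)
```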
Example 8:
on the basis of embodiment 1, the semantic detection module, as shown in fig. 3, includes:
the semantic recognition unit is used for performing semantic recognition on the audio of the original video based on the video semantic understanding model and recognizing semantic keywords according to the name of a preset cardio-pulmonary resuscitation action;
the semantic segmentation unit is used for determining a second segmentation node of the original video audio based on the semantic keywords;
the checking unit is used for comparing the second segmentation node with the first segmentation node and judging the first segmentation node as a correct segmentation node when the second segmentation node is consistent with the first segmentation node;
otherwise, the first segmentation node is a to-be-processed segmentation node.
In this embodiment, the semantic keyword refers to an audio keyword of an original video, such as a patient consciousness condition, a patient respiration condition, and the like.
In this embodiment, the second segmentation node is a start node of each segment of the cardiopulmonary resuscitation determined according to a sound emitted by a presenter in the original video.
In this embodiment, the demonstrator repeats the operation name before each operation step.
In this embodiment, the correct segmentation node is a correct segmentation point of the original cardiopulmonary resuscitation video; the segmentation node to be processed refers to a first segmentation node for which it is not yet certain whether it is correct.
The beneficial effects of the above technical scheme are as follows: the invention recognizes the semantics of the original video based on the video semantic understanding model and determines the second segmentation nodes of the original video audio, so that the first segmentation nodes can be verified, improving the segmentation accuracy of the original cardiopulmonary resuscitation video.
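For illustration, a sketch of deriving second segmentation nodes from a timestamped transcript; the transcript format and the keyword list are assumptions, and any ASR system could supply the input:

```python
CPR_KEYWORDS = ["environment", "consciousness", "respiration", "carotid",
                "chest compression", "airway", "artificial ventilation"]

def keyword_nodes(transcript, keywords=CPR_KEYWORDS):
    """transcript: list of (start_time_s, text) utterances; each utterance
    that names a preset CPR operation yields a second segmentation node."""
    return sorted(t for t, text in transcript
                  if any(k in text.lower() for k in keywords))
```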
Example 9:
on the basis of embodiment 8, the checking unit includes:
the first processing subunit is used for acquiring video semantics corresponding to the segmentation nodes to be processed, judging whether the operation corresponding to the segmentation nodes to be processed is repeated for multiple times or not based on the video semantics, and if so, judging that the segmentation nodes to be processed are correct segmentation nodes;
otherwise, acquiring a first video segment corresponding to the segmentation node to be processed and a second video segment corresponding to a second segmentation node corresponding to the segmentation node to be processed;
and when the time of the first video segment is within the time segment corresponding to the second video segment, taking the second segmentation node as a correct segmentation point.
In this embodiment, the video semantics refers to the semantics of the audio corresponding to the segmentation node to be processed. For example, chest compression is repeated continuously within a short time, so the same chest-compression semantics may correspond to multiple chest compression operations; that is, multiple first segmentation nodes may exist within the video segment corresponding to the segmentation node to be processed, and in this case each first segmentation node is a correct segmentation node.
In this embodiment, the first video segment refers to a video segment corresponding to a segmentation node to be processed.
In this embodiment, the second video segment refers to a video segment corresponding to the second segmentation node corresponding to the segmentation node to be processed.
The beneficial effects of the above technical scheme are as follows: the invention judges the attribute of the segmentation node to be processed (whether the corresponding operation is a step repeated multiple times) and determines whether it is a correct segmentation node, which ensures that each complete action within a multiply repeated operation step is split out and provides a basis for evaluating the cardiopulmonary resuscitation; meanwhile, verifying the segmentation nodes to be processed improves the accuracy of the video segmentation.
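For illustration, the first processing subunit's decision can be sketched as an interval test; representing segment boundaries as (start, end) tuples is an assumption:

```python
def resolve_pending(first_seg, second_seg, repeated):
    """first_seg / second_seg: (start, end) times of the video segments of a
    pending first node and of its corresponding second node."""
    if repeated:                      # multiply repeated operation: several
        return "first node correct"   # first nodes share one keyword
    (s1, e1), (s2, e2) = first_seg, second_seg
    if s2 <= s1 and e1 <= e2:         # first segment inside second segment
        return "use second node"
    return "missing segment: pass to second processing subunit"
```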
Example 10:
on the basis of embodiment 9, the checking unit further includes:
the second processing subunit is used for acquiring the missing video segment when the time of the first video segment is not within the time interval corresponding to the second video segment; when the video frames corresponding to the missing video segment are all invalid video, acquiring the starting time corresponding to the second video segment and the starting time corresponding to the first video segment to calculate a time error value, correcting the time corresponding to the first segmentation node based on the time error value, and determining the time corresponding to the correct segmentation node;
when valid video exists among the video frames corresponding to the missing video segment, the first segmentation node is judged to be a correct segmentation node and the original video is judged to have an explanation lag; after a lag identifier is added, the explanation content of the video segment corresponding to the second segmentation node is obtained, its speech rate is adjusted, and it is matched with the first video segment.
In this embodiment, a missing video segment refers to the portion of video by which the first video segment and the second video segment differ.
The beneficial effects of the above technical scheme are as follows: when the time of the first video segment is not within the time interval corresponding to the second video segment, the invention acquires the missing video segment; when the video frames corresponding to the missing video segment are all invalid video, it calculates the error value between the starting time of the second video segment and the starting time of the first video segment, corrects the time corresponding to the first segmentation node based on this error value, and determines the time corresponding to the correct segmentation node; when valid video exists among the video frames of the missing video segment, the first segmentation node is judged to be correct and the original video is judged to have an explanation lag, a lag identifier is added, and the explanation content of the video segment corresponding to the second segmentation node is obtained, speech-rate adjusted, and matched with the first video segment, ensuring that audio and action remain consistent for video segments of non-repeated operations.
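For illustration, a sketch of the second processing subunit's correction logic; the inputs are assumed to be provided by the checks above:

```python
def correct_node(node_t, first_start, second_start, missing_all_invalid):
    """Return (corrected node time, lag identifier or None)."""
    if missing_all_invalid:
        error = second_start - first_start   # the time error value
        return node_t + error, None          # shift the first node
    return node_t, "explanation-lag"         # keep the node; the explanation
                                             # is speech-rate-matched later
```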
Example 11:
on the basis of embodiment 1, the video segmentation module, as shown in fig. 4, includes:
the segmentation unit is used for determining a time node corresponding to the correct segmentation node according to the detection result, and performing segmentation processing on the original video based on the time node to obtain a final video segment;
the tag adding unit is used for adding operation names to the final video segments based on the cardio-pulmonary resuscitation operation flow, generating a segment catalog according to a video time axis and obtaining a segment result;
and the archiving unit is used for compressing and storing the original video and the segmentation result.
In this embodiment, the final video segment refers to a short video obtained by segmenting the original video at the correct segmentation nodes.
In this embodiment, the segment directory lists the time corresponding to each operation according to the demonstrator's operation flow and the video time axis, so that a teacher or student can quickly find the video segment they are looking for.
The beneficial effects of the above technical scheme are as follows: the invention maps the correct segmentation nodes determined from the detection result to time nodes and segments the original video at those time nodes to obtain the video segments, realizing automatic segmentation of teacher explanation videos and student practice videos; this facilitates students' study of the teaching video and improves teachers' evaluation of students' cardiopulmonary resuscitation. Based on the cardiopulmonary resuscitation operation flow, an operation name is added to each small video among the segments and a segment directory is generated along the video time axis, so that a teacher or student can quickly find the video segment they need, saving query time; and the original video and the segmentation result are compressed and stored as archived content, ensuring that the segmented videos can be watched repeatedly by teachers or students.
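For illustration, a sketch of cutting and cataloguing the final segments with the ffmpeg command-line tool; the file naming scheme and the JSON directory format are assumptions:

```python
import json
import subprocess

def segment_and_archive(video, nodes, names, out="catalog.json"):
    """nodes: sorted cut times in seconds; names: one operation name per
    segment (len(names) == len(nodes) + 1). Writes one clip per segment
    plus a segment directory along the video time axis."""
    bounds = [0.0] + list(nodes) + [None]
    catalog = []
    for k, name in enumerate(names):
        start, end = bounds[k], bounds[k + 1]
        clip = f"{k:02d}_{name.replace(' ', '_')}.mp4"
        cmd = ["ffmpeg", "-y", "-i", video, "-ss", str(start)]
        if end is not None:
            cmd += ["-to", str(end)]
        subprocess.run(cmd + ["-c", "copy", clip], check=True)
        catalog.append({"operation": name, "start_s": start, "file": clip})
    with open(out, "w") as f:
        json.dump(catalog, f, indent=2)
```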
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A cardiopulmonary resuscitation AI automatic segmentation system, comprising:
the motion recognition module is used for determining the ending gesture of each operation in the CPR original video, and determining the ending frame of each operation based on the ending gesture to obtain a first segmentation node;
the semantic detection module is used for extracting video semantics of an original video to obtain audio features, and detecting the first segmentation node based on the audio features to obtain a detection result;
and the video segmentation module is used for segmenting the original video and automatically archiving video segmentation information according to the detection result.
2. The cardiopulmonary resuscitation AI auto-segmentation system of claim 1, wherein the action recognition module comprises:
the video processing unit is used for performing framing processing on an original video to obtain a plurality of video frames, and performing key highlighting processing on each video frame to obtain a video frame to be processed;
the action marking unit is used for marking key points of each gesture on the video frame to be processed to obtain a marked video frame and adding operation names to the marked video frame based on a preset cardio-pulmonary resuscitation operation name;
and the action segmentation unit is used for establishing an action decomposition set based on the operation name and determining a first segmentation node corresponding to the original video image according to the division result of the action decomposition set.
3. The cardiopulmonary resuscitation AI auto-segmentation system of claim 2, wherein the video processing unit comprises:
the feature acquisition subunit is used for acquiring a plurality of video frames, extracting first posture features of a demonstrator on the plurality of video frames respectively, and acquiring second posture features of all postures corresponding to the cardiopulmonary resuscitation operation based on a standard action view library;
the screening subunit is used for screening the plurality of video frames based on the first posture characteristic and the second posture characteristic to obtain effective video frames and invalid video frames, and hiding the images corresponding to the invalid video frames after determining the first time nodes corresponding to the invalid video frames;
and the preprocessing subunit is used for performing key highlighting processing on the effective video frame to obtain a video frame to be processed.
4. The cardiopulmonary resuscitation (AI) automatic segmentation system according to claim 2, wherein the action-marking unit comprises:
the action authentication subunit is used for marking key points of the current posture of the demonstrator on the video frame to be processed based on the preset skeleton positioning, and connecting the key points to obtain a first action view;
comparing the first action view with a standard view in a standard action view library to obtain a similar gesture, and when the similar gesture is unique, judging that the current gesture is the similar gesture;
otherwise, acquiring a second action view corresponding to the adjacent video frame to be processed of the video frame to be processed, comparing the first action view with the second action view, determining the position change characteristic of the current posture, screening similar postures based on the position change characteristic to obtain a target posture, and judging that the current posture is the target posture.
5. The cardiopulmonary resuscitation AI auto-segmentation system of claim 4, wherein the action marking unit further comprises:
the end frame selecting subunit is used for acquiring an end gesture of each complete action in the cardiopulmonary resuscitation and a third action view corresponding to the end gesture based on the standard action view library, and determining an end frame of each segmental action of the cardiopulmonary resuscitation according to the third action view;
and the result verification subunit is used for acquiring the next marked video frame corresponding to each end frame, performing action continuous demonstration on each end frame and the next marked video frame, and determining the correlation between the first operation corresponding to the current end frame and the second operation corresponding to the next marked video frame according to the demonstration result.
6. The cardiopulmonary resuscitation (AI) automatic segmentation system according to claim 5, wherein the result verification subunit further comprises:
a verification judgment subunit, configured to judge that the first operation is not related to the second operation when the demonstration action is discontinuous;
otherwise, acquiring a target end frame corresponding to the next marked video frame, judging that the first operation and the second operation are the same action continuous operation when the action views of the target end frame and the current end frame are the same, and adding a sequence label to the operation name corresponding to the first operation and the second operation;
and when the action views corresponding to the target ending frame and the current ending frame are different, judging that the first operation and the second operation are irrelevant.
7. The automatic segmentation system for cardio-pulmonary resuscitation (AI) of claim 1, wherein the semantic detection module comprises:
the semantic recognition unit is used for performing semantic recognition on the audio of the original video based on the video semantic understanding model and recognizing semantic keywords according to the name of a preset cardio-pulmonary resuscitation action;
the semantic segmentation unit is used for determining a second segmentation node of the original video audio based on the semantic keywords;
the checking unit is used for comparing the second segmentation node with the first segmentation node and judging the first segmentation node as a correct segmentation node when the second segmentation node is consistent with the first segmentation node;
otherwise, the first segmentation node is a to-be-processed segmentation node.
8. The cardiopulmonary resuscitation (AI) automatic segmentation system according to claim 7, characterized in that the verification unit comprises:
the first processing subunit is used for acquiring video semantics corresponding to the segmentation nodes to be processed, judging whether the operation corresponding to the segmentation nodes to be processed is repeated for multiple times or not based on the video semantics, and if so, judging the segmentation nodes to be processed to be correct segmentation nodes;
otherwise, acquiring a first video segment corresponding to the segmentation node to be processed and a second video segment corresponding to a second segmentation node corresponding to the segmentation node to be processed;
and when the time of the first video segment is within the time interval corresponding to the second video segment, taking the second segmentation node as the correct segmentation node.
9. The cardiopulmonary resuscitation (AI) automatic segmentation system according to claim 8, wherein the verification unit further comprises:
the second processing subunit is used for acquiring the missing video segment when the time of the first video segment is not within the time interval corresponding to the second video segment, acquiring the starting time corresponding to the second video segment and the starting time corresponding to the first video segment to calculate a time error value when the video frames corresponding to the missing video segment are all invalid videos, correcting the time corresponding to the first segmentation node based on the time error value, and determining the time corresponding to the correct segmentation node;
when valid video exists among the video frames corresponding to the missing video segment, the first segmentation node is judged to be a correct segmentation node and the original video is judged to have an explanation lag; after a lag identifier is added, the explanation content of the video segment corresponding to the second segmentation node is obtained, its speech rate is adjusted, and it is matched with the first video segment.
10. The cardiopulmonary resuscitation (AI) automatic segmentation system according to claim 1, wherein the video segmentation module comprises:
the segmentation unit is used for determining a time node corresponding to the correct segmentation node according to the detection result, and performing segmentation processing on the original video based on the time node to obtain a final video segment;
the tag adding unit is used for adding operation names to the final video segments based on the cardio-pulmonary resuscitation operation flow, generating a segment directory according to the video time axis and obtaining segment results;
and the archiving unit is used for compressing and storing the original video and the segmentation result.
CN202211475077.5A 2022-11-23 2022-11-23 Automatic segmentation system of cardiopulmonary resuscitation AI Active CN115798040B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211475077.5A CN115798040B (en) 2022-11-23 2022-11-23 Automatic segmentation system of cardiopulmonary resuscitation AI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211475077.5A CN115798040B (en) 2022-11-23 2022-11-23 Automatic segmentation system of cardiopulmonary resuscitation AI

Publications (2)

Publication Number Publication Date
CN115798040A (en) 2023-03-14
CN115798040B (en) 2023-06-23

Family

ID=85440541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211475077.5A Active CN115798040B (en) 2022-11-23 2022-11-23 Automatic segmentation system of cardiopulmonary resuscitation AI

Country Status (1)

Country Link
CN (1) CN115798040B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118352033A (en) * 2024-06-18 2024-07-16 南京华夏纪智能科技有限公司 Intelligent segmentation, registration and assessment method and system for cardiopulmonary resuscitation actions

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150365725A1 (en) * 2014-06-11 2015-12-17 Rawllin International Inc. Extract partition segments of personalized video channel
CN114495254A (en) * 2020-11-13 2022-05-13 华为云计算技术有限公司 Action comparison method, system, equipment and medium
CN113515998A (en) * 2020-12-28 2021-10-19 腾讯科技(深圳)有限公司 Video data processing method and device and readable storage medium
CN113591529A (en) * 2021-02-23 2021-11-02 腾讯科技(深圳)有限公司 Action segmentation model processing method and device, computer equipment and storage medium
CN115278298A (en) * 2022-07-20 2022-11-01 北京卡拉卡尔科技股份有限公司 Automatic video segmentation method

Also Published As

Publication number Publication date
CN115798040B (en) 2023-06-23

Similar Documents

Publication Publication Date Title
CN112233516A (en) Grading method and system for physician CPR examination training and examination
US20050255434A1 (en) Interactive virtual characters for training including medical diagnosis training
CN109101879B (en) Posture interaction system for VR virtual classroom teaching and implementation method
CN111539245B (en) CPR (CPR) technology training evaluation method based on virtual environment
CN112749684A (en) Cardiopulmonary resuscitation training and evaluating method, device, equipment and storage medium
CN112233515A (en) Unmanned examination and intelligent scoring method applied to physician CPR examination
CN112581817B (en) Traditional Chinese medicine teacher medical education auxiliary system and use method
CN115798040B (en) Automatic segmentation system of cardiopulmonary resuscitation AI
CN115393957A (en) First-aid training and checking system and method
CN113658584A (en) Intelligent pronunciation correction method and system
CN111444879A (en) Joint strain autonomous rehabilitation action recognition method and system
CN115083628B (en) Medical education cooperative system based on traditional Chinese medicine inspection objectivity
CN202257989U (en) Analog simulation system for cardiopulmonary resuscitation skill training
CN111540380B (en) Clinical training system and method
KR20220013347A (en) System for managing and evaluating physical education based on artificial intelligence based user motion recognition
CN113409624A (en) Cardio-pulmonary resuscitation training system based on AR augmented reality technology
Ray et al. Design and implementation of technology enabled affective learning using fusion of bio-physical and facial expression
Boonbrahm et al. Interactive marker-based augmented reality for CPR training
CN115188074A (en) Interactive physical training evaluation method, device and system and computer equipment
CN115457821A (en) Cardio-pulmonary resuscitation examination training equipment and using method
CN115227234A (en) Cardiopulmonary resuscitation pressing action evaluation method and system based on camera
CN114970701A (en) Multi-mode fusion-based classroom interaction analysis method and system
CN111785254B (en) Self-service BLS training and checking system based on anthropomorphic dummy
CN111768758B (en) Self-service basic life support assessment system based on video interpretation technology
US20230237677A1 (en) Cpr posture evaluation model and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant