WO2023123214A1 - Electronic device, hand compression depth measurement method, system, and wearable device - Google Patents


Info

Publication number
WO2023123214A1
WO2023123214A1 (PCT/CN2021/143106; CN2021143106W)
Authority
WO
WIPO (PCT)
Prior art keywords
wearable device
image
square area
color
video
Prior art date
Application number
PCT/CN2021/143106
Other languages
English (en)
Chinese (zh)
Inventor
焦旭
Original Assignee
焦旭
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 焦旭
Priority to PCT/CN2021/143106 (published as WO2023123214A1)
Priority to CN202180005744.0A (published as CN114556446A)
Publication of WO2023123214A1

Classifications

    • G Physics
    • G06 Computing; Calculating or Counting
    • G06F Electric Digital Data Processing
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N Computing Arrangements Based on Specific Computational Models
    • G06N 20/00 Machine learning
    • G06T Image Data Processing or Generation, in General
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Definitions

  • The present application belongs to the technical field of image processing and in particular relates to an electronic device, a hand pressing depth detection method, a system, and a wearable device.
  • The accuracy of current hand pressing depth detection methods needs to be further improved.
  • The present application therefore proposes an electronic device, a hand pressing depth detection method, a system, and a wearable device.
  • An electronic device is provided, which is used to detect the depth of the user's pressing action in a scene where the user provides cardiopulmonary resuscitation for the patient; the electronic device includes at least a processor.
  • The processor is configured to perform the following steps:
  • Acquire a video provided by the image acquisition device; the video includes multiple images of a video frame sequence generated in time order. Each of the images includes a wearable device worn near the pressing part when the user presses the patient, and a positioning mark image provided on the wearable device serves as a tracking frame.
  • Identifying a tracking frame in the wearable device in the image according to at least one frame image in the video frame sequence specifically includes:
  • The tracking frame is recognized according to the preset shape and color of the background image of the wearable device and according to the preset color and shape of the partial image within the background image.
  • Identifying a tracking frame in the wearable device in the image according to at least one frame image in the video frame sequence specifically includes:
  • The extracted color features are consistent with preset color features.
  • The preset color features include the preset color features of the background image and the preset color features of the partial image.
  • The classifier is pre-trained; the preset shape and color of the background image and the shape and color of the partial image are used as input when the classifier is trained.
  • The preset background image is a red rectangular area or a green rectangular area; the preset partial image includes multiple white rectangular areas and a black rectangular area within the red or green rectangular area.
  • One of the multiple white rectangular areas is located at the center of the red or green rectangular area, and the white rectangular areas are at least partially located on at least two opposite sides of the red or green rectangular area; the position of each white square area intersects a median line of the rectangle.
  • The image acquisition device is a camera on the electronic device, and the user wearing the wearable device is photographed by the camera.
  • The wearable device is a bracelet that includes a wristband; the wristband is provided with a hard square area whose background color is red or green. A white square area is provided at the center of the hard square area, and/or white square areas are respectively provided on at least two opposite sides of the hard square area, the position of each white square area intersecting a median line of the rectangle.
  • Tracking the rise and fall of the position of the tracking frame in the pressing direction across the multiple images in the video frame sequence specifically includes:
  • A KCF algorithm is used to track the tracking frame; the KCF algorithm fuses at least the histogram of oriented gradients feature, the color domain, and the score of the classifier.
  • The depth of the user's pressing action is determined according to the fluctuation of the position, and the depth of the pressing action is output to the display of the electronic device and/or the display of the wearable device, or voice prompt information is output on the display or on the wearable device.
  • A wearable device is provided.
  • The wearable device is used in the above-mentioned electronic device scenario, in which the user provides cardiopulmonary resuscitation for the patient.
  • The wearable device is worn near the user's compression site, so that the camera captures the user providing cardiopulmonary resuscitation for the patient.
  • The captured video image is provided to the electronic device, and the electronic device identifies the wearable device in the video image and tracks the tracking frame on the wearable device.
  • The wearable device is provided with a hard square area; the background color of the hard square area is red or green, and the hard square area is provided with a white square area.
  • The hard square area is provided with a white square area, which specifically includes:
  • The center of the hard square area is provided with a white square area;
  • A white square area is provided on at least two of the four sides of the hard square area; and/or
  • Black strip-shaped areas are arranged on the two median lines of the hard square area, and the black strip-shaped areas extend from the center of the hard square area to one or more of the four sides.
  • White lines are arranged around the red or green background image of the hard square area; the white lines form the outline of the red or green background image of the hard square area.
  • The tracking frame includes one of the outline of the white square area, the outline of the black strip area, and the outline of the red or green background image, or any combination of two or more of them.
  • The wearable device is a bracelet worn on the wrist; the bracelet includes a wristband, and the hard square area is provided on the wristband.
  • When there are multiple hard square areas, their background colors are different and do not constitute central symmetry or axial symmetry.
  • An embodiment of the present application provides a hand pressing depth detection method, including:
  • Acquire a video provided by the image acquisition device; the video includes multiple images of a video frame sequence generated in time order. Each of the images includes a wearable device worn near the pressing part when the user presses the patient, and a positioning mark image provided on the wearable device serves as a tracking frame.
  • Identifying a tracking frame in the wearable device in the image according to at least one frame image in the video frame sequence specifically includes:
  • The tracking frame is recognized according to the preset shape and color of the background image of the wearable device and according to the preset color and shape of the partial image within the background image.
  • Identifying a tracking frame in the wearable device in the image according to at least one frame image in the video frame sequence specifically includes:
  • The extracted color features are consistent with preset color features.
  • The preset color features include the preset color features of the background image and the preset color features of the partial image.
  • The classifier is pre-trained; the preset shape and color of the background image and the shape and color of the partial image are used as input when the classifier is trained.
  • The preset background image is a red rectangular area or a green rectangular area; the preset partial image includes multiple white rectangular areas and a black rectangular area within the red or green rectangular area.
  • One of the multiple white rectangular areas is located at the center of the red or green rectangular area, and the white rectangular areas are at least partially located on at least two opposite sides of the red or green rectangular area; the position of each white square area intersects a median line of the rectangle.
  • The image acquisition device is a camera on the electronic device, and the user wearing the wearable device is photographed by the camera.
  • The wearable device is a bracelet that includes a wristband; the wristband is provided with a hard square area whose background color is red or green. A white square area is provided at the center of the hard square area, and/or white square areas are respectively provided on at least two opposite sides of the hard square area, the position of each white square area intersecting a median line of the rectangle.
  • Tracking the rise and fall of the position of the tracking frame in the pressing direction across the multiple images in the video frame sequence specifically includes:
  • A KCF algorithm is used to track the tracking frame; the KCF algorithm fuses at least the histogram of oriented gradients feature, the color domain, and the score of the classifier.
  • An embodiment of the present application provides a hand pressing depth detection system, including an electronic device and a wearable device;
  • The electronic device includes:
  • one or more processors;
  • a storage device on which one or more programs are stored; when the one or more programs are executed by the one or more processors, the one or more processors implement the described method.
  • An embodiment of the present application provides a non-transitory computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the method described in any one of the above-mentioned embodiments is implemented.
  • The electronic device provided by the present application is used to detect the depth of the user's pressing action in the scene where the user provides cardiopulmonary resuscitation for the patient.
  • The electronic device includes at least a processor. The processor is configured to acquire the video provided by the image acquisition device; the video contains multiple images of a video frame sequence generated in time order. Each of the images contains a wearable device worn near the pressing part when the user presses the patient, and a positioning mark image provided on the wearable device serves as a tracking frame. The processor identifies the tracking frame in the wearable device in the image according to at least one frame image in the video frame sequence and tracks the tracking frame across multiple images in the video frame sequence.
  • Fig. 1 is a schematic diagram of the system architecture of the hand pressing depth detection method and detection device operation in some examples of the present application;
  • FIG. 2 is a schematic diagram of a video shot in a hand pressing depth detection method in some embodiments of the present application
  • Fig. 3 is a schematic diagram of the structure of the bracelet in some embodiments of the present application.
  • Fig. 4 is a schematic structural diagram of a wristband in some embodiments of the present application.
  • Fig. 5 is a schematic structural diagram of a wristband in some embodiments of the present application.
  • Fig. 6 is a schematic flow chart of a hand pressing depth detection method in some embodiments of the present application.
  • Fig. 7 is a schematic diagram of the detection result of the tracking coordinate position in the hand pressing depth detection method in some embodiments of the present application.
  • Fig. 8 is a schematic diagram of the detection result of the tracking coordinate position in the hand pressing depth detection method in some embodiments of the present application.
  • FIG. 9 is a schematic diagram of a tracking frame detected by a hand pressing depth detection method in some embodiments of the present application.
  • Fig. 10 is a schematic flow diagram of the implementation of identifying the tracking frame in the wearable device in the image according to at least one frame image in the video frame sequence in some embodiments of the present application;
  • Fig. 11 is a schematic structural diagram of a computer system suitable for realizing the control device of the embodiment of the present application.
  • Fig. 1 shows an exemplary system architecture 100 that can be applied to embodiments of the present application, such as a hand pressing depth detection system, a hand pressing depth detection method, a hand pressing depth detection device, an electronic device, and a wearable device.
  • a system architecture 100 may include a terminal device 101 , a terminal device 102 , a terminal device 103 , a network 104 and a server 105 .
  • the network 104 is used as a medium for providing communication links between the terminal device 101 , the terminal device 102 , the terminal device 103 and the server 105 .
  • Network 104 may include various connection types, such as wires, wireless communication links, or fiber optic cables, among others.
  • a user may use one or more of the terminal device 101 , the terminal device 102 , and the terminal device 103 to interact with the server 105 through the network 104 to receive or send data (such as video) and the like.
  • Various communication client applications can be installed on the terminal device 101, the terminal device 102, and the terminal device 103, such as video playback software, video processing applications, web browser applications, shopping applications, search applications, instant messaging tools, email clients, social platform software, etc.
  • the terminal device 101, the terminal device 102, and the terminal device 103 can be hardware, such as various electronic devices that have a display screen and support data transmission, including but not limited to smart phones, tablet computers, laptop computers, desktop computers, Smart wearable devices and more.
  • the smart wearable device may be smart glasses, smart bracelets, smart helmets, and the like.
  • If the terminal device 101, the terminal device 102, and the terminal device 103 are software, they can be installed in the electronic devices listed above. They can be implemented as multiple pieces of software or software modules (such as software or software modules for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
  • the server 105 may be a server that provides various services, for example, a background server that provides support for videos displayed on the terminal device 101 , the terminal device 102 , and the terminal device 103 .
  • the background server can analyze and process the received data such as slicing requests, and feed back the processing results (such as indexed slices or slicing sequences) to electronic devices (such as terminal devices) connected in communication with it.
  • The hand pressing depth detection method provided by the embodiments of the present application can be executed by a processor; correspondingly, the computer program of the hand pressing depth detection method can be stored in a non-volatile computer-readable storage medium, and the instructions of the computer-readable storage medium can be obtained and executed by a processor.
  • the processor and memory can be placed in a terminal device, such as a mobile phone or a computer or a wearable device, or the processor and memory can be placed in a server.
  • There may be one or more processors and memories; if there are more than one, some of the processors or memories may serve the server and some may serve the terminal device.
  • the server may be hardware or software.
  • the server can be implemented as a distributed server cluster composed of multiple servers, or as a single server.
  • If the server is software, it can be implemented as multiple pieces of software or software modules (such as software or software modules for providing distributed services) or as a single piece of software or software module. No specific limitation is made here.
  • the numbers of terminal devices, networks and servers in Fig. 1 are only illustrative. According to the implementation needs, there can be any number of terminal devices, networks and servers.
  • the system architecture may only include the electronic devices on which the hand pressing depth detection method runs (such as terminal devices 101, 102, 103 or server 105).
  • the hand compression depth detection method of the present application is used to detect the depth of the user's compression action in the scene where the user provides cardiopulmonary resuscitation for the patient.
  • Compression depth detection is realized by distance measurement, and the accuracy of the distance measurement affects the accuracy of the compression depth.
  • A currently attempted solution realizes compression depth detection through image-based distance measurement: for example, a calibration object (for example, one with obvious color characteristics and a known length and width) is added to the compression scene, and this calibration object is used to calibrate the camera and correct the image, so as to obtain the distance scale of the image and measure the distance to the target.
  • The distance measurement scheme of the embodiments of the present application combines the traditional scale-measurement idea: through digital image processing, the wearable device (such as a wristband) is extracted during pressing, and an algorithm assists the calculation, so that distance measurement with guaranteed speed and precision is achieved through the vertical distance that the wearable device moves.
  • The system includes an electronic device (such as mobile phone 01) and a wearable device (such as bracelet 02).
  • The mobile phone 01 has a shooting function and can shoot video in real time to obtain video data; the video content includes the user providing cardiopulmonary resuscitation compressions for the patient.
  • The mobile phone is also used to process the video data and to identify the user's pressing action and pressing depth in the video data.
  • The pressing depth can be determined by identifying the rise and fall of the hand that the user presses with.
  • Because such detection may be inaccurate, a calibration object can also be worn on the pressing hand, and the depth of the hand compressions is determined by tracking the rise and fall of the calibration object as the hand presses.
  • The bracelet 02 worn on the user's wrist is a calibration object worn near the pressing action during detection of the pressing depth, which makes it convenient for the mobile phone to quickly identify the calibration object in the picture and track it.
  • The fluctuation of its position determines the compression depth, so the accuracy of compression depth detection is improved.
  • The calibration object is the reference object when detecting the fluctuating position of the pressing action; the calibration object can be the whole wristband, or a combination of one or more features of the pattern/color at a local position on the wristband.
  • the embodiment of the present application provides a wearable device as a calibration object.
  • A wearable device is provided, which is used in the scenario where the user provides cardiopulmonary resuscitation for the patient and is worn near the user's compression site, so that the camera captures the wearable device while shooting the video image of the user providing cardiopulmonary resuscitation, and the video is provided to the electronic device.
  • the electronic device identifies the wearable device in the video image and tracks the tracking frame on the wearable device, so that the electronic device determines the compression depth according to the up and down position changes of the tracking frame.
  • To prevent a soft wearable device from deforming easily, which would increase the difficulty of detection or reduce the accuracy of the detected pressing depth, the wearable device is provided with a hard square area; the background color of the hard square area can be white, red, yellow, green, etc.
  • A wearable device is provided as a bracelet, and the bracelet is configured to be worn on the user's wrist.
  • The bracelet includes a wristband 20, and the wristband 20 is provided with a hard square area 21; the background color of the hard square area 21 is a red or green area as shown in FIG. 4 or FIG. 5.
  • The hard square area is provided with a white square area, which specifically includes: a white square area at the center of the hard square area; and/or white square areas on at least two opposite sides of the four sides of the hard square area; and/or black strip-shaped areas on the two median lines of the hard square area, the black strip-shaped areas extending from the center of the hard square area to one or more of the four sides.
  • The center of the hard square area 21 is provided with a white square area 22, and/or at least two opposite sides of the hard square area 21 are respectively provided with white square areas 23, where each white square area 23 intersects a median line of the rectangle.
  • Two median lines of the hard square area 21 are provided with black strip-shaped areas 24, and the black strip-shaped areas 24 extend from the center of the hard square area to four sides.
  • White lines are arranged around the red or green background image of the hard square area; the white lines form the outline of the red or green background image of the hard square area.
  • White lines 25 are arranged around the red or green background image of the hard square area 21.
  • The white lines 25 form the outline of the red or green background image of the hard square area 21.
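As a concrete illustration of the marker layout described above, the following sketch renders one variant of it as an RGB array. The overall size, line widths, and square proportions are illustrative assumptions, not values from the application:

```python
import numpy as np

def make_marker(size=90):
    """Render one variant of the described marker as an RGB array: red
    background, white outline, black strips on the median lines, a white
    centre square, and white squares where the median lines meet the sides.
    All proportions here are illustrative assumptions."""
    R, W, B = (255, 0, 0), (255, 255, 255), (0, 0, 0)
    img = np.zeros((size, size, 3), dtype=np.uint8)
    img[:, :] = R                                  # red background image
    img[:2, :] = W                                 # white outline of the
    img[-2:, :] = W                                # background image
    img[:, :2] = W
    img[:, -2:] = W
    c, s = size // 2, size // 10
    img[c - 1:c + 1, :] = B                        # black strips on the
    img[:, c - 1:c + 1] = B                        # two median lines
    img[c - s:c + s, c - s:c + s] = W              # white centre square
    for sl in (np.s_[c - s:c + s, :s], np.s_[c - s:c + s, -s:],
               np.s_[:s, c - s:c + s], np.s_[-s:, c - s:c + s]):
        img[sl] = W                                # white side squares
    return img

m = make_marker()
print([int(v) for v in m[45, 45]])   # [255, 255, 255]: centre is white
```

Each of the white, black, and red regions produced this way has a closed contour, so any of them (or a combination) can serve as the tracking frame.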
  • The above-mentioned electronic device identifies the wearable device in the video image and tracks the tracking frame on the wearable device, so that the electronic device determines the compression depth according to the up-and-down position changes of the tracking frame; the tracking frame includes one of the outline of the white square area, the outline of the black strip area, and the outline of the red or green background image, or any combination of two or more of them.
  • When there are multiple hard square areas 21, their background colors are different and do not constitute central symmetry or axial symmetry.
  • For example, the top two are green, the bottom left is red, and the bottom right is blue.
  • This asymmetric color combination avoids the inaccurate recognition that symmetry causes in some positioning algorithms (such as feature-point positioning with the Harris algorithm), thereby further improving the recognition effect.
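The asymmetry requirement can be stated as a small check. The 2x2 grid representation and the helper name `is_asymmetric` are hypothetical, not from the application:

```python
def is_asymmetric(grid):
    """Check that a 2x2 grid of color labels has neither central symmetry
    (180-degree rotation) nor axial symmetry (horizontal/vertical mirror)."""
    (a, b), (c, d) = grid
    central = (a == d and b == c)   # 180-degree rotation maps a<->d, b<->c
    horiz   = (a == c and b == d)   # mirror about the horizontal axis
    vert    = (a == b and c == d)   # mirror about the vertical axis
    return not (central or horiz or vert)

# Layout from the text: top two green, bottom-left red, bottom-right blue.
print(is_asymmetric([["green", "green"], ["red", "blue"]]))   # True
print(is_asymmetric([["red", "green"], ["green", "red"]]))    # False
```

The second layout is rejected because it is centrally symmetric, exactly the case that can confuse symmetric feature-point detectors.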
  • the hand pressing depth detection method provided in the embodiment of the present application includes the steps shown in Figure 6:
  • S1: The processor acquires the video provided by the image acquisition device; the video includes multiple images of a video frame sequence generated in time order.
  • the processor may be a processor on an electronic device, for example, a processor on a mobile terminal, or a processor on a server, or a processor on a wearable device.
  • the image acquisition device may be a camera on the electronic device or a camera independent of the electronic device.
  • the video is a video taken by a camera received by the processor in real time, or an offline video.
  • each of the images includes a wearable device worn near the pressing part when the user presses the patient, and an image of a positioning mark of the wearable device is set on the wearable device as a tracking frame.
  • the processor is a processor on a mobile phone
  • the image acquisition device is a front camera or a rear camera on the mobile phone.
  • the user provides cardiopulmonary resuscitation for the patient in real time
  • the camera on the mobile phone captures the picture of pressing the patient in real time
  • the user wears the above-mentioned wearable device, such as a bracelet, when pressing the patient.
  • the picture is provided to the processor of the mobile phone for processing, and the processor obtains multiple images according to the acquired video.
  • S2: The processor recognizes a tracking frame in the wearable device in the image according to at least one frame image in the video frame sequence.
  • the positioning marker image that can be used as the tracking frame in step S1 is pre-set in the detection method.
  • Some implementations of step S2 are as follows: the tracking frame can be recognized at least according to the preset shape and color of the background image of the wearable device and according to the preset color and shape of the partial image in the background image.
  • For example, the processor determines the tracking frame according to the video received during that period.
  • Image 1, image 2, image 3, ..., image 30 are the actual images in time order; they can also be sampled values, for example, one image is sampled every 5 actual frames to serve as image 1, image 2, image 3, ..., image 30.
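The frame sampling described here (keeping one image out of every 5 actual frames, in time order) can be sketched as:

```python
def sample_frames(frames, interval=5):
    """Keep one frame out of every `interval` frames, preserving time order."""
    return frames[::interval]

frames = list(range(150))        # stand-in for 150 decoded video frames
images = sample_frames(frames)   # image 1, image 2, ..., one per 5 frames
print(len(images))               # 30
```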
  • The processor tracks the rise and fall of the position of the tracking frame in the pressing direction across the multiple images in the video frame sequence.
  • The tracking frame is tracked using a KCF algorithm; the KCF algorithm fuses at least the histogram of oriented gradients feature, the color domain, and the score of the classifier.
  • The tracking frame is then tracked in subsequent video images, and the coordinates of the highest point and the lowest point of the tracking frame in each image are determined to obtain a series of coordinate positions, as shown in FIG. 7.
  • the abscissa is a plurality of images of the sequence of video frames generated in time order, and the ordinate is the coordinates of the highest point and the lowest point of the motion track of the tracking frame.
  • For example, the peak pressing values of the wristband are selected in the 1-100 and 500-600 frame intervals for measurement; the results, shown in FIG. 8, are the peak and valley information of the motion height.
  • In FIG. 8, X is the abscissa of the data and Y is the ordinate of the values.
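A minimal sketch of how peak and valley information like that in FIG. 8 could be extracted from the tracked vertical coordinates; simple local-extrema detection is an assumption here, not the application's stated method:

```python
def peaks_and_valleys(ys):
    """Return indices of local maxima (peaks) and local minima (valleys)
    in a sequence of vertical coordinates, one value per frame."""
    peaks, valleys = [], []
    for i in range(1, len(ys) - 1):
        if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]:
            peaks.append(i)       # strictly above both neighbours
        elif ys[i] < ys[i - 1] and ys[i] < ys[i + 1]:
            valleys.append(i)     # strictly below both neighbours
    return peaks, valleys

# Synthetic track: the marker moves down and back up twice.
ys = [0, 3, 6, 3, 0, 3, 6, 3, 0]
peaks, valleys = peaks_and_valleys(ys)
print(peaks, valleys)   # [2, 6] [4]
```

The peak-to-valley differences in this series are what get converted to a physical compression depth.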
  • the processor determines the depth of the user's pressing action according to the fluctuation of the position and outputs the depth of the pressing action to the electronic device and/or the wearable device;
  • The depth of the user's pressing action is determined according to the fluctuation of the position, and the depth of the pressing action is output to the display of the electronic device and/or the display of the wearable device, or voice prompt information is output on the display or on the wearable device.
  • The pressing depth is output to at least one of the following, or a combination thereof: the display screen of the mobile phone; voice broadcast; the user's wearable device or other devices, such as smart glasses or a smart watch worn by the user; and other terminal devices for monitoring or reference by others, such as telemedicine doctors or the patient's family members.
  • the tracking frame in the wearable device in the image is identified according to at least one frame image in the video frame sequence.
  • The tracking frame of the wristband is obtained; the tracking frame includes the hard square area and part of the wristband.
  • A trajectory map of the extreme points is given; according to the number of frames corresponding to the trajectory, the corresponding calibration object (such as the wristband) is located, and the pixel width of the wristband appearing in the image is calculated, giving the ratio of the actual length of the wristband to its pixel width in the image. The width of the wristband can be effectively extracted through Canny edges and the circumscribed bounding box.
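Using the wristband as a scale, the peak-to-valley pixel displacement of the tracking frame converts to a physical depth. The function name and the numbers below are hypothetical illustrations of this ratio:

```python
def compression_depth_mm(peak_y, valley_y, band_width_mm, band_width_px):
    """Convert the pixel displacement between a peak and a valley of the
    tracking frame into millimetres, using the wristband's known physical
    width and its measured pixel width as the scale."""
    mm_per_px = band_width_mm / band_width_px
    return abs(peak_y - valley_y) * mm_per_px

# Hypothetical numbers: a 40 mm wide band spans 80 px in the image,
# and the marker moves 100 px between peak and valley.
print(compression_depth_mm(300, 400, 40.0, 80.0))   # 50.0
```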
  • Step S2, identifying the tracking frame in the wearable device in the image according to at least one frame image in the video frame sequence, specifically includes the steps shown in FIG. 10:
  • The processor converts the original image in the at least one acquired video frame to the HSV color space and extracts the color features in the image; the extracted color features are consistent with the preset color features, which include the preset color features of the background image and the preset color features of the partial image.
  • For example, an image of the bracelet shown in FIG. 4 or FIG. 5 worn by the user is acquired, and the green/red color in the image is extracted.
  • For example, the preset background color feature is a red rectangle or a green rectangle, and the preset color feature of the partial image is at least one of black and white.
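A per-pixel sketch of the HSV-based extraction of the preset red/green background and white/black partial-image colors. The hue, saturation, and value thresholds are assumptions for illustration:

```python
import colorsys

def pixel_color_class(r, g, b):
    """Classify an RGB pixel (0-255 channels) as 'red', 'green', 'white',
    'black', or 'other' in the HSV colour space. Thresholds are assumed."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    if v < 0.2:
        return "black"                      # dark regardless of hue
    if s < 0.2:
        return "white" if v > 0.8 else "other"   # desaturated and bright
    if h < 1 / 12 or h > 11 / 12:
        return "red"                        # hue near 0 degrees
    if 1 / 4 < h < 5 / 12:
        return "green"                      # hue around 120 degrees
    return "other"

print(pixel_color_class(220, 30, 30))    # red
print(pixel_color_class(30, 200, 40))    # green
print(pixel_color_class(250, 250, 250))  # white
```

Applying such a test to every pixel yields masks for the background and partial images, which the later steps filter by shape.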
  • The processor filters the colors according to the preset shape and color of the background image and the shape and color of the partial image, and extracts histogram of oriented gradients features and gray-level co-occurrence matrix features.
  • The preset background image is a red rectangular area or a green rectangular area; the preset partial image includes multiple white rectangular areas and a black rectangular area within the red or green rectangular area.
  • One of the multiple white rectangular areas is located at the center of the red or green rectangular area, and the white rectangular areas are at least partially located on at least two opposite sides of the red or green rectangular area; the position of each white square area intersects a median line of the rectangle.
  • target features are further retained according to the following preset features, and interference areas are filtered out:
  • the red or green area is rectangular or square;
  • the red or green area has a white square at its center
  • At least two of the four sides of the red or green rectangular area are provided with white squares on opposite sides; and/or
  • black strip-shaped areas are arranged along the two median lines of the red or green rectangular area, and the black strip-shaped areas extend from the center of the rectangular area to one or more of its four sides.
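The marker layout enumerated above can be rasterised with a small NumPy sketch; all pixel sizes are arbitrary illustrative choices, not dimensions from the disclosure.

```python
import numpy as np

R, W, B = 0, 1, 2  # red/green background, white square, black strip

def make_marker(size: int = 90, sq: int = 10, strip: int = 4) -> np.ndarray:
    """One plausible rendering of the positioning mark."""
    m = np.full((size, size), R, dtype=np.uint8)
    c = size // 2
    # black strips along both median lines, out to the four sides
    m[c - strip // 2 : c + strip // 2, :] = B
    m[:, c - strip // 2 : c + strip // 2] = B
    # white squares on two opposite sides, centred on a median line
    m[c - sq // 2 : c + sq // 2, :sq] = W
    m[c - sq // 2 : c + sq // 2, -sq:] = W
    # white square at the centre (drawn last, on top of the strips)
    m[c - sq // 2 : c + sq // 2, c - sq // 2 : c + sq // 2] = W
    return m

marker = make_marker()
```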
  • the directional gradient histogram feature and the gray level co-occurrence matrix feature are extracted based on the extracted color and graphic features.
  • the Histogram of Oriented Gradient (HOG) feature is a feature descriptor used for object detection in computer vision and image processing.
  • the HOG feature describes an image by computing and accumulating histograms of gradient orientations over local regions of the image.
  • the gray-level co-occurrence matrix is a common method of describing texture by studying the spatial correlation characteristics of gray levels.
  • the gray-level histogram is the statistical result of single pixels on the image having a certain gray level, while the gray-level co-occurrence matrix is obtained by counting the joint occurrences of gray levels at pairs of pixels separated by a certain distance in the image.
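The pair-counting definition above can be written in a few lines of NumPy; this loosely mirrors skimage.feature.graycomatrix for a single offset and is only a sketch.

```python
import numpy as np

def glcm(img: np.ndarray, dx: int = 1, dy: int = 0, levels: int = 4) -> np.ndarray:
    """Count pairs (i, j): a pixel with level i whose neighbour at
    offset (dy, dx) has level j."""
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [3, 3, 3]], dtype=np.uint8)
M = glcm(img)  # horizontal neighbours at distance 1
```

Texture statistics such as contrast, energy, and homogeneity are then derived from the (normalised) matrix M.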
  • the processor uses the directional gradient histogram feature and the gray level co-occurrence matrix feature as an input of the classifier to identify the tracking frame in the wearable device;
  • the classifier is pre-trained; the preset shape and color of the background image and the shape and color of the partial image are used as input when the classifier is trained.
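The disclosure does not name the classifier, so the sketch below substitutes a dependency-free nearest-centroid rule over concatenated feature vectors; in practice a linear SVM is a common choice for HOG-based detection. The toy 4-dimensional features and labels are invented for illustration.

```python
import numpy as np

def train_centroids(features, labels):
    """One centroid per class from training feature vectors."""
    X, y = np.asarray(features, dtype=float), np.asarray(labels)
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    """Label of the nearest class centroid (1 = tracking frame)."""
    x = np.asarray(x, dtype=float)
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# Toy 4-D feature vectors (e.g. truncated HOG + GLCM statistics)
train_X = [[1, 0, 1, 0], [0.9, 0.1, 1.1, 0], [0, 1, 0, 1], [0.1, 0.9, 0, 1.2]]
train_y = [1, 1, 0, 0]   # 1 = tracking frame, 0 = background clutter
model = train_centroids(train_X, train_y)
label = predict(model, [0.95, 0.0, 1.0, 0.1])
```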
  • the method described above in FIG. 6 can be executed by a processor on an electronic device, and the present application provides an electronic device for detecting the depth of a user's pressing action in a scene where the user performs cardiopulmonary resuscitation on a patient.
  • the electronic device includes at least a processor;
  • the processor is configured to perform the following steps:
  • acquiring a video provided by an image acquisition device, the video including multiple images of a video frame sequence generated in time sequence; each of the images includes a wearable device worn near the pressing part when the user presses the patient, and the wearable device is provided with a positioning mark image of the wearable device serving as a tracking frame;
  • the method described in FIG. 6 above can be applied to a hand pressing depth detection system, and the system includes electronic equipment and wearable equipment;
  • the electronic equipment includes:
  • one or more processors;
  • a storage device on which one or more programs are stored, and when the one or more programs are executed by the one or more processors, the one or more processors implement the following steps:
  • acquiring a video provided by an image acquisition device, the video including multiple images of a video frame sequence generated in time sequence; each of the images includes a wearable device worn near the pressing part when the user presses the patient, and the wearable device is provided with a positioning mark image of the wearable device serving as a tracking frame;
  • the method described above in FIG. 6 may be implemented as instructions stored in a non-transitory computer-readable storage medium, and the above steps are implemented when the instructions are executed, that is:
  • acquiring a video provided by an image acquisition device, the video including multiple images of a video frame sequence generated in time sequence; each of the images includes a wearable device worn near the pressing part when the user presses the patient, and the wearable device is provided with a positioning mark image of the wearable device serving as a tracking frame;
  • FIG. 11 shows a schematic structural diagram of a computer system 800 suitable for implementing the control device of the embodiment of the present application.
  • the control device shown in FIG. 11 is only an example, and should not limit the functions and scope of use of this embodiment of the present application.
  • a computer system 800 includes a central processing unit (CPU) 801, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803.
  • in the RAM 803, various programs and data required for the operation of the system 800 are also stored.
  • the CPU 801, ROM 802, and RAM 803 are connected to each other via a bus 804.
  • An input/output (I/O) interface 805 is also connected to the bus 804.
  • the following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, etc.; an output section 807 including a cathode ray tube (CRT), a liquid crystal display (LCD), etc., and a speaker; a storage section 808 including a hard disk, etc.; and a communication section 809 including a network interface card such as a LAN card, a modem, and the like.
  • the communication section 809 performs communication processing via a network such as the Internet.
  • a drive 810 is also connected to the I/O interface 805 as needed.
  • a removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as needed, so that a computer program read therefrom can be installed into the storage section 808 as necessary.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a computer-readable medium, where the computer program includes program codes for executing the methods shown in the flowcharts.
  • the computer program may be downloaded and installed from a network via the communication section 809 and/or installed from the removable medium 811.
  • when the computer program is executed by the central processing unit (CPU) 801, the above-mentioned functions defined in the method of the present application are performed.
  • the computer-readable medium described in this application may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to, an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program codes are carried. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
  • Computer program code for carrying out the operations of this application may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Python, Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present application may be implemented by means of software or by means of hardware.
  • the described units may also be provided in a processor; for example, it may be described as: a processor includes an acquiring unit, a dividing unit, a determining unit, and a selecting unit. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves; for example, the acquiring unit may also be described as "a unit for acquiring picture book images to be processed".
  • the present application also provides a computer-readable medium.
  • the computer-readable medium may be included in the electronic device described in the above embodiments; it may also exist independently without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: reads a target video, the target video being obtained by a fixed shooting device capturing the hand pressing action, with a calibration object worn on the hand; detects the calibration object in the target video to obtain a tracking frame of the calibration object; tracks the tracking frame to obtain the highest point and the lowest point of the calibration object in the pressing direction; and obtains the pressing depth of the hand according to the coordinates of the highest point and the lowest point.
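Once the tracking frame's extreme positions are known, the depth computation described above reduces to simple arithmetic: the difference between the lowest and the highest tracked coordinate in the pressing direction, scaled by the millimetres-per-pixel ratio obtained from the calibration object. The sample coordinates and 0.5 mm/px scale below are illustrative.

```python
import numpy as np

def compression_depth_mm(y_coords, mm_per_px: float) -> float:
    """Depth = (lowest y - highest y) * scale; image y grows downward."""
    y = np.asarray(y_coords, dtype=float)
    return float((y.max() - y.min()) * mm_per_px)

# Tracking-frame vertical positions across one compression cycle (pixels)
ys = [120, 131, 145, 160, 152, 138, 121]
depth = compression_depth_mm(ys, mm_per_px=0.5)  # (160 - 120) * 0.5 = 20.0 mm
```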

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

An electronic device, comprising a processor configured to perform the following steps: acquiring a video provided by an image acquisition device, the video including multiple images of a video frame sequence generated in time sequence, each image including a wearable device worn near the pressed part when a user performs compression on a patient, a positioning mark image of the wearable device being provided on the wearable device to serve as a tracking frame; identifying, according to at least one image in the video frame sequence, the tracking frame on the wearable device in the image; tracking the rise and fall of the position of the tracking frame in the compression direction across the multiple images of the video frame sequence; and determining a depth of the user's compression action according to the rise and fall of the position, and outputting the depth of the compression action to the electronic device and/or the wearable device. The electronic device improves the accuracy of the compression depth.
PCT/CN2021/143106 2021-12-30 2021-12-30 Electronic device, hand compression depth measurement method, system, and wearable device WO2023123214A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2021/143106 WO2023123214A1 (fr) 2021-12-30 2021-12-30 Electronic device, hand compression depth measurement method, system, and wearable device
CN202180005744.0A CN114556446A (zh) 2021-12-30 2021-12-30 Electronic device, hand compression depth detection method, system, and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/143106 WO2023123214A1 (fr) 2021-12-30 2021-12-30 Electronic device, hand compression depth measurement method, system, and wearable device

Publications (1)

Publication Number Publication Date
WO2023123214A1 true WO2023123214A1 (fr) 2023-07-06

Family

ID=81669939

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/143106 WO2023123214A1 (fr) 2021-12-30 2021-12-30 Electronic device, hand compression depth measurement method, system, and wearable device

Country Status (2)

Country Link
CN (1) CN114556446A (fr)
WO (1) WO2023123214A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190374429A1 (en) * 2017-02-28 2019-12-12 Zoll Medical Corporation Force Sensing Implementations in Cardiopulmonary Resuscitation
CN111783702A (zh) * 2020-07-20 2020-10-16 杭州叙简科技股份有限公司 Efficient pedestrian fall detection method based on an image enhancement algorithm and human keypoint localization
CN112292688A (zh) * 2020-06-02 2021-01-29 焦旭 Motion detection method and apparatus, electronic device, and computer-readable storage medium
CN113223389A (zh) * 2021-05-18 2021-08-06 北京大学 CPR self-service training and assessment system based on AR technology
CN113592788A (zh) * 2021-07-14 2021-11-02 河南金芯数联电子科技有限公司 Machine-vision-based CPR compression depth measurement method and system


Also Published As

Publication number Publication date
CN114556446A (zh) 2022-05-27

Similar Documents

Publication Publication Date Title
CN109359592B (zh) Video frame processing method and apparatus, electronic device, and storage medium
WO2018176938A1 (fr) Method and device for extracting the center of an infrared light spot, and electronic device
Tsouri et al. On the benefits of alternative color spaces for noncontact heart rate measurements using standard red-green-blue cameras
US11600008B2 (en) Human-tracking methods, systems, and storage media
CN109840485B (zh) Micro-expression feature extraction method, apparatus, device, and readable storage medium
CN109284737A (zh) Student behavior analysis and recognition system for smart classrooms
CN110084154B (zh) Image rendering method and apparatus, electronic device, and computer-readable storage medium
CN107368806A (zh) Image correction method and apparatus, computer-readable storage medium, and computer device
CN111938622B (zh) Heart rate detection method, apparatus, and system, and readable storage medium
TWI778552B (zh) Motion detection method and apparatus, electronic device, and computer-readable recording medium storing a program
Chen et al. Eliminating physiological information from facial videos
CN105979283A (zh) Video transcoding method and apparatus
CN113326781B (zh) Non-contact anxiety recognition method and apparatus based on facial video
WO2023123214A1 (fr) Electronic device, hand compression depth measurement method, system, and wearable device
CN110279406B (zh) Camera-based contactless pulse rate measurement method and apparatus
CN111708907B (zh) Target person query method, apparatus, device, and storage medium
CN111743524A (zh) Information processing method, terminal, and computer-readable storage medium
JP2019050553A (ja) Image processing device, image providing device, control methods therefor, and program
Ayesha et al. A web application for experimenting and validating remote measurement of vital signs
EP3699865A1 (fr) Three-dimensional facial shape calculation device, three-dimensional facial shape calculation method, and non-transitory computer-readable medium
CN110321782A (zh) System for detecting human characteristic signals
CN112801997B (zh) Image enhancement quality evaluation method, apparatus, electronic device, and storage medium
Zheng et al. Hand-over-face occlusion and distance adaptive heart rate detection based on imaging photoplethysmography and pixel distance in online learning
Huang et al. Accurate and efficient pulse measurement from facial videos on smartphones
CN108509852A (zh) Warehouse monitoring system with good monitoring effect

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21969576

Country of ref document: EP

Kind code of ref document: A1