CN109875556B - Intelligent in-production monitoring system - Google Patents


Info

Publication number
CN109875556B
Authority
CN
China
Prior art keywords
layer
model network
production
data
network
Prior art date
Legal status
Active
Application number
CN201910162216.0A
Other languages
Chinese (zh)
Other versions
CN109875556A (en)
Inventor
宋立国
张金区
罗虎
Current Assignee
Guangzhou Aitingbei Technology Co ltd
Original Assignee
Guangzhou Aitingbei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Aitingbei Technology Co ltd filed Critical Guangzhou Aitingbei Technology Co ltd
Priority to CN201910162216.0A priority Critical patent/CN109875556B/en
Publication of CN109875556A publication Critical patent/CN109875556A/en
Application granted granted Critical
Publication of CN109875556B publication Critical patent/CN109875556B/en

Landscapes

  • Medical Treatment And Welfare Office Work (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses an intelligent labor monitoring system, which comprises a uterine EMG acquisition and processing device in communication connection with an intelligent terminal. The uterine EMG acquisition and processing device acquires electromyographic signals and extracts first uterine contraction signal data from them. The intelligent terminal comprises at least one processor and at least one memory, the processor implementing the steps of an intelligent labor-onset prediction method, which comprise: inputting the first uterine contraction signal data into a labor-onset recognition model network for prediction, thereby outputting a labor-onset state; and inputting the first uterine contraction signal data into a labor-onset time prediction model network for prediction, thereby outputting a predicted labor-onset time. The invention helps an expectant mother follow her own labor state in real time and protects the safety of mother and fetus. The intelligent labor monitoring system can be widely applied in the field of labor monitoring.

Description

Intelligent in-production monitoring system
Technical Field
The invention relates to the field of labor monitoring, and in particular to an intelligent labor monitoring system.
Background
Predicting the onset of labor is of great significance to expectant mothers and to medical staff: accurately judging the signs of imminent labor, preparing for delivery, and entering the delivery room in time are important preconditions for a safe birth. At present, the judgment of imminent labor is mostly made from the mother's own perception of uterine contractions. A first-time mother has no experience and is likely to misjudge, while a mother who has given birth before may have become less sensitive to contractions; both situations impair the judgment. Predicting the mother's signals with the aid of an instrument and an artificial-intelligence method, so that she can follow her own labor state in real time, is therefore of real importance for protecting mother and fetus.
Disclosure of Invention
In order to solve the above technical problems, an object of the present invention is to provide an intelligent labor monitoring system.
An embodiment of the invention provides an intelligent labor monitoring system, which comprises a uterine EMG acquisition and processing device and an intelligent terminal, the uterine EMG acquisition and processing device being in communication connection with the intelligent terminal;
the uterine EMG acquisition and processing device is used for acquiring electromyographic signals and extracting first uterine contraction signal data from the acquired signals;
the intelligent terminal comprises:
at least one processor;
at least one memory for storing at least one program;
when executed by the at least one processor, the at least one program causes the at least one processor to implement the steps of an intelligent labor-onset prediction method, comprising:
performing labor-onset state prediction and/or labor-onset time prediction according to the first uterine contraction signal data;
wherein the labor-onset state prediction comprises:
performing first preprocessing on the acquired first uterine contraction signal data to obtain a first data snapshot to be tested, and inputting the first data snapshot to be tested into a labor-onset recognition model network for prediction, so as to output a labor-onset state;
and the labor-onset time prediction comprises:
performing second preprocessing on the acquired first uterine contraction signal data to obtain a second data snapshot to be tested, inputting the second data snapshot to be tested into a labor-onset time prediction model network for prediction, and outputting a predicted labor-onset time.
Further, the labor-onset recognition model network is obtained by the following model training steps:
constructing a labor-onset recognition model network, acquiring second uterine contraction signal data, and inputting the second uterine contraction signal data into the labor-onset recognition model network for network training;
and/or the labor-onset time prediction model network is obtained by the following model training steps:
constructing a labor-onset time prediction model network, acquiring third uterine contraction signal data, and inputting the third uterine contraction signal data into the labor-onset time prediction model network for network training.
Further, the step of constructing the labor-onset recognition model network specifically comprises:
constructing a first-layer model network, the first-layer model network comprising: a first convolution layer, a first normalization layer and a first activation layer, the output of the first convolution layer being connected to the input of the first activation layer through the first normalization layer;
constructing a second-layer model network, the second-layer model network comprising: a second convolution layer, a second normalization layer, a second activation layer, a first dropout layer and a third convolution layer, the output of the second convolution layer being connected to the input of the third convolution layer through the second normalization layer, the second activation layer and the first dropout layer in sequence, and the output of the first activation layer being connected to the input of the second convolution layer;
constructing a third-layer model network, the third-layer model network comprising a memory-cell layer group formed by connecting a plurality of memory-cell layers in series, the output of the third convolution layer being connected to the input of the memory-cell layer group;
constructing a fourth-layer model network, the fourth-layer model network comprising: a third normalization layer, a third activation layer, a first fully connected layer and a fourth activation layer, the output of the memory-cell layer group being connected to the input of the third normalization layer, and the output of the third normalization layer being connected to the input of the fourth activation layer through the third activation layer and the first fully connected layer in sequence.
Further, the step of inputting the second uterine contraction signal data into the labor-onset recognition model network for network training specifically comprises:
sequentially performing first maximum-value downsampling and first arrangement-and-combination processing on the second uterine contraction signal data to obtain a plurality of first data snapshots;
inputting the first data snapshots and labor-onset labels into the labor-onset recognition model network for first training processing;
wherein the first training processing comprises: inputting a first data snapshot and its labor-onset label into the labor-onset recognition model network to obtain a first feature result, and judging whether the first feature result falls within a first labor-onset feature interval; if so, the first labor-onset feature interval is left unchanged, otherwise the interval is expanded so that the first feature result falls within the expanded first labor-onset feature interval;
and repeating the first training processing step until all the first data snapshots have been used for training.
Further, the step of constructing the labor-onset time prediction model network specifically comprises:
constructing a first-layer model network, the first-layer model network comprising: a fourth convolution layer, a fourth normalization layer and a fifth activation layer, the output of the fourth convolution layer being connected to the input of the fifth activation layer through the fourth normalization layer;
constructing a second-layer model network, the second-layer model network comprising: a fifth convolution layer, a sixth activation layer, a second dropout layer and a sixth convolution layer, the output of the fifth convolution layer being connected to the input of the sixth convolution layer through the fourth normalization layer, the sixth activation layer and the second dropout layer in sequence, and the output of the fifth activation layer being connected to the input of the fifth convolution layer;
constructing a third-layer model network, the third-layer model network comprising a residual-block layer group formed by connecting a plurality of identical second residual-block layers, each second residual-block layer comprising: a fifth normalization layer, a seventh activation layer, a third dropout layer, a seventh convolution layer, a sixth normalization layer, an eighth activation layer, a fourth dropout layer and an eighth convolution layer, the input of the fifth normalization layer being connected to the output of the eighth convolution layer through the seventh activation layer, the third dropout layer, the seventh convolution layer, the sixth normalization layer, the eighth activation layer and the fourth dropout layer in sequence, and the output of the sixth convolution layer being connected to the input of the residual-block layer group;
constructing a fourth-layer model network, the fourth-layer model network comprising: a seventh normalization layer, a ninth activation layer, a second fully connected layer and a tenth activation layer, the output of the residual-block layer group being connected to the input of the seventh normalization layer, and the output of the seventh normalization layer being connected to the input of the tenth activation layer through the ninth activation layer and the second fully connected layer in sequence.
Further, the step of inputting the third uterine contraction signal data into the labor-onset time prediction model network for network training specifically comprises:
sequentially performing second maximum-value downsampling and second arrangement-and-combination processing on the third uterine contraction signal data to obtain a plurality of second data snapshots;
inputting the second data snapshots and labor-onset labels into the labor-onset time prediction model network for second training processing;
wherein the second training processing comprises: inputting a second data snapshot and its labor-onset label into the labor-onset time prediction model network to obtain a second feature result, and judging whether the second feature result falls within a second labor-onset feature interval; if so, the second labor-onset feature interval is left unchanged, otherwise the interval is expanded so that the second feature result falls within the expanded second labor-onset feature interval;
and repeating the second training processing step until all the second data snapshots have been used for training.
Further, the method further comprises the steps of:
acquiring gestational-week data and amniotic-fluid condition data;
and issuing a labor-onset warning signal when the current state is judged to be a labor-onset state according to the gestational-week data and the amniotic-fluid condition data.
Further, the uterine EMG acquisition and processing device is in communication connection with the intelligent terminal via Bluetooth wireless communication.
Further, the intelligent terminal comprises a smartphone, an iPad, a portable computer and/or a desktop computer.
Further, the uterine EMG acquisition and processing device uses a four-electrode EMG signal acquisition device to acquire the electromyographic signals.
The beneficial effects of the invention are as follows: with the embodiments of the invention, only the assistance of an instrument is needed; labor-onset state prediction and/or labor-onset time prediction is performed on the acquired uterine contraction signal data, and the labor-onset state and labor-onset time are predicted by an artificial-intelligence method. This helps an expectant mother follow her own labor state in real time and protects the safety of mother and fetus. Because a labor-onset recognition model and a labor-onset time prediction model are used, the method has the advantages of high accuracy and good timeliness.
Drawings
FIG. 1 is a schematic diagram of a first embodiment of the intelligent labor monitoring system of the present invention;
FIG. 2 is a flowchart of the steps of a first embodiment of the intelligent labor-onset prediction method of the present invention.
Detailed Description
The invention is described in further detail below with reference to the figures and specific embodiments. The step numbers in the following embodiments are provided only for convenience of description; the order of the steps is not limited, and the execution order of the steps in an embodiment may be adapted as understood by those skilled in the art.
As shown in FIG. 1, an embodiment of the present invention provides an intelligent labor monitoring system, which comprises a uterine EMG acquisition and processing device and an intelligent terminal, the uterine EMG acquisition and processing device being in communication connection with the intelligent terminal;
the uterine EMG acquisition and processing device is used for acquiring electromyographic signals and extracting first uterine contraction signal data from the acquired signals;
the intelligent terminal comprises:
at least one processor;
at least one memory for storing at least one program;
when executed by the at least one processor, the at least one program causes the at least one processor to implement the steps of an intelligent labor-onset prediction method; as shown in FIG. 2, the method comprises:
S101, performing labor-onset state prediction according to the first uterine contraction signal data;
S102, performing labor-onset time prediction according to the first uterine contraction signal data.
The labor-onset state prediction comprises:
S1011, performing first preprocessing on the acquired first uterine contraction signal data to obtain a first data snapshot to be tested, inputting the first data snapshot to be tested into the labor-onset recognition model network for prediction, and outputting a labor-onset state.
The labor-onset time prediction comprises:
S1021, performing second preprocessing on the acquired first uterine contraction signal data to obtain a second data snapshot to be tested, inputting the second data snapshot to be tested into the labor-onset time prediction model network for prediction, and outputting the predicted labor-onset time.
Specifically, for labor-onset state prediction, a 10-minute stretch of the acquired first uterine contraction signal data is cut into segments every 5 seconds, giving 120 data segments. First maximum-value downsampling is applied to each segment so that each segment yields 49 values in time order; the values of each segment are then arranged, seven values per row in sequence, into a 7 x 7 matrix. Each such matrix is a first data snapshot to be tested, so 120 first data snapshots to be tested are obtained. The 120 first data snapshots are input into the constructed and trained labor-onset recognition model network, which outputs the predicted labor-onset state; the output can be in labor, not in labor, abnormal data, and so on, and an alert signal is issued to indicate that a prediction result is available. For labor-onset time prediction, the same 10-minute stretch of first uterine contraction signal data is cut into segments every 5 seconds, giving 120 data segments. Second maximum-value downsampling is applied to each segment so that each segment yields 2209 values in time order; the values of each segment are arranged, 47 values per row in sequence, into a 47 x 47 matrix. Each such matrix is a second data snapshot to be tested, so 120 second data snapshots to be tested are obtained. The 120 second data snapshots are input into the constructed and trained labor-onset time prediction model network, which outputs the predicted labor-onset time; the output can be a time value, abnormal data, and so on, and an alert signal is issued to indicate that a prediction result is available.
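As an illustration only, the following Python/NumPy sketch shows one plausible reading of this preprocessing step (segmentation into 5-second pieces, maximum-value downsampling, and arrangement into 7 x 7 or 47 x 47 snapshots); the sampling rate fs and the exact downsampling rule are assumptions, not taken from the patent.
```python
import numpy as np

def max_downsample(segment: np.ndarray, n_out: int) -> np.ndarray:
    """Maximum-value downsampling: split the segment into n_out consecutive
    chunks (time order preserved) and keep the maximum of each chunk."""
    chunks = np.array_split(segment, n_out)
    return np.array([chunk.max() for chunk in chunks])

def build_snapshots(signal: np.ndarray, fs: int, side: int) -> np.ndarray:
    """Cut a 10-minute contraction signal into 5-second segments and turn each
    segment into one (side x side) snapshot: side=7 gives the 49-value snapshots
    for labor-onset recognition, side=47 the 2209-value snapshots for
    labor-onset time prediction."""
    seg_len = 5 * fs                    # samples per 5-second segment
    n_seg = (10 * 60) // 5              # 120 segments in 10 minutes
    snapshots = []
    for i in range(n_seg):
        segment = signal[i * seg_len:(i + 1) * seg_len]
        values = max_downsample(segment, side * side)
        snapshots.append(values.reshape(side, side))   # 7 (or 47) values per row
    return np.stack(snapshots)          # shape: (120, side, side)

# Hypothetical usage with an assumed 1 kHz recording:
# sig = np.random.rand(10 * 60 * 1000)
# recognition_snaps = build_snapshots(sig, fs=1000, side=7)    # (120, 7, 7)
# time_snaps        = build_snapshots(sig, fs=1000, side=47)   # (120, 47, 47)
```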
As can be seen from the above, with this embodiment of the invention an expectant mother can clearly understand her current labor-onset state, prepare to enter the delivery room in time, and avoid wrong decisions about her body's reactions caused by lack of experience; an inexperienced mother no longer has to rely heavily on the guidance of elders. This provides considerable protection for the safety of mother and baby.
Further, as a preferred embodiment of the system, the labor-onset recognition model network is obtained by the following model training steps:
S201, constructing a labor-onset recognition model network, acquiring second uterine contraction signal data, and inputting the second uterine contraction signal data into the labor-onset recognition model network for network training.
Specifically, there are two cases: in one, the labor-onset recognition model network is trained first and the labor-onset time prediction model network afterwards; in the other, the labor-onset time prediction model network is trained first and the labor-onset recognition model network afterwards.
The labor-onset time prediction model network is obtained by the following model training steps:
S202, constructing a labor-onset time prediction model network, acquiring third uterine contraction signal data, and inputting the third uterine contraction signal data into the labor-onset time prediction model network for network training.
Thus, with this embodiment of the invention, the model networks can be trained in a systematic way, yielding sound labor-onset recognition and labor-onset time prediction model networks, which makes the prediction more accurate and of practical value.
Further, as a preferred embodiment of the system, the step S201 of constructing the labor-onset recognition model network specifically comprises:
S2011, constructing a first-layer model network, the first-layer model network comprising: a first convolution layer, a first normalization layer and a first activation layer, the output of the first convolution layer being connected to the input of the first activation layer through the first normalization layer.
Specifically, the first-layer model network comprises a conv convolution layer, a BN normalization layer and an activation layer formed by a ReLU activation function; the convolution kernel of the conv layer is 7 x 120, and the output of the first-layer model network is a high-dimensional feature result.
S2012, constructing a second-layer model network, the second-layer model network comprising: a second convolution layer, a second normalization layer, a second activation layer, a first dropout layer and a third convolution layer, the output of the second convolution layer being connected to the input of the third convolution layer through the second normalization layer, the second activation layer and the first dropout layer in sequence, and the output of the first activation layer being connected to the input of the second convolution layer.
Specifically, the second-layer model network comprises a conv convolution layer, a BN normalization layer, an activation layer formed by a ReLU activation function, a Dropout layer and another conv convolution layer; the kernel of the first conv is 5 x 120, the Dropout rate is set to 50%, and the second convolution kernel is 3 x 60. The input of the second-layer model network is the output of the first-layer model network, and its output is a one-dimensional data feature result.
S2013, constructing a third-layer model network, the third-layer model network comprising a memory-cell layer group formed by connecting a plurality of memory-cell layers in series, the output of the third convolution layer being connected to the input of the memory-cell layer group.
Specifically, the third-layer model network consists of 5 identical LSTM memory-cell layers connected in series, the LSTM output dimension being 60. The input of the third-layer model network is the output of the second-layer model network, and its output is a 60-dimensional feature result.
S2014, constructing a fourth-layer model network, the fourth-layer model network comprising: a third normalization layer, a third activation layer, a first fully connected layer and a fourth activation layer, the output of the memory-cell layer group being connected to the input of the third normalization layer, and the output of the third normalization layer being connected to the input of the fourth activation layer through the third activation layer and the first fully connected layer in sequence.
Specifically, the fourth-layer model network comprises a BN normalization layer, an activation layer formed by a ReLU activation function, an FC fully connected layer and an activation layer formed by a sigmoid activation function. The input of the fourth-layer model network is the output of the third-layer model network, and its output is the output of the labor-onset recognition model network.
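To make the four stages above concrete, the following is a minimal Keras sketch of such a recognition network. It assumes that each 7 x 7 snapshot is fed as 7 time steps of 7 features, that the "7 x 120", "5 x 120" and "3 x 60" specifications denote kernel size x filter count, and that the sigmoid output is a single labor-onset score; none of these choices is fixed by the patent text.
```python
from tensorflow.keras import layers, models

def build_onset_recognition_net(input_shape=(7, 7)):
    """Sketch of the four-stage labor-onset recognition network:
    Conv-BN-ReLU, Conv-BN-ReLU-Dropout-Conv, five stacked LSTM layers,
    then BN-ReLU-Dense with a sigmoid output."""
    inp = layers.Input(shape=input_shape)        # one 7 x 7 snapshot, rows as time steps
    # Stage 1: convolution + batch normalization + ReLU ("7 x 120")
    x = layers.Conv1D(120, kernel_size=7, padding="same")(inp)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Stage 2: convolution + BN + ReLU + 50% dropout + convolution ("5 x 120", "3 x 60")
    x = layers.Conv1D(120, kernel_size=5, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Conv1D(60, kernel_size=3, padding="same")(x)
    # Stage 3: five stacked LSTM (memory-cell) layers with output dimension 60
    for _ in range(4):
        x = layers.LSTM(60, return_sequences=True)(x)
    x = layers.LSTM(60)(x)                       # final LSTM yields a 60-dimensional feature
    # Stage 4: BN + ReLU + fully connected layer + sigmoid output
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    out = layers.Dense(1, activation="sigmoid")(x)   # assumed single labor-onset score
    return models.Model(inp, out)

# model = build_onset_recognition_net()
# model.compile(optimizer="adam", loss="binary_crossentropy")
```
In this sketch the sigmoid score would still have to be mapped to the discrete outputs described above (in labor, not in labor, abnormal data), for example by comparing it against the labor-onset feature interval used during training.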
Further, as a preferred embodiment of the system, the step S201 of inputting the second uterine contraction signal data into the labor-onset recognition model network for network training specifically comprises:
S2015, sequentially performing first maximum-value downsampling and first arrangement-and-combination processing on the second uterine contraction signal data to obtain a plurality of first data snapshots.
Specifically, a 10-minute stretch of the second uterine contraction signal data is taken and cut into segments every 5 seconds, giving 120 data segments. First maximum-value downsampling is applied to each segment so that each segment yields 49 values in time order; the values of each segment are arranged, seven values per row in sequence, into a 7 x 7 matrix. Each such matrix is a first data snapshot, so 120 first data snapshots are obtained.
S2016, inputting the first data snapshots and labor-onset labels into the labor-onset recognition model network for first training processing.
The first training processing comprises: inputting a first data snapshot and its labor-onset label into the labor-onset recognition model network to obtain a first feature result, and judging whether the first feature result falls within a first labor-onset feature interval; if so, the first labor-onset feature interval is left unchanged, otherwise the interval is expanded so that the first feature result falls within the expanded first labor-onset feature interval.
S2017, the first training processing step is repeated until all the first data snapshots have been used for training.
Specifically, the 120 first data snapshots are input in time order into the labor-onset recognition model network for first training processing, and step S201 (inputting second uterine contraction signal data into the labor-onset recognition model network for network training) is repeated until the output accuracy of the labor-onset recognition model meets a preset condition (e.g., 95%); at that point the current first labor-onset feature interval is the optimal interval parameter of the labor-onset recognition model network. Through this training procedure the data gradually converge toward consistent features, and the measured results become more representative and reasonable.
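The interval-expansion rule can be sketched as follows; treating the first feature result as a scalar and the first labor-onset feature interval as a simple (low, high) pair are assumptions made for illustration only.
```python
def update_onset_interval(feature_results, interval):
    """Sketch of the interval-expansion rule described above: each feature
    result either falls inside the current labor-onset feature interval
    (interval unchanged) or the interval is widened just enough to contain it."""
    low, high = interval
    for result in feature_results:
        if result < low:
            low = result      # expand the interval downwards
        elif result > high:
            high = result     # expand the interval upwards
        # otherwise the interval already covers the result and stays unchanged
    return (low, high)

# Hypothetical usage on one 10-minute recording (120 feature results):
# results = model.predict(snapshots).ravel()            # shape (120,)
# interval = update_onset_interval(results, interval=(0.45, 0.55))
```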
Further, as a preferred embodiment of the system, the step S202 of constructing the labor-onset time prediction model network specifically comprises:
S2021, constructing a first-layer model network, the first-layer model network comprising: a fourth convolution layer, a fourth normalization layer and a fifth activation layer, the output of the fourth convolution layer being connected to the input of the fifth activation layer through the fourth normalization layer.
Specifically, the first-layer model network comprises a conv convolution layer, a BN normalization layer and an activation layer formed by a ReLU activation function; the convolution kernel of the conv layer is 13 x 256, and the output of the first-layer model network is a high-dimensional feature result.
S2022, constructing a second-layer model network, the second-layer model network comprising: a fifth convolution layer, a sixth activation layer, a second dropout layer and a sixth convolution layer, the output of the fifth convolution layer being connected to the input of the sixth convolution layer through the fourth normalization layer, the sixth activation layer and the second dropout layer in sequence, and the output of the fifth activation layer being connected to the input of the fifth convolution layer.
Specifically, the second-layer model network comprises a conv convolution layer, a BN normalization layer, an activation layer formed by a ReLU activation function, a Dropout layer and another conv convolution layer; the kernel of the first conv is 7 x 256, the Dropout rate is set to 50%, and the second convolution kernel is 3 x 128. A branch connection is added so that this layer network is turned into a residual block, i.e. a max-pooling shortcut mapping is made from its input to its output. The input of the second-layer model network is the output of the first-layer model network, and its output is a one-dimensional data feature result.
S2023, constructing a third-layer model network, the third-layer model network comprising a residual-block layer group formed by connecting a plurality of identical second residual-block layers, each second residual-block layer comprising: a fifth normalization layer, a seventh activation layer, a third dropout layer, a seventh convolution layer, a sixth normalization layer, an eighth activation layer, a fourth dropout layer and an eighth convolution layer, the input of the fifth normalization layer being connected to the output of the eighth convolution layer through the seventh activation layer, the third dropout layer, the seventh convolution layer, the sixth normalization layer, the eighth activation layer and the fourth dropout layer in sequence, and the output of the sixth convolution layer being connected to the input of the residual-block layer group.
Specifically, the third-layer model network consists of 7 identical residual blocks connected in series, each with a max-pooling shortcut mapping. The internal structure of each residual block comprises a BN normalization layer, an activation layer formed by a ReLU activation function, a Dropout layer, a conv convolution layer, another BN normalization layer, another ReLU activation layer, another Dropout layer and another conv convolution layer. The number of conv filters is unchanged within a residual block and is halved after each residual block; each conv kernel has size 3 x 3. The input of the third-layer model network is the output of the second-layer model network, and its output is a high-dimensional feature result.
S2024, constructing a fourth-layer model network, the fourth-layer model network comprising: a seventh normalization layer, a ninth activation layer, a second fully connected layer and a tenth activation layer, the output of the residual-block layer group being connected to the input of the seventh normalization layer, and the output of the seventh normalization layer being connected to the input of the tenth activation layer through the ninth activation layer and the second fully connected layer in sequence.
Specifically, the fourth-layer model network comprises a BN normalization layer, an activation layer formed by a ReLU activation function, an FC fully connected layer and a Tanh activation function. The input of the fourth-layer model network is the output of the third-layer model network, and its output is the output of the labor-onset time prediction model network.
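The following Keras sketch shows one way these residual stages could be assembled. Treating each 47 x 47 snapshot as 47 time steps of 47 features, reading "13 x 256", "7 x 256" and "3 x 128" as kernel size x filter count, using a 1 x 1 convolution projection on each shortcut (needed because the filter count halves from block to block), and producing a single tanh-scaled time output are all assumptions for illustration, not details fixed by the patent.
```python
from tensorflow.keras import layers, models

def residual_block(x, filters, drop=0.5):
    """Pre-activation residual block as described above:
    BN-ReLU-Dropout-Conv, BN-ReLU-Dropout-Conv, plus a shortcut.
    The 1 x 1 projection on the shortcut is an assumption, required so the
    shortcut can be added when the filter count changes between blocks."""
    shortcut = layers.Conv1D(filters, 1, padding="same")(x)
    y = layers.BatchNormalization()(x)
    y = layers.ReLU()(y)
    y = layers.Dropout(drop)(y)
    y = layers.Conv1D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Dropout(drop)(y)
    y = layers.Conv1D(filters, 3, padding="same")(y)
    return layers.Add()([shortcut, y])

def build_onset_time_net(input_shape=(47, 47)):
    """Sketch of the labor-onset time prediction network: Conv-BN-ReLU,
    a conv/dropout/conv stage wrapped as a residual unit, seven residual
    blocks whose filter count halves each block (128 -> 64 -> ... -> 2),
    then BN-ReLU-Dense with a tanh output."""
    inp = layers.Input(shape=input_shape)            # one 47 x 47 snapshot
    x = layers.Conv1D(256, 13, padding="same")(inp)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    # Second stage as a residual unit with a projection shortcut
    shortcut = layers.Conv1D(128, 1, padding="same")(x)
    y = layers.Conv1D(256, 7, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Dropout(0.5)(y)
    y = layers.Conv1D(128, 3, padding="same")(y)
    x = layers.Add()([shortcut, y])
    # Seven residual blocks, filter count halved after each block
    filters = 128
    for _ in range(7):
        x = residual_block(x, filters)
        filters //= 2
    # Output stage: BN + ReLU + fully connected layer + tanh
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Flatten()(x)
    out = layers.Dense(1, activation="tanh")(x)      # assumed scaled time-to-onset value
    return models.Model(inp, out)

# model = build_onset_time_net()
# model.compile(optimizer="adam", loss="mse")
```
The 1 x 1 projection is only one common way to add tensors whose channel counts differ; the shortcut described in the patent (rendered as "max-pooling") is not specified precisely enough to reproduce literally.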
Further, as a preferred embodiment of the system, the step S202 of inputting the third uterine contraction signal data into the labor-onset time prediction model network for network training specifically comprises:
S2025, sequentially performing second maximum-value downsampling and second arrangement-and-combination processing on the third uterine contraction signal data to obtain a plurality of second data snapshots.
Specifically, a 10-minute stretch of the third uterine contraction signal data is taken and cut into segments every 5 seconds, giving 120 data segments. Second maximum-value downsampling is applied to each segment so that each segment yields 2209 values in time order; the values of each segment are arranged, 47 values per row in sequence, into a 47 x 47 matrix. Each such matrix is a second data snapshot, so 120 second data snapshots are obtained.
S2026, inputting the second data snapshots and labor-onset labels into the labor-onset time prediction model network for second training processing.
The second training processing comprises: inputting a second data snapshot and its labor-onset label into the labor-onset time prediction model network to obtain a second feature result, and judging whether the second feature result falls within a second labor-onset feature interval; if so, the second labor-onset feature interval is left unchanged, otherwise the interval is expanded so that the second feature result falls within the expanded second labor-onset feature interval.
S2027, the second training processing step is repeated until all the second data snapshots have been used for training.
Specifically, the 120 second data snapshots are input in time order into the labor-onset time prediction model network for second training processing, and step S202 (inputting third uterine contraction signal data into the labor-onset time prediction model network for network training) is repeated until the output accuracy of the labor-onset time prediction model meets a preset condition (e.g., 95%); at that point the current second labor-onset feature interval is the optimal interval parameter of the labor-onset time prediction model network.
Further, as a preferred embodiment of the system, the method further comprises the steps of:
S301, acquiring gestational-week data and amniotic-fluid condition data;
S302, issuing a labor-onset warning signal when the current state is judged to be a labor-onset state according to the gestational-week data and the amniotic-fluid condition data.
In particular, these simple checks on the mother's condition help rule out special cases and make the model's prediction more accurate.
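A minimal sketch of such a plausibility check is shown below; the 37-week full-term threshold, the ruptured-membranes flag, and the send_warning() hook are hypothetical, since the patent does not spell out the rule.
```python
def labor_onset_warning(gestational_weeks: float,
                        membranes_ruptured: bool,
                        onset_predicted: bool) -> bool:
    """Combine the model's labor-onset prediction with gestational-week and
    amniotic-fluid data before raising the warning (assumed rule, not the
    patent's exact logic)."""
    plausible = gestational_weeks >= 37 or membranes_ruptured
    return onset_predicted and plausible

# if labor_onset_warning(39.0, membranes_ruptured=False, onset_predicted=True):
#     send_warning()   # hypothetical alert hook on the intelligent terminal
```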
Further, as a preferred embodiment of the system, the uterine EMG acquisition and processing device is in communication connection with the intelligent terminal via Bluetooth wireless communication. Using Bluetooth wireless communication makes data acquisition more convenient.
Further, as a preferred embodiment of the system, the intelligent terminal comprises a smartphone, an iPad, a portable computer and a desktop computer. Using a mobile phone or computer as the intelligent terminal improves the general applicability of the invention.
Further, as a preferred embodiment of the system, the uterine EMG acquisition and processing device uses a four-electrode EMG signal acquisition device to acquire the electromyographic signals. A four-electrode EMG acquisition device can obtain more accurate uterine contraction signals, which improves the prediction accuracy.
The contents of the above system embodiments all apply to the present device embodiment; the functions specifically implemented by the present device embodiment are the same as those of the above system embodiments, and the beneficial effects achieved are also the same.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. An intelligent labor monitoring system, characterized by comprising a uterine EMG acquisition and processing device and an intelligent terminal, the uterine EMG acquisition and processing device being in communication connection with the intelligent terminal;
the uterine EMG acquisition and processing device is used for acquiring electromyographic signals and extracting first uterine contraction signal data from the acquired signals;
the intelligent terminal comprises:
at least one processor;
at least one memory for storing at least one program;
wherein the at least one program, when executed by the at least one processor, causes the at least one processor to implement the steps of an intelligent labor-onset prediction method, comprising:
performing labor-onset state prediction and/or labor-onset time prediction according to the first uterine contraction signal data;
wherein the labor-onset state prediction comprises:
performing first preprocessing on the acquired first uterine contraction signal data to obtain a first data snapshot to be tested, and inputting the first data snapshot to be tested into a labor-onset recognition model network for prediction, so as to output a labor-onset state;
the labor-onset time prediction comprises:
performing second preprocessing on the acquired first uterine contraction signal data to obtain a second data snapshot to be tested, inputting the second data snapshot to be tested into a labor-onset time prediction model network for prediction, and outputting a predicted labor-onset time;
the labor-onset recognition model network is obtained by the following model training steps:
constructing a labor-onset recognition model network, acquiring second uterine contraction signal data, and inputting the second uterine contraction signal data into the labor-onset recognition model network for network training;
and/or the labor-onset time prediction model network is obtained by the following model training steps:
constructing a labor-onset time prediction model network, acquiring third uterine contraction signal data, and inputting the third uterine contraction signal data into the labor-onset time prediction model network for network training;
the step of constructing the labor-onset time prediction model network specifically comprises:
constructing a first-layer model network, the first-layer model network comprising: a fourth convolution layer, a fourth normalization layer and a fifth activation layer, the output of the fourth convolution layer being connected to the input of the fifth activation layer through the fourth normalization layer;
constructing a second-layer model network, the second-layer model network comprising: a fifth convolution layer, a sixth activation layer, a second dropout layer and a sixth convolution layer, the output of the fifth convolution layer being connected to the input of the sixth convolution layer through the fourth normalization layer, the sixth activation layer and the second dropout layer in sequence, and the output of the fifth activation layer being connected to the input of the fifth convolution layer;
constructing a third-layer model network, the third-layer model network comprising a residual-block layer group formed by connecting a plurality of identical second residual-block layers, each second residual-block layer comprising: a fifth normalization layer, a seventh activation layer, a third dropout layer, a seventh convolution layer, a sixth normalization layer, an eighth activation layer, a fourth dropout layer and an eighth convolution layer, the input of the fifth normalization layer being connected to the output of the eighth convolution layer through the seventh activation layer, the third dropout layer, the seventh convolution layer, the sixth normalization layer, the eighth activation layer and the fourth dropout layer in sequence, and the output of the sixth convolution layer being connected to the input of the residual-block layer group;
constructing a fourth-layer model network, the fourth-layer model network comprising: a seventh normalization layer, a ninth activation layer, a second fully connected layer and a tenth activation layer, the output of the residual-block layer group being connected to the input of the seventh normalization layer, and the output of the seventh normalization layer being connected to the input of the tenth activation layer through the ninth activation layer and the second fully connected layer in sequence.
2. The intelligent labor monitoring system according to claim 1, wherein the step of constructing the labor-onset recognition model network specifically comprises:
constructing a first-layer model network, the first-layer model network comprising: a first convolution layer, a first normalization layer and a first activation layer, the output of the first convolution layer being connected to the input of the first activation layer through the first normalization layer;
constructing a second-layer model network, the second-layer model network comprising: a second convolution layer, a second normalization layer, a second activation layer, a first dropout layer and a third convolution layer, the output of the second convolution layer being connected to the input of the third convolution layer through the second normalization layer, the second activation layer and the first dropout layer in sequence, and the output of the first activation layer being connected to the input of the second convolution layer;
constructing a third-layer model network, the third-layer model network comprising a memory-cell layer group formed by connecting a plurality of memory-cell layers in series, the output of the third convolution layer being connected to the input of the memory-cell layer group;
constructing a fourth-layer model network, the fourth-layer model network comprising: a third normalization layer, a third activation layer, a first fully connected layer and a fourth activation layer, the output of the memory-cell layer group being connected to the input of the third normalization layer, and the output of the third normalization layer being connected to the input of the fourth activation layer through the third activation layer and the first fully connected layer in sequence.
3. The intelligent labor monitoring system according to claim 1, wherein the step of inputting the second uterine contraction signal data into the labor-onset recognition model network for network training specifically comprises:
sequentially performing first maximum-value downsampling and first arrangement-and-combination processing on the second uterine contraction signal data to obtain a plurality of first data snapshots;
inputting the first data snapshots and labor-onset labels into the labor-onset recognition model network for first training processing;
wherein the first training processing comprises: inputting a first data snapshot and its labor-onset label into the labor-onset recognition model network to obtain a first feature result, and judging whether the first feature result falls within a first labor-onset feature interval; if so, the first labor-onset feature interval is left unchanged, otherwise the interval is expanded so that the first feature result falls within the expanded first labor-onset feature interval;
and repeating the first training processing step until all the first data snapshots have been used for training.
4. The intelligent labor monitoring system according to claim 1, wherein the step of inputting the third uterine contraction signal data into the labor-onset time prediction model network for network training specifically comprises:
sequentially performing second maximum-value downsampling and second arrangement-and-combination processing on the third uterine contraction signal data to obtain a plurality of second data snapshots;
inputting the second data snapshots and labor-onset labels into the labor-onset time prediction model network for second training processing;
wherein the second training processing comprises: inputting a second data snapshot and its labor-onset label into the labor-onset time prediction model network to obtain a second feature result, and judging whether the second feature result falls within a second labor-onset feature interval; if so, the second labor-onset feature interval is left unchanged, otherwise the interval is expanded so that the second feature result falls within the expanded second labor-onset feature interval;
and repeating the second training processing step until all the second data snapshots have been used for training.
5. The intelligent labor monitoring system according to claim 1, wherein the method further comprises the steps of:
acquiring gestational-week data and amniotic-fluid condition data;
and issuing a labor-onset warning signal when the current state is judged to be a labor-onset state according to the gestational-week data and the amniotic-fluid condition data.
6. The intelligent labor monitoring system according to any one of claims 1-5, wherein the uterine EMG acquisition and processing device is in communication connection with the intelligent terminal via Bluetooth wireless communication.
7. The intelligent labor monitoring system according to any one of claims 1-5, wherein the intelligent terminal comprises a smartphone, an iPad, a portable computer and/or a desktop computer.
8. The intelligent labor monitoring system according to any one of claims 1-5, wherein the uterine EMG acquisition and processing device uses a four-electrode EMG signal acquisition device to acquire the electromyographic signals.
CN201910162216.0A 2019-03-05 2019-03-05 Intelligent in-production monitoring system Active CN109875556B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910162216.0A CN109875556B (en) 2019-03-05 2019-03-05 Intelligent in-production monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910162216.0A CN109875556B (en) 2019-03-05 2019-03-05 Intelligent in-production monitoring system

Publications (2)

Publication Number Publication Date
CN109875556A CN109875556A (en) 2019-06-14
CN109875556B true CN109875556B (en) 2021-08-31

Family

ID=66930546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910162216.0A Active CN109875556B (en) 2019-03-05 2019-03-05 Intelligent in-production monitoring system

Country Status (1)

Country Link
CN (1) CN109875556B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111667916A (en) * 2020-06-09 2020-09-15 复旦大学 Machine learning-based antenatal uterine contraction judging system
CN112613581B (en) * 2020-12-31 2023-10-13 广州大学华软软件学院 Image recognition method, system, computer equipment and storage medium
CN113100711B (en) * 2021-04-08 2021-11-16 南通市妇幼保健院 Method and system for real-time tracking of health condition of parturient


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6898460B2 (en) * 2002-01-15 2005-05-24 General Electric Company Method and apparatus for uterine contraction monitoring using linear predictive modeling of abdominal surface EMG signals
WO2008073491A1 (en) * 2006-12-11 2008-06-19 University Of Florida Research Foundation, Inc. System and method for analyzing progress of labor and preterm labor

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104812314A (en) * 2012-11-26 2015-07-29 辉凌公司 Method and system for diagnosing uterine contraction levels using image analysis
CN107307845A (en) * 2016-04-27 2017-11-03 广州翠格医疗技术有限公司 A kind of natural labor probabilistic forecasting devices and methods therefor
WO2019016759A1 (en) * 2017-07-19 2019-01-24 Bloom Technologies NV Monitoring uterine activity and assessing pre-term birth risk

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bassam Moslem et al., "Classification of Multichannel Uterine EMG Signals by Using Unsupervised Competitive Learning," 2011 IEEE Workshop on Signal Processing Systems (SiPS), Dec. 2011, pp. 267-272. *

Also Published As

Publication number Publication date
CN109875556A (en) 2019-06-14

Similar Documents

Publication Publication Date Title
CN109875556B (en) Intelligent in-production monitoring system
Cömert et al. Fetal hypoxia detection based on deep convolutional neural network with transfer learning approach
Georgoulas et al. Investigating pH based evaluation of fetal heart rate (FHR) recordings
Keceli et al. Development of a recurrent neural networks-based calving prediction model using activity and behavioral data
Miao et al. Cardiotocographic diagnosis of fetal health based on multiclass morphologic pattern predictions using deep learning classification
Imran Molla et al. Cardiotocogram data classification using random forest based machine learning algorithm
Spilka et al. Discriminating normal from “abnormal” pregnancy cases using an automated fhr evaluation method
Keith et al. 4 Development, evaluation and validation of an intelligent system for the management of labour
CN115170000A (en) Remote monitoring method and system based on electric energy meter communication module
CN108535759A (en) The remote radiation protecting, monitoring method, apparatus and system at million kilowatt nuclear power station
CN109645996B (en) Uterine contraction debilitation monitoring method and system, intelligent terminal and storage medium
CN113100711B (en) Method and system for real-time tracking of health condition of parturient
Almeida et al. Electrohysterography extracted features dependency on anthropometric and pregnancy factors
CN110208702A (en) A kind of lithium battery ultrasound information feature extraction method neural network based
CN112022172A (en) Pressure detection method and device based on multi-modal physiological data
Suwarno Performance evaluation of artificial neural network classifiers for predicting cesarean sections
CN116473556A (en) Emotion calculation method and system based on multi-site skin physiological response
CN116385925A (en) Intelligent safety management method and system for production site
CN115758240A (en) Livestock health state intelligent classification method, device, equipment and storage medium
Anggraeny et al. Modeled early detection of pregnancy risk based on Poedji Rochjati score card using relief and neural network
Kapila et al. Optimizing fetal health prediction: Ensemble modeling with fusion of feature selection and extraction techniques for cardiotocography data
CN112370039A (en) Classroom quality detection method and device based on intelligent classroom
Devoe et al. Predicting the duration of the first stage of spontaneous labor using a neural network
de Vries et al. Contrastive predictive coding for anomaly detection of fetal health from the cardiotocogram
CN111026608A (en) Oracle database performance diagnosis method and system, equipment and medium thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant