CN112101100A - Self-service biological specimen collecting system and method - Google Patents

Self-service biological specimen collecting system and method

Info

Publication number
CN112101100A
CN112101100A (application CN202010783555.3A; granted as CN112101100B)
Authority
CN
China
Prior art keywords
playing
action
wiping
personnel
standard
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010783555.3A
Other languages
Chinese (zh)
Other versions
CN112101100B (en)
Inventor
高鹏飞
张景全
孙启东
孟祥瑞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anshan Jizhichuangxin Science And Technology Co ltd
Original Assignee
Anshan Jizhichuangxin Science And Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anshan Jizhichuangxin Science And Technology Co ltd filed Critical Anshan Jizhichuangxin Science And Technology Co ltd
Priority to CN202010783555.3A priority Critical patent/CN112101100B/en
Publication of CN112101100A publication Critical patent/CN112101100A/en
Application granted granted Critical
Publication of CN112101100B publication Critical patent/CN112101100B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/20 Education
    • G06Q50/205 Education administration or guidance
    • G06Q50/2057 Career enhancement or continuing education service
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 Animation
    • G06T13/80 2D [Two Dimensional] animation, e.g. using sprites
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Tourism & Hospitality (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention provides a self-service biological specimen collection system and method. The collection system comprises a case, an industrial personal computer, a camera, a display, an identity card reader and a bar code printer. The industrial personal computer is placed in the case, the camera is installed at the upper end of the front of the case, and the display, identity card reader and bar code printer are installed at the front of the case. Identity information is entered by reading an identity card or typing a name, and an information bar code is printed by the bar code printer. The display plays a teaching animation for biological sample collection; the system camera records the whole sampling process; and each frame captured by the camera is analyzed to judge whether the person's action is a standard collection action. By playing the teaching video, entering personnel are guided through biological sample collection, while the system camera records their collection actions and the system software judges those actions in real time, helping to ensure the validity of the collected biological samples.

Description

Self-service biological specimen collecting system and method
Technical Field
The invention relates to the technical field of sample collection, and in particular to a self-service biological sample collection system and collection method.
Background
In the pig-raising industry, in order to strictly standardize personnel management, biological samples must be collected from personnel before they enter a pig farm. At present, this collection work, at home and abroad, relies on staff demonstrating the collection actions to on-site personnel. The invention uses computer teaching assisted by an artificial intelligence algorithm to replace this manual work, improving working efficiency and saving cost.
Disclosure of Invention
In order to solve the technical problems in the background art, the invention provides a biological sample self-service collection system and a biological sample self-service collection method.
In order to achieve the purpose, the invention adopts the following technical scheme:
a self-service biological specimen collecting system comprises a case, an industrial personal computer, a camera, a display, an identity card reader and a bar code printer; the industrial personal computer is placed in the case, the upper end of the front part of the case is provided with the camera, the front part of the case is provided with the display, the identity card reader and the bar code printer, and before personnel enter the pig farm, the camera, the display, the identity card reader and the bar code printer are all electrically connected with the industrial personal computer, and biological samples of the personnel entering the farm are collected through the collection system;
the acquisition system comprises an input/output module, a playing module, a recording module and an identification module;
an input-output module: inputting an identity card or manually inputting a name through an identity card reader, and printing an information bar code through a bar code printer;
a playing module: playing a biological sample sampling teaching animation of a person through a display;
a recording module: storing the sampling process of the whole personnel through a camera of the system;
an identification module: and judging whether the action is a standard acquisition action or not through the data of each frame acquired by the system camera.
The collection method of the self-service biological specimen collection system is characterized in that before a person enters a pig farm, the collection system collects biological specimens of the person entering the pig farm, and the collection method comprises the following steps:
step one, inputting an identity card through an identity card reader or manually inputting name authentication identity information;
step two, printing an information bar code, prompting the person to enter the sampling area, and dispensing gloves for wiping;
step three, the tested person stands to the designated position in front of the camera;
step four, playing the biological sample sampling teaching animation with matching voice; the tested person performs the actions according to the teaching animation, the collection system captures video through the camera, and the industrial personal computer stores the video and judges whether the tested person's actions are effective:
1) playing the standard action animation for wiping the hair, with matching voice, the playing time meeting the required hair-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
2) playing the standard action animation for wiping the face (covering the forehead, cheeks and mouth), with matching voice, the playing time meeting the required face-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
3) playing the standard action animation for wiping the ears (covering the ears and behind the ears), with matching voice, the playing time meeting the required ear-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
4) playing the standard action animation for wiping the neck (covering the neck and the back of the neck), with matching voice, the playing time meeting the required neck-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
5) playing the standard action animation for wiping bare skin (covering the arms, calves and other exposed areas), with matching voice, the playing time meeting the required arm- and calf-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
6) playing the standard action animation for wiping the hands (covering the palms, backs of the hands and nail seams), with matching voice, the playing time meeting the required hand-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
7) playing the standard action animation for wiping the front and back of the jacket, with matching voice, the playing time meeting the required jacket-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
step five, if all of the tested person's actions are judged effective, prompting the person to place the sampling gloves;
step six, the wiping gloves are sent for inspection; the person may enter only after passing the inspection.
In the fourth step, the method for judging whether the action of the tested person is effective comprises the following steps:
step 401, generating human skeleton maps from the captured video using the OpenPose open source library;
step 402, establishing a model of motion recognition:
1) making a data set: two feature data sets are made. The first: 1000 videos of standard actions are converted into sequences of skeleton images using the OpenPose open source library; from the skeleton images generated for each video, 30 standard images corresponding to the action category are selected and labeled; this is repeated for all 1000 videos to form the data set. The second: the angles and distances between joint points are calculated to form a sequence of one-dimensional feature vectors;
2) establishing a model: a fusion model of a deep-learning BP network and a convolutional neural network is used;
3) training the model: the picture training set is fed into the convolutional model, the shared parameters of the model are trained, and the parameters are tuned until the loss function converges; training is complete at an accuracy above 97%. The sequence of feature vectors is fed into the BP network for training;
step 403, inputting each frame of the sampled motion video into the motion recognition model, and determining whether the series of motions are valid.
Compared with the prior art, the invention has the beneficial effects that:
1) The system uses computer teaching assisted by an artificial intelligence algorithm to replace manual work: entering personnel are guided through biological sampling by the teaching video, while the system camera records their collection actions and the system software judges those actions in real time, helping to ensure the validity of the collected biological samples, thereby improving working efficiency and saving cost.
2) The two models are fused, and their two results are combined into a single result to judge whether the action is correct, which improves model accuracy and prevents outlier data from interfering with the judgment.
Drawings
FIG. 1 is a schematic diagram of a self-service biological specimen collection system according to the present invention;
FIG. 2 is a screenshot of the standard action animation for wiping the hair according to the present invention;
FIG. 3 is a screenshot of the standard action animation for wiping the face according to the present invention;
FIG. 4 is a screenshot of the standard action animation for wiping the ears according to the present invention;
FIG. 5 is a screenshot of the standard action animation for wiping the neck according to the present invention;
FIG. 6 is a screenshot of the standard action animation for wiping bare skin according to the present invention;
FIG. 7 is a screenshot of the standard action animation for wiping the hands according to the present invention;
FIG. 8 is a screenshot of the standard action animation for wiping the jacket according to the present invention;
FIG. 9 is a diagram of a human skeleton generated using the OpenPose open source library developed by Carnegie Mellon University (CMU);
FIG. 10 is a schematic diagram of a BP network;
FIG. 11 is a diagram of a convolutional neural network;
FIG. 12 is a feature diagram of an action;
FIG. 13 is a diagram of a human skeleton depicting coordinates of joint points of the human skeleton;
FIG. 14 is a graph of a loss function;
fig. 15 is a graph of accuracy.
In the figure: 1: camera; 2: display; 3: identity card reader; 4: bar code printer; 5: industrial personal computer; 6: case.
Detailed Description
The following detailed description of the present invention will be made with reference to the accompanying drawings.
As shown in fig. 1, a self-service biological specimen collection system comprises a case 6, an industrial personal computer 5, a camera 1, a display 2, an identity card reader 3 and a bar code printer 4. The industrial personal computer 5 is placed in the case 6, the camera 1 is installed at the upper end of the front of the case 6, and the display 2, the identity card reader 3 and the bar code printer 4 are installed at the front of the case 6; the camera 1, display 2, identity card reader 3 and bar code printer 4 are all electrically connected with the industrial personal computer 5. Before entering the pig farm, personnel undergo biological sample collection through the collection system.
The acquisition system comprises an input/output module, a playing module, a recording module and an identification module;
an input-output module: inputting an identity card or manually inputting a name through an identity card reader 3, and printing an information bar code through a bar code printer 4;
a playing module: playing a biological sample sampling teaching animation of a person through the display 2;
a recording module: the sampling process of the whole personnel is stored through a camera 1 of the system;
an identification module: whether the action is a standard acquisition action is judged through the data of each frame acquired by the system camera 1.
The collection method of the self-service biological specimen collection system is characterized in that before a person enters a pig farm, the collection system collects biological specimens of the person entering the pig farm, and the collection method comprises the following steps:
step one, inputting an identity card or manually inputting name authentication identity information through an identity card reader 3;
step two, printing an information bar code, prompting the person to enter the sampling area, and dispensing gloves for wiping;
step three, the tested person stands to the designated position in front of the camera 1;
step four, as shown in figs. 2-8, playing the biological sample sampling teaching animation with matching voice; the tested person performs the actions according to the teaching animation, the collection system captures video through the camera 1, and the industrial personal computer 5 stores the video and judges whether the tested person's actions are effective:
1) playing the standard action animation for wiping the hair, with matching voice, the playing time meeting the required hair-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
2) playing the standard action animation for wiping the face (covering the forehead, cheeks and mouth), with matching voice, the playing time meeting the required face-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
3) playing the standard action animation for wiping the ears (covering the ears and behind the ears), with matching voice, the playing time meeting the required ear-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
4) playing the standard action animation for wiping the neck (covering the neck and the back of the neck), with matching voice, the playing time meeting the required neck-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
5) playing the standard action animation for wiping bare skin (covering the arms, calves and other exposed areas), with matching voice, the playing time meeting the required arm- and calf-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
6) playing the standard action animation for wiping the hands (covering the palms, backs of the hands and nail seams), with matching voice, the playing time meeting the required hand-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
7) playing the standard action animation for wiping the front and back of the jacket, with matching voice, the playing time meeting the required jacket-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
step five, if all of the tested person's actions are judged effective, prompting the person to place the sampling gloves;
step six, the wiping gloves are sent for inspection; the person may enter only after passing the inspection.
In the fourth step, the method for judging whether the action of the tested person is effective comprises the following steps:
step 401, generating human skeleton maps from the captured video using the OpenPose open source library;
step 402, establishing a model of motion recognition:
1) making a data set: two feature data sets are made. The first: human skeleton images are generated from the captured videos using the OpenPose open source library developed by Carnegie Mellon University (CMU); as shown in fig. 9, from the skeleton images generated for each video, the standard pictures corresponding to the action category are selected and labeled, and this labeling is repeated 1000 times to form the data set. The second: the angles and distances between joint points are calculated to form a sequence of one-dimensional feature vectors;
2) establishing a model: a fusion model of a deep-learning BP network and a convolutional neural network is used;
the principle of the BP network is shown in fig. 10, the convolutional neural network is shown in fig. 11, each neuron operator of the neural network solves a linear regression problem, and a nonlinear classification problem is realized through an algorithm among a plurality of neurons.
The model is constructed as follows:
Extracting feature values: the 3D coordinates of the human skeleton are obtained through a dedicated camera, and certain actions are characterized by a combination of angles and distances between selected joint points, see fig. 12.
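The patent describes these features only in prose. A minimal sketch of the idea in Python, under the assumption that each joint is a 3-D coordinate tuple and that the triples (for angles) and pairs (for distances) are chosen per action; all function and parameter names here are illustrative, not from the patent:

```python
import math

def joint_distance(p, q):
    """Euclidean distance between two 3-D joint points."""
    return math.dist(p, q)

def joint_angle(a, b, c):
    """Angle at joint b (degrees) between the segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    cos_t = max(-1.0, min(1.0, dot / (n1 * n2)))  # clamp rounding error
    return math.degrees(math.acos(cos_t))

def action_features(joints, triples, pairs):
    """Concatenate joint angles and distances into one 1-D feature vector."""
    feats = [joint_angle(joints[i], joints[j], joints[k]) for i, j, k in triples]
    feats += [joint_distance(joints[i], joints[j]) for i, j in pairs]
    return feats
```

For a wiping action one might, for example, measure the elbow angle (shoulder, elbow, wrist) together with the wrist-to-head distance over time.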
3) Training the model: the picture training set is fed into the convolutional model, the shared parameters of the model are trained, and the parameters are tuned until the loss function converges; training is complete at an accuracy above 97%, as shown in figs. 14-15. The sequence of feature vectors is fed into the BP network for training;
The mathematical principle of training the model is as follows:
The input data X1, X2, ..., Xn and the connection-edge weights W1, W2, ..., Wn yield the neuron output of the next layer as:
z = W1X1 + W2X2 + ... + WnXn + b
Since a nonlinear classification problem is being solved, an activation function such as Sigmoid, tanh or ReLU is added to convert the linear function into a nonlinear output. A softmax classifier (a probability function whose outputs sum to 1) is added before the last layer, so the output layer produces a probability for each class. The probability values are compared with the label values set when the data set was made, the variance (loss) is calculated, and the partial derivatives with respect to W and b are computed to back-propagate the error and modify the model weights. The weights are modified repeatedly to obtain the model with the highest accuracy.
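The forward pass just described can be sketched in plain Python without any framework; the layer sizes and values below are made-up illustrations, not parameters from the patent:

```python
import math

def dense(x, W, b):
    """One fully connected layer: z_j = sum_i W[j][i] * x[i] + b[j]."""
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) + bj
            for row, bj in zip(W, b)]

def relu(z):
    """Nonlinear activation: negative values are clipped to zero."""
    return [max(0.0, v) for v in z]

def softmax(z):
    """Class probabilities that sum to 1 (numerically stabilized)."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]
```

A tiny usage example: `softmax(dense(relu(dense(x, W1, b1)), W2, b2))` gives the per-class probabilities that are compared against the labels during training.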
And inputting the characteristic value to obtain the probability of action classification to judge the sampling action.
See fig. 11 for the convolutional neural network. Its input takes the form of a picture: the human skeleton map is drawn from the obtained coordinates of the human skeleton joint points, as shown in fig. 13.
The convolutional neural network adds convolutional layers on top of the BP network, making it better suited to models with pictures as input; this reduces the parameter scale and speeds up computation.
Training the convolutional neural network: the outputs of the convolution and down-sampling (pooling) operations are mapped nonlinearly by an activation function such as ReLU. Finally, the same softmax classifier computes output probabilities to compare against the label values, and the weight parameters are updated continuously via the partial derivatives of the variance (loss), in the same way as for the BP network.
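A minimal NumPy sketch of the convolution and down-sampling steps named above; the kernel contents and sizes are illustrative assumptions only:

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution (cross-correlation) over a skeleton image."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: the down-sampling step."""
    h, w = fmap.shape
    h2, w2 = h // size, w // size
    return fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))
```

In a real model the kernel weights would be learned by back-propagation; here they simply illustrate how the feature-map shape shrinks at each stage.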
Two networks are thus obtained. Each input picture corresponds to a group of feature values; the two inputs are fed into their respective models, the probability of each action is computed, and the two results are fused into one result to judge whether the action is correct. (Benefits of model fusion: improved model accuracy and protection against interference from outlier data.)
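The patent only states that the two results are fused into one; a simple averaging rule is one common choice, sketched here with an assumed decision threshold:

```python
def fuse_predictions(p_cnn, p_bp, threshold=0.5):
    """Average the class probabilities of the CNN and the BP network,
    then pick the winning class. Averaging damps an outlier from either
    single model, which is the stated benefit of the fusion."""
    fused = [(a + b) / 2.0 for a, b in zip(p_cnn, p_bp)]
    best = max(range(len(fused)), key=fused.__getitem__)
    return best, fused[best] >= threshold
```

For example, if the CNN is confident in class 0 and the BP network mildly agrees, the fused vector still selects class 0 but with a moderated confidence.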
Step 403, inputting each frame of the sampled motion video into the motion recognition model, and determining whether the series of motions are valid.
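Step 403 can be sketched as a per-frame vote over the fused model's outputs; the 80% acceptance ratio below is an assumed threshold, not a value given in the patent:

```python
def judge_action(frame_probs, action_id, min_ratio=0.8):
    """Accept a series of frames as a valid action when at least
    `min_ratio` of the frames are classified as the expected action."""
    hits = sum(1 for p in frame_probs
               if max(range(len(p)), key=p.__getitem__) == action_id)
    return hits / len(frame_probs) >= min_ratio
```

If the ratio falls short, the system would replay the teaching animation for that action, as described in step four.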
The above embodiments are implemented on the premise of the technical solution of the present invention, and detailed embodiments and specific operation procedures are given, but the scope of the present invention is not limited to the above embodiments. The methods used in the above examples are conventional methods unless otherwise specified.

Claims (3)

1. A self-service biological specimen collection system, characterized in that the collection system comprises a case, an industrial personal computer, a camera, a display, an identity card reader and a bar code printer; the industrial personal computer is placed in the case, the camera is installed at the upper end of the front of the case, and the display, the identity card reader and the bar code printer are installed at the front of the case; the camera, display, identity card reader and bar code printer are all electrically connected with the industrial personal computer; before personnel enter the pig farm, biological samples of the entering personnel are collected through the collection system;
the acquisition system comprises an input/output module, a playing module, a recording module and an identification module;
an input-output module: inputting an identity card or manually inputting a name through an identity card reader, and printing an information bar code through a bar code printer;
a playing module: playing a biological sample sampling teaching animation of a person through a display;
a recording module: storing the sampling process of the whole personnel through a camera of the system;
an identification module: and judging whether the action is a standard acquisition action or not through the data of each frame acquired by the system camera.
2. The collection method of the self-service biological specimen collection system of claim 1, wherein before entering a pig farm, the collection system collects biological specimens of the entering personnel, and the collection method comprises the following steps:
step one, inputting an identity card through an identity card reader or manually inputting name authentication identity information;
step two, printing an information bar code, prompting the person to enter the sampling area, and dispensing gloves for wiping;
step three, the tested person stands to the designated position in front of the camera;
step four, playing the biological sample sampling teaching animation with matching voice; the tested person performs the actions according to the teaching animation, the collection system captures video through the camera, and the industrial personal computer stores the video and judges whether the tested person's actions are effective:
1) playing the standard action animation for wiping the hair, with matching voice, the playing time meeting the required hair-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
2) playing the standard action animation for wiping the face (covering the forehead, cheeks and mouth), with matching voice, the playing time meeting the required face-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
3) playing the standard action animation for wiping the ears (covering the ears and behind the ears), with matching voice, the playing time meeting the required ear-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
4) playing the standard action animation for wiping the neck (covering the neck and the back of the neck), with matching voice, the playing time meeting the required neck-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
5) playing the standard action animation for wiping bare skin (covering the arms, calves and other exposed areas), with matching voice, the playing time meeting the required arm- and calf-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
6) playing the standard action animation for wiping the hands (covering the palms, backs of the hands and nail seams), with matching voice, the playing time meeting the required hand-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
7) playing the standard action animation for wiping the front and back of the jacket, with matching voice, the playing time meeting the required jacket-wiping duration; judging whether the tested person's action is effective: if so, playing the next action, otherwise replaying;
step five, if all of the tested person's actions are judged effective, prompting the person to place the sampling gloves;
step six, the wiping gloves are sent for inspection; the person may enter only after passing the inspection.
3. The collection method of the self-service biological specimen collection system according to claim 2, wherein in the fourth step, the method for judging whether the actions of the tested person are effective comprises the following steps:
step 401, generating human skeleton maps from the captured video using the OpenPose open source library;
step 402, establishing a model of motion recognition:
1) making a data set: two data feature sets were made, one of which: converting 1000 videos of standard actions into a series of skeleton images by using an open source library of Openpos, selecting 30 standard images corresponding to action categories from each video generated skeleton image, and making the standard images into labels, wherein 1000 times of the standard images are required to be processed like the above to form a dataset; the second step is as follows: calculating the angle and distance of each joint point to form a sequence of characteristic one-dimensional vectors;
2) establishing a model: a deep learning BP network and a convolution neural network fusion model are used;
3) training a model: inputting the data of the picture training set into a convolution model, training shared parameters in the model, debugging the parameters until the loss function is converged, and finishing the training with the accuracy rate of more than 97%; inputting a feature vector of a bit sequence into a training model as a BP network;
step 403, inputting each frame of the sampled motion video into the motion recognition model, and determining whether the series of motions are valid.
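The joint-angle and joint-distance computation of "feature set two" in step 402 can be illustrated as follows. This is a sketch under assumed 2-D keypoint coordinates; the keypoint names and the chosen joint triples are illustrative assumptions, not specified by the patent:

```python
import math

# Sketch of "feature set two": per frame, joint angles and joint
# distances are flattened into one 1-D feature vector.

def distance(p, q):
    """Euclidean distance between two 2-D keypoints."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / n))))

def frame_features(kp):
    """kp: dict keypoint name -> (x, y), e.g. from an OpenPose skeleton."""
    return [
        angle(kp["shoulder"], kp["elbow"], kp["wrist"]),  # elbow flexion
        distance(kp["wrist"], kp["neck"]),                # hand-to-neck distance
    ]

# Example frame: arm bent at a right angle, wrist raised toward the neck
kp = {"neck": (0.0, 0.0), "shoulder": (1.0, 0.0),
      "elbow": (2.0, 0.0), "wrist": (2.0, 1.0)}
print(frame_features(kp))  # elbow angle 90.0 degrees, distance sqrt(5)
```

Computing such a vector for every frame yields the sequence of one-dimensional feature vectors that the claim feeds into the BP network.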
CN202010783555.3A 2020-08-06 2020-08-06 Biological specimen self-help collection system and collection method Active CN112101100B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010783555.3A CN112101100B (en) 2020-08-06 2020-08-06 Biological specimen self-help collection system and collection method

Publications (2)

Publication Number Publication Date
CN112101100A true CN112101100A (en) 2020-12-18
CN112101100B CN112101100B (en) 2024-03-15

Family

ID=73750401

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010783555.3A Active CN112101100B (en) 2020-08-06 2020-08-06 Biological specimen self-help collection system and collection method

Country Status (1)

Country Link
CN (1) CN112101100B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114841990A (en) * 2022-05-26 2022-08-02 长沙云江智科信息技术有限公司 Self-service nucleic acid collection method and device based on artificial intelligence

Citations (8)

Publication number Priority date Publication date Assignee Title
US20120022890A1 (en) * 2006-12-13 2012-01-26 Barry Williams Method and apparatus for a self-service kiosk system for collecting and reporting blood alcohol level
CN105738358A (en) * 2016-04-01 2016-07-06 杭州赛凯生物技术有限公司 Drug detection system
CN105868575A (en) * 2016-05-13 2016-08-17 罗守军 Multifunctional self-service machine for body fluid collection
CN106156517A (en) * 2016-07-22 2016-11-23 广东工业大学 The self-service automatic checkout system in a kind of human body basic disease community
CN110265130A (en) * 2019-07-12 2019-09-20 重庆微标惠智医疗信息技术有限公司 A kind of intelligent movable blood sampling management system and method
CN110309801A (en) * 2019-07-05 2019-10-08 名创优品(横琴)企业管理有限公司 A kind of video analysis method, apparatus, system, storage medium and computer equipment
CN110929711A (en) * 2019-11-15 2020-03-27 智慧视通(杭州)科技发展有限公司 Method for automatically associating identity information and shape information applied to fixed scene
CN111275943A (en) * 2020-02-27 2020-06-12 中国人民解放军陆军特色医学中心 Wearable nursing operation flow monitoring devices

Non-Patent Citations (3)

Title
SL Mako et al.: "Microbiological Quality of Packaged Ice from Various Sources in Georgia", Journal of Food Protection, vol. 77, no. 9, pages 1546-1553 *
Luo Shoujun et al.: "Design and application of an intelligent self-service machine for outpatient body-fluid collection", China Modern Doctor, vol. 58, no. 11, pages 170-173 *
Zang Denian: "A brief discussion on biosecurity in pig farms", Livestock and Poultry Industry, vol. 29, no. 11, pages 47-48 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant