CN106650709A - Sensor data-based deep learning step detection method - Google Patents

Sensor data-based deep learning step detection method

Info

Publication number
CN106650709A
CN106650709A (application CN201710052675.4A)
Authority
CN
China
Prior art keywords
data
image
frame
sequence
classification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710052675.4A
Other languages
Chinese (zh)
Inventor
夏春秋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Vision Technology Co Ltd
Original Assignee
Shenzhen Vision Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Vision Technology Co Ltd filed Critical Shenzhen Vision Technology Co Ltd
Priority to CN201710052675.4A priority Critical patent/CN106650709A/en
Publication of CN106650709A publication Critical patent/CN106650709A/en
Withdrawn legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • G06V40/25 Recognition of walking or running movements, e.g. gait recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Psychiatry (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a deep learning footstep detection method based on sensor data. The method comprises four main components: data input, modality transfer, transfer learning and image classification, and proceeds as follows: first, a gait data set is pre-processed using a pre-trained convolutional neural network model and, after noise separation, resized to 229×229; a bounding box is then fitted to crop the pre-processed images; images are extracted using a maximum-frame method, an averaging method and a sequence-analysis method; finally, transfer learning is performed on the extracted images with a pre-trained Inception-v3 model, yielding the classification result. Because a pre-trained network model is adopted, the method saves considerable computing resources and time; by exploiting the concept of transfer learning, it avoids the limitation that other tasks cannot be learned from the many unlabeled data sets; and the classification accuracy obtained is about 90%, more than 12% higher than that of conventional machine-learning methods.

Description

A deep learning footstep detection method based on sensor data
Technical field
The present invention relates to the field of computer vision, and more particularly to a deep learning footstep detection method based on sensor data.
Background technology
With the rapid development of science and technology, convolutional neural networks have become the state of the art in a variety of computer vision tasks. Sensors present in everyday environments produce large amounts of data, providing information for activity recognition and context-aware sensing models. Deep learning methods can extract useful information from raw sensor data and effectively perform classification, recognition and segmentation tasks, but these techniques require large amounts of labeled data to train very deep networks, and for many other tasks no large labeled data sets exist. Moreover, some data types, such as sensor data, are difficult to interpret visually. With a deep learning footstep detection method based on sensor data, the ideas of transfer learning and modality transfer can be exploited: once the sensor data is transferred to the image domain, footstep images can be classified effectively. The method can also be applied to automatic monitoring in smart environments and in health scenarios, for example monitoring of running, sleeping and walking activities, gait-pattern analysis, and biological movement monitoring such as respiration detection and feeding detection.
The present invention proposes a deep learning footstep detection method based on sensor data. It adopts a pre-trained convolutional neural network model: the gait data set is first pre-processed and, after noise separation, resized to 229×229. Next, a bounding box is fitted to crop the pre-processed images. Then, image extraction is performed using the maximum-frame method, the averaging method and sequence analysis. Finally, the extracted images undergo transfer learning with a pre-trained Inception-v3 model, yielding the classification result. Because a pre-trained network model is used, the invention saves considerable computing resources and time; by using the concept of transfer learning, it avoids the limitation that other tasks cannot be learned from the many unlabeled data sets; and the classification accuracy obtained reaches about 90%, more than 12% better than conventional machine-learning methods.
Summary of the invention
In view of the problems that network models are difficult to train and that some data are difficult to interpret visually, it is an object of the present invention to provide a deep learning footstep detection method based on sensor data.
To solve the above problems, the present invention provides a deep learning footstep detection method based on sensor data, whose main contents include:
(1) data input;
(2) modality transfer;
(3) transfer learning;
(4) image classification.
Wherein, the deep learning footstep detection method based on sensor data uses pressure sensor data, a data type that is difficult to interpret visually and for which it is unclear whether visual interpretation is possible at all. The sensor modality is transferred to the visual domain in image form, and a pre-trained deep convolutional neural network is used to recognize the two-dimensional sensor data: the output of the two-dimensional sensor is transferred to a pressure-distribution image, realizing the modality transfer and producing the transferred image data. A pre-trained convolutional neural network then performs transfer learning on the transferred image data, thereby carrying out the footstep detection and recognition task.
Wherein, in the data input, footstep data collected while people walk on a pressure-sensitive matrix is chosen as the gait data set. The data set consists of footstep samples of 13 people; 2-3 footsteps are recorded in each walking sequence of each person, and each person records at least 12 samples. Each walking sequence is an independent data sequence labeled with the ID of a specific person; the IDs define the class labels of the convolutional neural network. In total the data set contains 529 footsteps.
Wherein, the modality transfer comprises pre-processing, place normalization and image extraction. The raw sensor data, i.e. a time series of 120 × 54 two-dimensional pressure maps, is linearly transferred to grayscale images, where each pixel represents a sensing point and brighter colors correspond to higher pressure. A complete footstep consists of a sequence of pressure-map frames, each frame corresponding to one instant of the footstep. Each footstep is segmented along the time dimension, and the individual instants of each footstep are found. The same modality-transfer idea applies equally to other sensor data.
Further, in the pre-processing, each frame is converted to a binary frame and an adaptive threshold is applied to separate the footstep from background noise. For the threshold, the pixel values of a frame are grouped into a histogram with 10 bins, and the threshold is determined as the central value of the bin following the peak bin.
Further, in the place normalization, the maximum bounding box over all frames is found first; it encloses each individual footstep, ensuring that all instants belonging to the same footstep are enclosed by the bounding box. For the same footstep, a bounding box of the same size is used to capture and extract all instants, so that irrelevant parts are cropped away using the bounding box.
Further, in the image extraction, after fitting the bounding box, images are extracted using the maximum-frame method, the averaging method and sequence analysis:
Maximum-frame method: the maximum frame is captured from the frame sequence of each sample, transferred to the corresponding image and labeled with the class ID; in total, 529 modality-transferred images are extracted from the gait data set.
Averaging method: all frames in the sequence of a single sample are averaged, and the image corresponding to the average pixel values is found; the average frame carries the temporal information of all instants of the footstep and helps build a more effective feature set.
Sequence analysis: all frames of a sample's frame sequence are used and transferred to images; each frame carries its original values and provides finer granularity than the two methods above.
The classification results obtained in testing show that the highest accuracy, about 90%, is achieved with the sequence-analysis method.
Wherein, in the transfer learning, the Inception-v3 model is used as the pre-trained convolutional neural network model; the classification layer in the model is removed or used as a feature descriptor, and a new classification layer is added. The input images are then resized to fit the input size of the convolutional neural network (229 × 229), and the activations of the whole network are computed by propagating the input forward through the network.
Further, the Inception-v3 architecture comprises 3 convolutional layers, followed by a pooling layer, 3 more convolutional layers, 10 Inception blocks and a final fully connected layer, 17 layers in total. Transfer learning is performed on the data by the trained network: activations are extracted from the fully connected layer, and each input yields a 2048-dimensional output, interpreted as the descriptor of each frame in the sequence.
Wherein, in the image classification, the pre-trained convolutional neural network model performs transfer learning on the footstep images obtained after the modality transfer. The data sequence, processed by the maximum-frame, average-frame or sequence-analysis method and resized, serves as the input of the network, which finally outputs the ID classification result indicating to which person the footstep image belongs, reaching a recognition accuracy of about 90%. The model of this patent is not limited to pressure sensor data; other sensor data can likewise be used.
Description of the drawings
Fig. 1 is the system flow chart of the deep learning footstep detection method based on sensor data of the present invention.
Fig. 2 is a schematic diagram of the footstep images after modality transfer in the deep learning footstep detection method based on sensor data of the present invention.
Fig. 3 is a schematic diagram of the propagation of the maximum or average frame in the deep learning footstep detection method based on sensor data of the present invention.
Specific embodiment
It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments can be combined with one another. The present invention is described in further detail below in conjunction with the accompanying drawings and specific embodiments.
Fig. 1 is the system flow chart of the deep learning footstep detection method based on sensor data of the present invention. It mainly comprises data input, modality transfer, transfer learning and image classification.
Wherein, in the data input, footstep data collected while people walk on a pressure-sensitive matrix is chosen as the gait data set. The data set consists of footstep samples of 13 people; 2-3 footsteps are recorded in each walking sequence of each person, and each person records at least 12 samples. Each walking sequence is an independent data sequence labeled with the ID of a specific person; the IDs define the class labels of the convolutional neural network. In total the data set contains 529 footsteps.
Wherein, the modality transfer comprises pre-processing, place normalization and image extraction. The raw sensor data, i.e. a time series of 120 × 54 two-dimensional pressure maps, is linearly transferred to grayscale images, where each pixel represents a sensing point and brighter colors correspond to higher pressure. A complete footstep consists of a sequence of pressure-map frames, each frame corresponding to one instant of the footstep. Each footstep is segmented along the time dimension, and the individual instants of each footstep are found. The same modality-transfer idea applies equally to other sensor data.
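The linear transfer of one pressure map to a grayscale image can be sketched as follows (a minimal Python/NumPy illustration; the per-frame min-max scaling is an assumption, since the document does not specify the exact linear mapping):

```python
import numpy as np

def pressure_to_grayscale(frame):
    """Linearly transfer one 120x54 pressure map to an 8-bit grayscale
    image: each pixel is a sensing point, brighter = higher pressure."""
    frame = np.asarray(frame, dtype=np.float64)
    lo, hi = frame.min(), frame.max()
    if hi == lo:                        # flat frame maps to all black
        return np.zeros(frame.shape, dtype=np.uint8)
    return ((frame - lo) / (hi - lo) * 255).astype(np.uint8)

# A footstep is a time series of such frames (synthetic stand-ins here):
step = [np.random.rand(120, 54) for _ in range(5)]
images = [pressure_to_grayscale(f) for f in step]
```

Each footstep then yields a sequence of grayscale frames that the later steps segment and crop.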
Further, in the pre-processing, each frame is converted to a binary frame and an adaptive threshold is applied to separate the footstep from background noise. For the threshold, the pixel values of a frame are grouped into a histogram with 10 bins, and the threshold is determined as the central value of the bin following the peak bin.
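The adaptive-threshold binarization described above can be sketched as follows (Python/NumPy; interpreting "the bin following the peak bin" as the histogram bin immediately after the dominant background bin, which is an assumption about the document's wording):

```python
import numpy as np

def adaptive_threshold(gray):
    """Binarize a grayscale frame: build a 10-bin histogram of pixel
    values and take the threshold as the central value of the bin
    following the peak (background) bin."""
    counts, edges = np.histogram(gray, bins=10)
    peak = int(np.argmax(counts))             # dominant background bin
    nxt = min(peak + 1, len(counts) - 1)      # bin following the peak
    thresh = (edges[nxt] + edges[nxt + 1]) / 2.0   # its central value
    return (gray > thresh).astype(np.uint8)

# Mostly-dark frame with a bright footstep region:
frame = np.zeros((120, 54), dtype=np.uint8)
frame[40:80, 10:40] = 200
binary = adaptive_threshold(frame)
```

The bright footstep pixels survive the threshold while the dark background is suppressed.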
Further, in the place normalization, the maximum bounding box over all frames is found first; it encloses each individual footstep, ensuring that all instants belonging to the same footstep are enclosed by the bounding box. For the same footstep, a bounding box of the same size is used to capture and extract all instants, so that irrelevant parts are cropped away using the bounding box.
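The place normalization can be sketched as follows (Python/NumPy; the helper names are illustrative, not from the patent):

```python
import numpy as np

def max_bounding_box(binary_frames):
    """Find the maximal bounding box over all binary frames of one
    footstep, so every instant is enclosed by the same box."""
    r0 = c0 = np.inf
    r1 = c1 = -np.inf
    for f in binary_frames:
        rows, cols = np.nonzero(f)
        if rows.size == 0:
            continue
        r0, r1 = min(r0, rows.min()), max(r1, rows.max())
        c0, c1 = min(c0, cols.min()), max(c1, cols.max())
    return int(r0), int(r1), int(c0), int(c1)

def crop(frames, box):
    """Crop every frame with the same (equal-sized) bounding box."""
    r0, r1, c0, c1 = box
    return [f[r0:r1 + 1, c0:c1 + 1] for f in frames]

frames = [np.zeros((120, 54), dtype=np.uint8) for _ in range(3)]
frames[0][10:20, 5:15] = 1
frames[1][15:30, 8:20] = 1
frames[2][12:25, 6:18] = 1
box = max_bounding_box(frames)
cropped = crop(frames, box)
```

All instants of the same footstep end up with identical spatial extent, with the irrelevant border cropped away.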
Further, in the image extraction, after fitting the bounding box, images are extracted using the maximum-frame method, the averaging method and sequence analysis:
Maximum-frame method: the maximum frame is captured from the frame sequence of each sample, transferred to the corresponding image and labeled with the class ID; in total, 529 modality-transferred images are extracted from the gait data set.
Averaging method: all frames in the sequence of a single sample are averaged, and the image corresponding to the average pixel values is found; the average frame carries the temporal information of all instants of the footstep and helps build a more effective feature set.
Sequence analysis: all frames of a sample's frame sequence are used and transferred to images; each frame carries its original values and provides finer granularity than the two methods above.
The classification results obtained in testing show that the highest accuracy, about 90%, is achieved with the sequence-analysis method.
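The maximum-frame and averaging methods can be sketched as follows (Python/NumPy; interpreting the "maximum frame" as the frame with the greatest total pressure, an assumption since the selection criterion is not defined here):

```python
import numpy as np

def maximum_frame(frames):
    """Maximum-frame method: pick the frame with the largest total
    pressure as the single representative image of the footstep.
    (Selection criterion assumed; not specified in the document.)"""
    totals = [f.sum() for f in frames]
    return frames[int(np.argmax(totals))]

def average_frame(frames):
    """Averaging method: average all frames of one sample pixel-wise,
    keeping temporal information from every instant of the footstep."""
    return np.mean(np.stack(frames), axis=0)

# Sequence analysis simply keeps every frame as its own image.
frames = [np.full((20, 15), v, dtype=np.float64) for v in (1.0, 3.0, 2.0)]
rep = maximum_frame(frames)
avg = average_frame(frames)
```

Each method produces the image(s) that are later resized and fed to the pre-trained network.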
Wherein, in the transfer learning, the Inception-v3 model is used as the pre-trained convolutional neural network model; the classification layer in the model is removed or used as a feature descriptor, and a new classification layer is added. The input images are then resized to fit the input size of the convolutional neural network (229 × 229), and the activations of the whole network are computed by propagating the input forward through the network.
Further, the Inception-v3 architecture comprises 3 convolutional layers, followed by a pooling layer, 3 more convolutional layers, 10 Inception blocks and a final fully connected layer, 17 layers in total. Transfer learning is performed on the data by the trained network: activations are extracted from the fully connected layer, and each input yields a 2048-dimensional output, interpreted as the descriptor of each frame in the sequence.
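Training a new classification layer on top of the frozen 2048-dimensional descriptors can be sketched as follows (Python/NumPy; the descriptors below are synthetic stand-ins, not real Inception-v3 activations, and the softmax layer trained by gradient descent is an illustrative assumption about the "new classification layer"):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_softmax_layer(X, y, n_classes, lr=0.1, epochs=200):
    """Fit a softmax classification layer on frozen descriptors X
    (one row per image) with integer class labels y."""
    W = np.zeros((X.shape[1], n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                     # one-hot labels
    for _ in range(epochs):
        logits = X @ W + b
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        grad = (p - Y) / len(X)                  # cross-entropy gradient
        W -= lr * X.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

def predict(X, W, b):
    return np.argmax(X @ W + b, axis=1)

# Synthetic stand-ins for 2048-d descriptors of 13 people's footsteps:
n, d, k = 130, 2048, 13
y = np.repeat(np.arange(k), n // k)
X = rng.normal(size=(n, d)) + 5.0 * np.eye(k)[y] @ rng.normal(size=(k, d))
W, b = train_softmax_layer(X, y, k)
acc = (predict(X, W, b) == y).mean()
```

On these well-separated synthetic descriptors the layer fits the training set almost perfectly; the real pipeline would instead feed it the activations extracted from the pre-trained network.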
Wherein, in the image classification, the pre-trained convolutional neural network model performs transfer learning on the footstep images obtained after the modality transfer. The data sequence, processed by the maximum-frame, average-frame or sequence-analysis method and resized, serves as the input of the network, which finally outputs the ID classification result indicating to which person the footstep image belongs, reaching a recognition accuracy of about 90%. The model of this patent is not limited to pressure sensor data; other sensor data can likewise be used.
Fig. 2 is a schematic diagram of the footstep images after modality transfer in the deep learning footstep detection method based on sensor data of the present invention. A complete footstep is a sequence of these pressure-map frames, each frame corresponding to the footstep image at a certain instant, as shown in Fig. 2(a). The maximum frame is captured from the frame sequence of each sample, transferred to the corresponding image and labeled with the class ID; one image is extracted per footstep, 529 such images in total in our data set. All frames in the sequence of a single sample are averaged, and the image corresponding to the average pixel values is found, as shown in Fig. 2(b). The average frame carries the temporal information of all instants of the footstep and helps build a more effective feature set.
Fig. 3 is a schematic diagram of the propagation of the maximum or average frame in the deep learning footstep detection method based on sensor data of the present invention. A convolutional neural network trained on a very large data set is further fine-tuned on a relatively small target data set. The pre-trained convolutional neural network is used for transfer learning by removing the final fully connected layer and using the activations of the last hidden layer as feature descriptors of the input data set. The resulting feature descriptors are then used to train a classification model. Finally, the maximum or average frame serves as the input of the model, and after processing and analysis by the classification model, the ID classification result of the person is obtained.
For those skilled in the art, the present invention is not restricted to the details of the above embodiments, and it can be realized in other concrete forms without departing from the spirit or scope of the present invention. Furthermore, those skilled in the art may make various changes and modifications to the present invention without departing from its spirit and scope, and these improvements and modifications shall also be regarded as falling within the protection scope of the present invention. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.

Claims (10)

1. A deep learning footstep detection method based on sensor data, characterized by mainly comprising: data input (1); modality transfer (2); transfer learning (3); image classification (4).
2. The deep learning footstep detection method based on sensor data according to claim 1, characterized in that pressure sensor data is used, a data type that is difficult to interpret visually and for which it is unclear whether visual interpretation is possible at all; the sensor modality is transferred to the visual domain in image form, and a pre-trained deep convolutional neural network recognizes the two-dimensional sensor data; the output of the two-dimensional sensor is transferred to a pressure-distribution image, realizing the modality transfer and producing the transferred image data; a pre-trained convolutional neural network performs transfer learning on the transferred image data, thereby carrying out the footstep detection and recognition task.
3. The data input (1) according to claim 1, characterized in that footstep data collected while people walk on a pressure-sensitive matrix is chosen as the gait data set; the data set consists of footstep samples of 13 people; 2-3 footsteps are recorded in each walking sequence of each person, and each person records at least 12 samples; each walking sequence is an independent data sequence labeled with the ID of a specific person, the IDs defining the class labels of the convolutional neural network; in total the data set contains 529 footsteps.
4. The modality transfer (2) according to claim 1, characterized by comprising pre-processing, place normalization and image extraction; the raw sensor data, i.e. a time series of 120 × 54 two-dimensional pressure maps, is linearly transferred to grayscale images, where each pixel represents a sensing point and brighter colors correspond to higher pressure; a complete footstep consists of a sequence of pressure-map frames, each frame corresponding to one instant of the footstep; each footstep is segmented along the time dimension and the individual instants of each footstep are found; the same modality-transfer idea applies equally to other sensor data.
5. The pre-processing according to claim 4, characterized in that each frame is converted to a binary frame and an adaptive threshold is applied to separate the footstep from background noise; for the threshold, the pixel values of a frame are grouped into a histogram with 10 bins, and the threshold is determined as the central value of the bin following the peak bin.
6. The place normalization according to claim 4, characterized in that the maximum bounding box over all frames is found first; it encloses each individual footstep, ensuring that all instants belonging to the same footstep are enclosed by the bounding box; for the same footstep, a bounding box of the same size is used to capture and extract all instants, so that irrelevant parts are cropped away using the bounding box.
7. The image extraction according to claim 4, characterized in that after fitting the bounding box, images are extracted using the maximum-frame method, the averaging method and sequence analysis;
the maximum-frame method captures the maximum frame from the frame sequence of each sample, transfers it to the corresponding image and labels it with the class ID, extracting in total 529 modality-transferred images from the gait data set;
the averaging method averages all frames in the sequence of a single sample and finds the image corresponding to the average pixel values, the average frame carrying the temporal information of all instants of the footstep and helping build a more effective feature set;
sequence analysis uses all frames of a sample's frame sequence and transfers them to images, each frame carrying its original values and providing finer granularity than the two methods above;
the classification results obtained in testing show that the highest accuracy, about 90%, is achieved with the sequence-analysis method.
8. The transfer learning (3) according to claim 1, characterized in that the Inception-v3 model is used as the pre-trained convolutional neural network model; the classification layer in the model is removed or used as a feature descriptor, and a new classification layer is added; the input images are then resized to fit the input size of the convolutional neural network (229 × 229), and the activations of the whole network are computed by propagating the input forward through the network.
9. The Inception-v3 model according to claim 8, characterized in that the architecture comprises 3 convolutional layers, followed by a pooling layer, 3 more convolutional layers, 10 Inception blocks and a final fully connected layer, 17 layers in total; transfer learning is performed on the data by the trained network, with activations extracted from the fully connected layer, each input yielding a 2048-dimensional output interpreted as the descriptor of each frame in the sequence.
10. The image classification (4) according to claim 1, characterized in that the pre-trained convolutional neural network model performs transfer learning on the footstep images obtained after the modality transfer; the data sequence obtained by maximum-frame, average-frame or sequence-analysis processing is resized and used as the input of the network, which finally outputs the ID classification result indicating to which person the footstep image belongs, reaching a recognition accuracy of about 90%; the model of this patent is not limited to pressure sensor data, and other sensor data can likewise be used.
CN201710052675.4A 2017-01-22 2017-01-22 Sensor data-based deep learning step detection method Withdrawn CN106650709A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710052675.4A CN106650709A (en) 2017-01-22 2017-01-22 Sensor data-based deep learning step detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710052675.4A CN106650709A (en) 2017-01-22 2017-01-22 Sensor data-based deep learning step detection method

Publications (1)

Publication Number Publication Date
CN106650709A true CN106650709A (en) 2017-05-10

Family

ID=58842438

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710052675.4A Withdrawn CN106650709A (en) 2017-01-22 2017-01-22 Sensor data-based deep learning step detection method

Country Status (1)

Country Link
CN (1) CN106650709A (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108309303A (en) * 2017-12-26 2018-07-24 上海交通大学医学院附属第九人民医院 A kind of wearable freezing of gait intellectual monitoring and walk-aid equipment
CN108415568A (en) * 2018-02-28 2018-08-17 天津大学 The intelligent robot idea control method of complex network is migrated based on mode
CN109800796A (en) * 2018-12-29 2019-05-24 上海交通大学 Ship target recognition methods based on transfer learning
CN109919036A (en) * 2019-01-18 2019-06-21 南京理工大学 Worker's work posture classification method based on time-domain analysis depth network
CN110057522A (en) * 2019-04-12 2019-07-26 西北工业大学 Acceleration signal capture card sample frequency intelligent upgrade method based on deep learning
CN110236533A (en) * 2019-05-10 2019-09-17 杭州电子科技大学 Epileptic seizure prediction method based on the study of more deep neural network migration features
CN110826490A (en) * 2019-11-06 2020-02-21 杭州姿感科技有限公司 Track tracking method and device based on step classification
CN110869942A (en) * 2017-07-10 2020-03-06 通用电气公司 Self-feedback deep learning method and system
CN110892409A (en) * 2017-06-05 2020-03-17 西门子股份公司 Method and apparatus for analyzing images
CN111052129A (en) * 2017-07-28 2020-04-21 美国西门子医学诊断股份有限公司 Deep learning volumetric quantification method and apparatus
CN111220912A (en) * 2020-01-19 2020-06-02 重庆大学 Battery capacity attenuation track prediction method based on transplanted neural network
CN111615703A (en) * 2017-11-21 2020-09-01 祖克斯有限公司 Sensor data segmentation
CN111623797A (en) * 2020-06-10 2020-09-04 电子科技大学 Step number measuring method based on deep learning
CN111753877A (en) * 2020-05-19 2020-10-09 海克斯康制造智能技术(青岛)有限公司 Product quality detection method based on deep neural network transfer learning
CN112513882A (en) * 2018-06-08 2021-03-16 瑞典爱立信有限公司 Methods, devices and computer readable media related to detection of cell conditions in a wireless cellular network
CN112613430A (en) * 2020-12-28 2021-04-06 杭州电子科技大学 Gait recognition method based on deep transfer learning
CN112655001A (en) * 2018-09-07 2021-04-13 爱贝欧汽车系统有限公司 Method and device for classifying objects
CN113138366A (en) * 2020-01-17 2021-07-20 中国科学院声学研究所 Single-vector hydrophone orientation estimation method based on deep migration learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MONIT SHAH SINGH et al.: "Transforming Sensor Data to the Image Domain for Deep Learning - an Application to Footstep Detection", arXiv:1701.01077v1 *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110892409B (en) * 2017-06-05 2023-09-22 西门子股份公司 Method and device for analyzing images
CN110892409A (en) * 2017-06-05 2020-03-17 西门子股份公司 Method and apparatus for analyzing images
CN110869942B (en) * 2017-07-10 2023-05-09 通用电气公司 Self-feed deep learning method and system
CN110869942A (en) * 2017-07-10 2020-03-06 通用电气公司 Self-feedback deep learning method and system
CN111052129A (en) * 2017-07-28 2020-04-21 美国西门子医学诊断股份有限公司 Deep learning volumetric quantification method and apparatus
CN111052129B (en) * 2017-07-28 2024-03-08 美国西门子医学诊断股份有限公司 Deep learning volume quantification method and apparatus
US11798169B2 (en) 2017-11-21 2023-10-24 Zoox, Inc. Sensor data segmentation
CN111615703A (en) * 2017-11-21 2020-09-01 祖克斯有限公司 Sensor data segmentation
CN108309303A (en) * 2017-12-26 2018-07-24 上海交通大学医学院附属第九人民医院 A kind of wearable freezing of gait intellectual monitoring and walk-aid equipment
CN108415568B (en) * 2018-02-28 2020-12-29 天津大学 Robot intelligent idea control method based on modal migration complex network
CN108415568A (en) * 2018-02-28 2018-08-17 天津大学 The intelligent robot idea control method of complex network is migrated based on mode
CN112513882A (en) * 2018-06-08 2021-03-16 瑞典爱立信有限公司 Methods, devices and computer readable media related to detection of cell conditions in a wireless cellular network
CN112513882B (en) * 2018-06-08 2024-09-24 瑞典爱立信有限公司 Methods, devices and computer readable media related to detection of cell conditions in a wireless cellular network
CN112655001A (en) * 2018-09-07 2021-04-13 爱贝欧汽车系统有限公司 Method and device for classifying objects
CN109800796A (en) * 2018-12-29 2019-05-24 上海交通大学 Ship target recognition methods based on transfer learning
CN109919036A (en) * 2019-01-18 2019-06-21 南京理工大学 Worker's work posture classification method based on time-domain analysis depth network
CN110057522A (en) * 2019-04-12 2019-07-26 西北工业大学 Acceleration signal capture card sample frequency intelligent upgrade method based on deep learning
CN110236533A (en) * 2019-05-10 2019-09-17 杭州电子科技大学 Epileptic seizure prediction method based on multi-deep-neural-network transfer feature learning
CN110826490B (en) * 2019-11-06 2022-10-04 杭州姿感科技有限公司 Track tracking method and device based on step classification
CN110826490A (en) * 2019-11-06 2020-02-21 杭州姿感科技有限公司 Track tracking method and device based on step classification
CN113138366A (en) * 2020-01-17 2021-07-20 中国科学院声学研究所 Single-vector hydrophone orientation estimation method based on deep transfer learning
CN113138366B (en) * 2020-01-17 2022-12-06 中国科学院声学研究所 Single-vector hydrophone orientation estimation method based on deep transfer learning
CN111220912B (en) * 2020-01-19 2022-03-29 重庆大学 Battery capacity degradation trajectory prediction method based on transplanted neural network
CN111220912A (en) * 2020-01-19 2020-06-02 重庆大学 Battery capacity degradation trajectory prediction method based on transplanted neural network
CN111753877A (en) * 2020-05-19 2020-10-09 海克斯康制造智能技术(青岛)有限公司 Product quality detection method based on deep neural network transfer learning
CN111753877B (en) * 2020-05-19 2024-03-05 海克斯康制造智能技术(青岛)有限公司 Product quality detection method based on deep neural network transfer learning
CN111623797B (en) * 2020-06-10 2022-05-20 电子科技大学 Step number measuring method based on deep learning
CN111623797A (en) * 2020-06-10 2020-09-04 电子科技大学 Step number measuring method based on deep learning
CN112613430A (en) * 2020-12-28 2021-04-06 杭州电子科技大学 Gait recognition method based on deep transfer learning
CN112613430B (en) * 2020-12-28 2024-02-13 杭州电子科技大学 Gait recognition method based on deep transfer learning

Similar Documents

Publication Publication Date Title
CN106650709A (en) Sensor data-based deep learning step detection method
CN107977671B (en) Tongue picture classification method based on multitask convolutional neural network
CN109583342B (en) Face liveness detection method based on transfer learning
CN107330889B (en) Automatic analysis method for tongue color and tongue-coating color in traditional Chinese medicine based on convolutional neural networks
Jahedsaravani et al. An image segmentation algorithm for measurement of flotation froth bubble size distributions
Al Bashish et al. Detection and classification of leaf diseases using K-means-based segmentation and neural-networks-based classification
Raut et al. Plant disease detection in image processing using MATLAB
JP2022529557A (en) Medical image segmentation methods, medical image segmentation devices, electronic devices and computer programs
CN109815785A (en) Facial emotion recognition method based on two-stream convolutional neural networks
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN106023145A (en) Remote sensing image segmentation and identification method based on superpixel marking
CN104764744A (en) Visual inspection device and method for inspecting freshness of poultry eggs
Gadade et al. Tomato leaf disease diagnosis and severity measurement
CN102509085A (en) Pig walking posture identification system and method based on outline invariant moment features
CN103778435A (en) Fast pedestrian detection method based on videos
CN107066916A (en) Scene semantic segmentation method based on deconvolutional neural network
CN104751175B (en) SAR image multi-class annotation scene classification method based on incremental support vector machine
CN105913463B (en) Global saliency detection method based on texture-color features with location prior
CN107154044B (en) Chinese food image segmentation method
Ouyang et al. The research of the strawberry disease identification based on image processing and pattern recognition
Wang et al. A hybrid method for the segmentation of a ferrograph image using marker-controlled watershed and grey clustering
CN110276378A (en) Improved instance segmentation method based on unmanned driving technology
CN106570515A (en) Method and system for processing medical images
Niu et al. Automatic localization of optic disc based on deep learning in fundus images
CN114387505A (en) Hyperspectral and laser radar multi-modal remote sensing data classification method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20170510