CN107818343B - Counting method and device - Google Patents


Info

Publication number
CN107818343B
CN107818343B (application CN201711037201.9A)
Authority
CN
China
Prior art keywords
neural network
counting
image
marked
retraining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711037201.9A
Other languages
Chinese (zh)
Other versions
CN107818343A (en)
Inventor
于涌
陈云霁
陈天石
刘少礼
郭崎
杜子东
刘道福
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Computing Technology of CAS
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CN201711467274.1A priority Critical patent/CN108052984B/en
Priority to CN201711037201.9A priority patent/CN107818343B/en
Publication of CN107818343A publication Critical patent/CN107818343A/en
Application granted granted Critical
Publication of CN107818343B publication Critical patent/CN107818343B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure provides a counting method comprising: pre-training a deep neural network; retraining the pre-trained deep neural network with a marked image to obtain a binary-classification target detection neural network; and counting the target objects contained in an image to be counted using the binary-classification target detection neural network. The present disclosure also provides a counting device. The counting method and device have a wide application range: they can count any counting object, save labor, and provide greater universality.

Description

Counting method and device
Technical Field
The present disclosure relates to the technical field of artificial intelligence, and in particular to a counting method and a counting device.
Background
Existing counting methods are mainly manual; they consume a large amount of labor cost and waste both time and effort. Although some counting methods based on neural networks exist, there is no general-purpose counting method: the application fields are too narrow, for example counting only cells in biomedical images or only people.
Disclosure of Invention
Technical problem to be solved
In order to solve or at least partially alleviate the above technical problems, the present disclosure provides a counting method and apparatus, namely an image counting method and apparatus based on a deep neural network, with which a user completes a wide range of counting tasks by autonomously configuring the counting object. During counting, the counting problem is converted into a binary classification problem: the counting target objects designated by the user in an image form one class, and the remaining objects form the other class; the user-designated counting target objects are then identified and counted to obtain their total number.
(II) technical scheme
According to an aspect of the present disclosure, there is provided a counting method including: pre-training a deep neural network; retraining the pre-trained deep neural network with a marked image to obtain a binary-classification target detection neural network; and counting the target objects contained in an image to be counted using the binary-classification target detection neural network.
In some embodiments, the marked image contains the counting target object.
In some embodiments, retraining the pre-trained deep neural network with the marked image to obtain a binary-classification target detection neural network includes: marking all counting target objects in at least one image to acquire a marked image, i.e., all counting target objects in the image are marked, dividing all objects in the image into two classes, marked target objects and unmarked other objects; and inputting the marked image into the pre-trained deep neural network and retraining, repeating the retraining step until the output error of the neural network is less than an error threshold, to obtain the binary-classification target detection neural network.
In some embodiments, the counting method further comprises: inputting an unmarked image to be counted into the binary-classification target detection neural network to obtain coordinate position information and confidence scores of the target objects; and counting according to the coordinate position information and confidence scores: a score threshold is set, and if the score at a position is greater than the threshold, a target object is judged to exist at that position.
In some embodiments, the pre-trained deep neural network is a multi-classification neural network, and retraining it with the marked image converts the multi-classification neural network into a binary-classification target detection neural network through transfer learning.
In some embodiments, the counting method further comprises: resetting the binary-classification target detection neural network; replacing the marked image, and retraining the reset neural network with the replaced marked image; and counting the replaced counting target objects contained in the images to be counted using the retrained neural network; wherein the replaced marked image contains the replaced counting target object.
In some embodiments, the deep neural network is FAST R-CNN or YOLO.
According to another aspect of the present disclosure, there is also provided a counting apparatus including: a preprocessing module for pre-training a deep neural network; a processing module for retraining the pre-trained deep neural network with a marked image to obtain a binary-classification target detection neural network; and a counting module for counting the target objects contained in an image to be counted using the binary-classification target detection neural network.
In some embodiments, the marked image contains the counting target object.
In some embodiments, the counting device further comprises: a reset module for resetting the binary-classification target detection neural network; and a replacement module for replacing the marked image; the processing module is further used for retraining the reset neural network with the replaced marked image; the counting module is further used for counting the replaced counting target objects contained in the images to be counted using the retrained neural network; the replaced marked image contains the replaced counting target object.
(III) advantageous effects
According to the above technical scheme, the counting method and device of the present disclosure have at least one of the following beneficial effects:
(1) The counting method and device have a wide application range and universality: they can count any counting object, and after counting one type of target object, can count another type by resetting and retraining the neural network; compared with traditional methods, they save labor and provide greater universality.
(2) Whereas a traditional neural network requires a large amount of training data, the counting method and device can reconfigure the counting object with only one marked picture, through transfer learning in the retraining process.
(3) Through the retraining and transfer-learning method, the counting problem is mathematically abstracted into a binary classification problem, overcoming the defect that a common neural-network counter cannot be reconfigured and can only count one particular kind of object.
(4) Using a deep neural network greatly improves counting accuracy. Adopting FAST R-CNN further improves accuracy, and its bbox operation directly outputs position information, which facilitates the subsequent counting of the total number of a given type of target object.
Drawings
Fig. 1 is a flow chart of a counting method of the present disclosure.
Fig. 2 is a schematic illustration of a marker image of the present disclosure.
FIG. 3 is a schematic diagram of the FAST R-CNN network and RPN network structure according to the present disclosure.
Fig. 4 is a block diagram of a counting device according to the present disclosure.
Detailed Description
For the purpose of promoting a better understanding of the objects, aspects and advantages of the present disclosure, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
It should be noted that in the drawings or description, the same drawing reference numerals are used for similar or identical parts. Implementations not depicted or described in the drawings are of a form known to those of ordinary skill in the art. Additionally, while exemplifications of parameters including particular values may be provided herein, it is to be understood that the parameters need not be exactly equal to the respective values, but may be approximated to the respective values within acceptable error margins or design constraints. In addition, directional terms such as "upper", "lower", "front", "rear", "left", "right", and the like, referred to in the following embodiments, are directions only referring to the drawings. Accordingly, the directional terminology used is intended to be in the nature of words of description rather than of limitation.
The counting method and device of the present disclosure are an image counting method and device based on a deep neural network, with which a user can complete a wide range of counting tasks by configuring the counting object autonomously. For the counting itself, the present disclosure treats the counting problem as a binary classification problem: one class is the user-specified objects in an image (the target objects to be counted), and the other class is the remaining objects in the image (objects other than the target objects to be counted). After classification, the user-designated class (the target objects to be counted, hereinafter referred to as counting target objects) is statistically recognized to obtain the total number.
Specifically, as shown in fig. 1, the counting method of the present disclosure mainly includes the following steps:
pre-training a deep neural network;
retraining the pre-trained deep neural network with a marked image to obtain a binary-classification target detection neural network;
and counting the target objects contained in the image to be counted using the binary-classification target detection neural network (the retrained neural network can be reused, and the number of pictures to be counted can be any one or more).
If another type of counting target object needs to be counted: reset the binary-classification target detection neural network; replace the marked image, and retrain the reset neural network with the replaced marked image; then count the replaced counting target objects contained in the images to be counted using the retrained neural network. The replaced marked image contains the replaced counting target object (i.e., a counting target object that is new relative to the one before the reset).
More specifically, the counting method of the present disclosure pre-stores a trained target detection neural network such as FAST R-CNN or YOLO (i.e., a pre-trained neural network). When counting a same kind of object (which may be cells, human faces, airplanes, automobiles, etc.) in a plurality of pictures, the user marks at least one of the pictures, framing all objects to be recognized in that picture, and uses it as the retraining marked picture (also called the marked image). The method retrains the pre-stored target detection neural network with this marked picture into a network that distinguishes two classes: user-marked objects and other objects. The other pictures to be counted are then input into the retrained binary-classification neural network to generate the total number of user-marked objects. Counting of any object in a picture is thus automated, and the method is applicable to many different counting fields, such as cell counting and object counting in satellite images.
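The overall workflow can be sketched as a short driver loop. The following is a minimal illustrative sketch, not the patented implementation: `load_pretrained`, `retrain_binary`, and `detect` are hypothetical stand-ins (stubbed here with trivial logic) for the FAST R-CNN / YOLO machinery.

```python
def load_pretrained():
    # Hypothetical stand-in for a pre-trained multi-class detector.
    return {"target_class": None}

def retrain_binary(net, marked_image):
    # Hypothetical stand-in for transfer learning on one marked picture:
    # afterwards the net detects "marked object" vs. "everything else".
    return {"target_class": marked_image["marked_class"]}

def detect(net, image):
    # Hypothetical stand-in: return the boxes whose label matches the
    # single class the binary detector was retrained for.
    return [box for label, box in image["objects"]
            if label == net["target_class"]]

def count_objects(marked_image, images_to_count):
    base = load_pretrained()                  # step 1: pre-trained network
    net = retrain_binary(base, marked_image)  # step 2: retrain on marks
    return [len(detect(net, img)) for img in images_to_count]  # step 3

marked = {"marked_class": "bottle"}
imgs = [{"objects": [("bottle", (0, 0, 10, 30)), ("cup", (20, 0, 30, 30))]},
        {"objects": [("bottle", (5, 5, 15, 35)), ("bottle", (40, 5, 50, 35))]}]
count_objects(marked, imgs)  # → [1, 2]
```

Swapping `marked` for a different class reuses the same driver, which mirrors the reset-and-retrain flow described above.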
In a specific embodiment of the present disclosure, the counting method includes:
step 1, pre-training a deep neural network; specifically, an image detection classical network such as fastrcn (fast convolutional neural network based on region), YOLO (neural network only at one eye) or the like is selected as an initial detection network. In addition, one copy of the initial detection network (the pre-trained deep neural network) can be copied for later resetting in the step of resetting the neural network, and the other copy of the neural network can be used for retraining the transfer learning in the following steps.
Step 2, providing a plurality of images, in this embodiment, providing two images, marking one of the two images to obtain a marked image, and marking the other image as an unmarked image without marking; the marked images, as well as the unmarked images to be counted, are preprocessed. Through the information marked by the user, the method and the system can be realized for any user-defined object, and the target which can be matched by the user can be counted. The preprocessing method comprises the methods of image negation, contrast stretching and the like.
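The two preprocessing operations named above can be sketched as follows. This is a minimal illustrative sketch over 8-bit grayscale images represented as nested lists, not code from the disclosure:

```python
def invert(img):
    """Image negation: map each 8-bit pixel v to 255 - v."""
    return [[255 - v for v in row] for row in img]

def contrast_stretch(img, lo=0, hi=255):
    """Linearly stretch pixel intensities so they span [lo, hi]."""
    flat = [v for row in img for v in row]
    mn, mx = min(flat), max(flat)
    if mx == mn:                       # flat image: nothing to stretch
        return [[lo for _ in row] for row in img]
    scale = (hi - lo) / (mx - mn)
    return [[round((v - mn) * scale) + lo for v in row] for row in img]

img = [[10, 50], [120, 200]]
neg = invert(img)                 # [[245, 205], [135, 55]]
stretched = contrast_stretch(img) # minimum maps to 0, maximum to 255
```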
Step 3: retrain the pre-trained deep neural network with the marked image. Specifically, supervised learning is applied to the marked image provided by the user. During retraining, the user's mark information and the result of the original image after passing through the neural network are used to compute the LSE (least squares error) S = Σ(y_i - kF_i)^2 or the AAE (absolute error) S = Σ|y_i - kF_i|, which serves as the input of the back-propagation algorithm (BP algorithm); the error is propagated back through the network and the network weights are updated. Here the user's mark information has only two categories, so the labels y_i used when computing the error S differ from the multiple categories in general object detection algorithms. y_i is the encoded mark-information part of a marked image input by the user, and kF_i is the result obtained after the input image passes through the neural network, where k denotes the parameters of the whole neural network and F_i is the image part of the marked input. For example, fig. 2 shows a picture of bottles with three marked positions (the counting target objects are bottles): the image part (F_i) is the original image containing three bottles, and the mark-information encoding is an image (y_i) that is 0 at the three bottle positions and 1 elsewhere. This process retrains a pre-trained target detection neural network, converting the original multi-classification detection network into a binary-classification target detection neural network through transfer learning. Note that at the beginning of retraining, the original target detection neural network may be copied and stored for later use when resetting. Step 3 specifically comprises the following substeps:
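Written out, the two error measures are straightforward element-wise sums. The sketch below assumes the label map y and the network output kF have already been flattened into vectors:

```python
def lse(ys, outs):
    # LSE: S = sum_i (y_i - kF_i)^2
    return sum((y - o) ** 2 for y, o in zip(ys, outs))

def aae(ys, outs):
    # AAE: S = sum_i |y_i - kF_i|
    return sum(abs(y - o) for y, o in zip(ys, outs))

# Binary label map: 0 at marked-object positions, 1 elsewhere (as in fig. 2)
y  = [0.0, 1.0, 1.0, 0.0]
kF = [0.2, 0.9, 0.8, 0.1]   # network output for the same positions
lse(y, kF)  # 0.04 + 0.01 + 0.04 + 0.01 = 0.10
aae(y, kF)  # 0.2 + 0.1 + 0.2 + 0.1 = 0.6
```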
step 3.1, marking: the user marks a picture (image) of the object to be recognized. For one picture the user wants to identify a number of objects. There may be one or more of the objects in the photograph, each of which is framed using a box. If it is necessary to find how many bottles are in the picture (counting target objects are bottles), the bottles in the picture are boxed as shown in fig. 2. The neural network is retrained using the picture, and then the bottles in any other picture or pictures can be identified and the total number of bottles is output. If other types of objects need to be identified, the neural network is reset, namely, the neural network is deleted, and reserved backup is copied.
Step 3.2, retraining: the neural network is retrained with the user-marked picture (image). The marked picture is fed into a target detection neural network prepared in advance, such as FAST R-CNN, and the objective function is modified during retraining. The structure of FAST R-CNN is shown in fig. 3. During retraining, the classification score and the bbox regression position differ from those of FAST R-CNN for the traditional multi-classification problem: the method converts the task into a binary classification problem and compares the outputs with the coordinate information of the user-marked boxes, objects inside the boxes forming one class and everything outside the boxes forming the other. The resulting LSE or AAE information is propagated backwards through the FAST R-CNN network by the back-propagation algorithm (BP algorithm) to update the weights.
Step 3.3: repeat step 3.2 in a loop until the result error produced by the FAST R-CNN network is smaller than the threshold, then end the loop. This yields a usable counting neural network for counting the number of bottles in any photo, and the network can be reused.
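The loop of steps 3.2 and 3.3 can be illustrated with a toy one-parameter "network" trained by gradient descent on the LSE error; this is a sketch of the repeat-until-below-threshold structure only, not the real FAST R-CNN retraining:

```python
class ToyNet:
    """Toy one-parameter 'network': output = k * input. The scalar k
    stands in for the whole parameter set of the real network."""
    def __init__(self):
        self.k = 0.0
    def __call__(self, xs):
        return [self.k * x for x in xs]

def lse(ys, outs):
    # S = sum_i (y_i - kF_i)^2
    return sum((y - o) ** 2 for y, o in zip(ys, outs))

def retrain_until_converged(net, image, labels, error_threshold,
                            lr=0.01, max_iters=10_000):
    """Repeat the retraining step until the output error drops below
    the threshold (step 3.3), updating weights by gradient descent."""
    err = float("inf")
    for _ in range(max_iters):
        outs = net(image)
        err = lse(labels, outs)
        if err < error_threshold:
            break
        # dS/dk = -2 * sum_i x_i * (y_i - k * x_i)
        grad = -2 * sum(x * (y - o) for x, y, o in zip(image, labels, outs))
        net.k -= lr * grad
    return err

net = ToyNet()
final_err = retrain_until_converged(net, [1.0, 2.0], [2.0, 4.0], 1e-6)
# net.k converges near 2.0 and final_err ends below the threshold
```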
Step 4: using the retrained deep network model, input the unmarked image to be counted into the network trained in step 3. The bbox part of the algorithm flow (fig. 3) yields coordinate position information for all targets in the image (the bottles, i.e., the objects marked by the user for retraining in step 3), and the classification-score part yields a confidence score for the classification accuracy. When the score exceeds a set threshold, the algorithm judges that a target detection object exists at that position. The total number is then counted from the coordinate information in the bbox output, giving the final result, i.e., how many of the selected objects are in the picture. The advantage is that the position information output by the bbox operation can be used directly for counting, without an additional counting method.
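The thresholding-and-counting step can be sketched directly over the detector outputs; the `(box, score)` pair format below is an assumption for illustration, not the disclosure's exact output format:

```python
def count_targets(detections, score_threshold=0.5):
    """detections: list of (box, score) pairs from the bbox and
    classification-score outputs; box = (x1, y1, x2, y2).
    A target object is judged present wherever score > threshold,
    so the final count is simply the number of surviving boxes."""
    kept = [(box, s) for box, s in detections if s > score_threshold]
    return len(kept), kept

dets = [((10, 10, 40, 60), 0.92),
        ((55, 12, 85, 64), 0.87),
        ((90, 15, 120, 66), 0.31)]  # low-confidence box is rejected
total, boxes = count_targets(dets)  # total == 2
```

Raising the threshold trades recall for precision: with `score_threshold=0.9` only the first box survives.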
Step 5: after step 4 is completed, the counting of one photo (also called an image) is finished; step 4 is then executed for the other photos to be counted, for example other photos whose target objects to be counted are bottles. Step 4 identifies one picture and is repeated for the next (i.e., the retrained neural network can be reused, and the number of images to be counted can be any one or more).
Step 6: after all photos to be counted are counted, reset the device, i.e., delete the retrained neural network and copy a fresh initial neural network. When the user needs to count next time, the procedure starts again from step 2. For example, after counting a first type of counting target object (e.g., bottles), to count a second type (e.g., hats), the neural network is reset, at least one image containing the second type of counting target object is marked, and the copied initial neural network is retrained with this marked image; after retraining, the second type of counting target objects contained in the images to be counted are counted.
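The reset flow of step 6 amounts to keeping a pristine backup of the initial network and restoring it between counting tasks. The sketch below is illustrative: `retrain_binary` is a hypothetical stand-in that merely records the new target class, and the dict-based "network" is a placeholder for the real model state.

```python
import copy

def retrain_binary(net, marked_class):
    # Hypothetical stand-in for retraining on one marked image: record
    # which object class the binary detector now targets.
    new_net = dict(net)
    new_net["target_class"] = marked_class
    return new_net

class Counter:
    """Sketch of the reset flow: keep a backup of the initial network so
    switching to a new counting target only needs one new marked image."""
    def __init__(self, initial_net):
        self._initial = copy.deepcopy(initial_net)  # backup for resets
        self.net = copy.deepcopy(initial_net)
    def retrain(self, marked_class):
        self.net = retrain_binary(self.net, marked_class)
    def reset(self):
        # delete the retrained network and restore the initial copy
        self.net = copy.deepcopy(self._initial)

c = Counter({"target_class": None})
c.retrain("bottle")  # count bottles first
c.reset()            # back to the pristine initial network
c.retrain("hat")     # now count hats instead
```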
In addition, the present disclosure also provides a counting apparatus, as shown in fig. 4, including:
the preprocessing module is used for pre-training a deep neural network;
the processing module is used for retraining the pre-trained deep neural network with the marked image to obtain a binary-classification target detection neural network; and
the counting module is used for counting the target objects contained in an image to be counted using the binary-classification target detection neural network.
Wherein the marked image contains the counting target object.
Further, the counting device may further include:
a reset module for resetting the binary-classification target detection neural network; and
a replacement module for replacing the marked image;
the processing module is further used for retraining the reset neural network with the replaced marked image; the counting module is further used for counting the replaced counting target objects contained in the images to be counted using the retrained neural network; the replaced marked image contains the replaced counting target object.
In summary, the counting method and apparatus provided by the present disclosure are a general-purpose counting method and apparatus: they can count different fields or objects as specified by the user, and can be applied to various fields requiring counting, such as cell counting, people counting, and object counting.
The above-mentioned embodiments are intended to illustrate the objects, aspects and advantages of the present disclosure in further detail, and it should be understood that the above-mentioned embodiments are only illustrative of the present disclosure and are not intended to limit the present disclosure, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present disclosure should be included in the scope of the present disclosure.

Claims (7)

1. A counting method, comprising:
pre-training a deep neural network;
retraining the pre-trained deep neural network with a marked image to obtain a binary-classification target detection neural network;
counting the target objects contained in an image to be counted using the binary-classification target detection neural network; and
resetting the binary-classification target detection neural network;
replacing the marked image, and retraining the reset neural network with the replaced marked image;
wherein the marked image contains the counting target object;
the retraining of the pre-trained deep neural network with the marked image to obtain a binary-classification target detection neural network comprises:
marking all counting target objects in at least one image to acquire a marked image: all counting target objects in the image are marked, dividing all objects in the image into two classes, one class being the marked target objects and the other class being the unmarked other objects, thereby obtaining the marked image;
and inputting the marked image into the pre-trained deep neural network and retraining, repeating the retraining step until the output error of the neural network is less than an error threshold, to obtain the binary-classification target detection neural network.
2. The counting method of claim 1, further comprising:
inputting an unmarked image to be counted into the binary-classification target detection neural network to obtain coordinate position information and confidence scores of the target objects;
counting according to the coordinate position information and confidence scores: setting a score threshold, and if the score at a position is greater than the threshold, judging that a target object exists at that position.
3. The counting method according to claim 1, wherein the pre-trained deep neural network is a multi-classification neural network, and retraining it with the marked image converts the multi-classification neural network into a binary-classification target detection neural network through transfer learning.
4. The counting method of claim 1, further comprising:
counting the replaced counting target objects contained in the image to be counted using the neural network retrained after the marked image is replaced; wherein the replaced marked image contains the replaced counting target object.
5. The counting method of claim 1, wherein the deep neural network is FAST R-CNN or YOLO.
6. A counting device, comprising:
the preprocessing module is used for pre-training a deep neural network;
the processing module is used for retraining the pre-trained deep neural network with a marked image to obtain a binary-classification target detection neural network;
the counting module is used for counting the target objects contained in an image to be counted using the binary-classification target detection neural network;
a reset module for resetting the binary-classification target detection neural network; and
a replacement module for replacing the marked image;
wherein the marked image in the processing module contains the counting target object;
the processing module retraining the pre-trained deep neural network with the marked image to obtain the binary-classification target detection neural network comprises:
marking all counting target objects in at least one image to acquire a marked image: all counting target objects in the image are marked, dividing all objects in the image into two classes, one class being the marked target objects and the other class being the unmarked other objects, thereby obtaining the marked image;
and inputting the marked image into the pre-trained deep neural network and retraining, repeating the retraining step until the output error of the neural network is less than an error threshold, to obtain the binary-classification target detection neural network.
7. The counting device according to claim 6,
the processing module is further used for retraining the reset neural network with the replaced marked image; the counting module is further used for counting the replaced counting target objects contained in the image to be counted using the retrained neural network; the replaced marked image contains the replaced counting target object.
CN201711037201.9A 2017-10-30 2017-10-30 Counting method and device Active CN107818343B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201711467274.1A CN108052984B (en) 2017-10-30 2017-10-30 Method of counting and device
CN201711037201.9A CN107818343B (en) 2017-10-30 2017-10-30 Counting method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711037201.9A CN107818343B (en) 2017-10-30 2017-10-30 Counting method and device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201711467274.1A Division CN108052984B (en) 2017-10-30 2017-10-30 Method of counting and device

Publications (2)

Publication Number Publication Date
CN107818343A CN107818343A (en) 2018-03-20
CN107818343B true CN107818343B (en) 2021-01-08

Family

ID=61603522

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201711467274.1A Active CN108052984B (en) 2017-10-30 2017-10-30 Method of counting and device
CN201711037201.9A Active CN107818343B (en) 2017-10-30 2017-10-30 Counting method and device

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201711467274.1A Active CN108052984B (en) 2017-10-30 2017-10-30 Method of counting and device

Country Status (1)

Country Link
CN (2) CN108052984B (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108898174A (en) * 2018-06-25 2018-11-27 OPPO (Chongqing) Intelligent Technology Co., Ltd. A kind of contextual data acquisition method, contextual data acquisition device and electronic equipment
CN109117741A (en) * 2018-07-20 2019-01-01 苏州中德宏泰电子科技股份有限公司 Offline object identifying method and device to be detected
CN109166100A (en) * 2018-07-24 2019-01-08 中南大学 Multi-task learning method for cell count based on convolutional neural networks
CN109472291A (en) * 2018-10-11 2019-03-15 浙江工业大学 A kind of demographics classification method based on DNN algorithm
CN110222562A (en) * 2019-04-26 2019-09-10 昆明理工大学 A kind of method for detecting human face based on Fast R-CNN
CN110472552A (en) * 2019-08-09 2019-11-19 杭州义顺科技有限公司 The video material object method of counting using camera based on image object detection technique
CN110505498B (en) * 2019-09-03 2021-04-02 腾讯科技(深圳)有限公司 Video processing method, video playing method, video processing device, video playing device and computer readable medium
CN111242002B (en) * 2020-01-10 2022-12-23 上海大学 Shared bicycle standardized parking judgment method based on computer vision
US11393182B2 (en) 2020-05-29 2022-07-19 X Development Llc Data band selection using machine learning
US11606507B1 (en) 2020-08-28 2023-03-14 X Development Llc Automated lens adjustment for hyperspectral imaging
US11651602B1 (en) 2020-09-30 2023-05-16 X Development Llc Machine learning classification based on separate processing of multiple views
CN112767349B (en) * 2021-01-18 2024-05-03 Guilin URIT Medical Electronic Co., Ltd. Reticulocyte identification method and system
CN113313692B (en) * 2021-06-03 2023-04-25 Guangxi University Automatic identification and counting method for young banana plants based on aerial visible-light images
CN113409266A (en) * 2021-06-17 2021-09-17 Shaanxi University of Science and Technology Method and system for detecting and counting carborundum particles
US11995842B2 (en) 2021-07-22 2024-05-28 X Development Llc Segmentation to improve chemical analysis

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106384345A (en) * 2016-08-31 2017-02-08 Shanghai Jiao Tong University RCNN-based image detection and flow calculation method
CN106407946A (en) * 2016-09-29 2017-02-15 Beijing SenseTime Technology Development Co., Ltd. Cross-line counting method, deep neural network training method, devices and electronic apparatus
CN106960195A (en) * 2017-03-27 2017-07-18 Shenzhen Fengjutaike Electronics Co., Ltd. People counting method and device based on deep learning
CN106991439A (en) * 2017-03-28 2017-07-28 Nanjing Tianshu Information Technology Co., Ltd. Image recognition method based on deep learning and transfer learning

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379961B2 (en) * 2008-07-03 2013-02-19 Nec Laboratories America, Inc. Mitotic figure detector and counter system and method for detecting and counting mitotic figures
CN101794396B (en) * 2010-03-25 2012-12-26 Xidian University System and method for recognizing remote sensing image targets based on transfer network learning
CN105844234B (en) * 2016-03-21 2020-07-31 SenseTime Group Limited Method and equipment for counting people based on head and shoulder detection
CN107169556A (en) * 2017-05-15 2017-09-15 University of Electronic Science and Technology of China Automatic stem cell counting method based on deep learning

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106384345A (en) * 2016-08-31 2017-02-08 Shanghai Jiao Tong University RCNN-based image detection and flow calculation method
CN106407946A (en) * 2016-09-29 2017-02-15 Beijing SenseTime Technology Development Co., Ltd. Cross-line counting method, deep neural network training method, devices and electronic apparatus
CN106960195A (en) * 2017-03-27 2017-07-18 Shenzhen Fengjutaike Electronics Co., Ltd. People counting method and device based on deep learning
CN106991439A (en) * 2017-03-28 2017-07-28 Nanjing Tianshu Information Technology Co., Ltd. Image recognition method based on deep learning and transfer learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Krzysztof Grajek; "Counting Objects with Faster RCNN"; softwaremill.com/counting-objects-with-faster-rcnn/; 2017-06-02; full text *

Also Published As

Publication number Publication date
CN107818343A (en) 2018-03-20
CN108052984B (en) 2019-11-08
CN108052984A (en) 2018-05-18

Similar Documents

Publication Publication Date Title
CN107818343B (en) Counting method and device
CN109815826B (en) Method and device for generating face attribute model
CN111898547B (en) Training method, device, equipment and storage medium of face recognition model
CN111079639B (en) Method, device, equipment and storage medium for constructing garbage image classification model
Kae et al. Augmenting CRFs with Boltzmann machine shape priors for image labeling
EP3660733A1 (en) Method and system for information extraction from document images using conversational interface and database querying
EP3156944A1 (en) Scene labeling of rgb-d data with interactive option
CN109376796A (en) Image classification method based on active semi-supervised learning
WO2022068195A1 (en) Cross-modal data processing method and device, storage medium and electronic device
WO2020228515A1 (en) Fake face recognition method, apparatus and computer-readable storage medium
CN105809123A (en) Face detection method and device
CN110378366A (en) Cross-domain image classification method based on coupled knowledge transfer
CN107316059B (en) Learner gesture recognition method
CN108764242A (en) Offline Chinese character writing-style recognition method based on deep convolutional neural networks
CN112037222B (en) Automatic updating method and system of neural network model
CN113128287A (en) Method and system for training cross-domain facial expression recognition model and facial expression recognition
US11093800B2 (en) Method and device for identifying object and computer readable storage medium
TWI780567B Object re-identification method, storage medium and computer equipment
CN109360179A (en) Image fusion method, device and readable storage medium
CN114299567B (en) Model training method, living body detection method, electronic device, and storage medium
CN110163206B (en) License plate recognition method, system, storage medium and device
CN116612339B (en) Construction device and grading device of nuclear cataract image grading model
CN112906829B (en) Method and device for constructing digit recognition model based on MNIST data set
CN111950482B (en) Triplet acquisition method and device based on video learning and text learning
WO2022066133A1 (en) Meta tag generation method for learning from dirty tags

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant