CN112001884A - Training method, counting method, equipment and storage medium of quantity statistical model - Google Patents


Info

Publication number
CN112001884A
Authority
CN
China
Prior art keywords
sample
statistical model
training
target
image
Prior art date
Legal status
Pending
Application number
CN202010673863.0A
Other languages
Chinese (zh)
Inventor
赵蕾
盛玉庭
孙海涛
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202010673863.0A
Publication of CN112001884A
Legal status: Pending (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/187 Segmentation; Edge detection involving region growing; involving region merging; involving connected component labelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30242 Counting objects in image

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a training method, a counting method, equipment and a storage medium for a quantity statistical model. The training method of the quantity statistical model comprises the following steps: acquiring a sample image and labeling the sample objects in the sample image one by one; extracting sample image features of the sample image; extracting, from the sample image features, a sample region containing the sample objects; finding the number of connected domains in the sample region; and training the quantity statistical model in the direction in which each sample object corresponds to one connected domain. A quantity statistical model trained by this method can be used for counting: the trained model determines the number of sample objects from their positions, so counting is performed automatically. The method also remains accurate when the sample objects are moving or densely packed, and is therefore applicable to a wider range of scenes.

Description

Training method, counting method, equipment and storage medium of quantity statistical model
Technical Field
The application belongs to the technical field of intelligent farming, and in particular relates to a training method, a counting method, equipment and a storage medium for a quantity statistical model.
Background
Animal husbandry is one of the important sources of rural economic income in China. With the continued development of science, technology and the economy, animal husbandry in China is gradually becoming large-scale and intelligent. As farms keep expanding, the number of animals being raised grows, and counting them becomes a difficult problem. Most existing farms rely on manual counting; because livestock and poultry move frequently and unpredictably, manual counting wastes labor, is costly and rarely yields an accurate result, so an automatic and accurate counting method is urgently needed.
Disclosure of Invention
The application provides a training method, a counting method, equipment and a storage medium for a quantity statistical model, so as to solve the problem that counting livestock and poultry on a farm is difficult.
In order to solve the above technical problem, the application adopts a technical solution: a training method of a quantity statistical model, comprising: acquiring a sample image, and labeling the sample objects in the sample image one by one; extracting sample image features of the sample image; extracting, from the sample image features, a sample region containing the sample objects; finding the number of connected domains in the sample region; and training the quantity statistical model in the direction in which each sample object corresponds to one connected domain.
According to an embodiment of the present application, labeling the sample objects in the sample image one by one comprises: defining at least one counting area in the sample image, and labeling the sample objects in the counting area one by one.
According to an embodiment of the present application, extracting the sample region containing the sample objects from the sample image features comprises: calculating the probability that each pixel in the sample image features belongs to a sample object; and assigning the pixels whose probability is greater than a preset threshold to the sample region.
According to an embodiment of the present application, training the quantity statistical model in the direction in which each sample object corresponds to one connected domain comprises: training the quantity statistical model with a loss function so that each sample object corresponds to one connected domain, wherein the loss function comprises an image-level loss function, a pixel-level loss function, a segmentation loss function and a false-positive loss function.
According to an embodiment of the present application, the sample object comprises at least one of a human, poultry and livestock.
In order to solve the above technical problem, the application adopts another technical solution: a counting method based on a quantity statistical model, comprising: acquiring a target image; extracting target image features of the target image; extracting, from the target image features, a target region containing target objects; finding the number of connected domains in the target region; and taking the number of connected domains as the number of target objects.
According to an embodiment of the application, the target object comprises a person, poultry and/or livestock.
According to an embodiment of the present application, the quantity statistical model is obtained by training with any of the above training methods.
In order to solve the above technical problem, the present application adopts another technical solution: an electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement any of the training methods or any of the counting methods described above.
In order to solve the above technical problem, the present application adopts another technical solution: a computer-readable storage medium on which program data is stored, the program data, when executed by a processor, implementing any of the training methods or any of the counting methods described above.
The beneficial effects of the application are as follows: a quantity statistical model trained by the training method determines the number of sample objects from their positions, so counting is performed automatically. The same scheme can be reused across different application scenes that involve the same type of sample object, which saves construction cost. No RFID ear tag is needed, so the livestock are not injured and cost is reduced. In addition, the quantity statistical model trained by the method learns more accurate and robust features, so the count is more accurate; the method also remains accurate when the sample objects are moving or densely packed, and is therefore applicable to a wider range of scenes.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description show only some embodiments of the present application, and that other drawings can be obtained from them by those skilled in the art without inventive effort:
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a method for training a quantity statistical model according to the present application;
FIG. 2 is a schematic flow chart diagram illustrating an embodiment of a counting method based on a quantity statistical model according to the present application;
FIG. 3 is a block diagram of an embodiment of a device for training a quantity statistical model according to the present application;
FIG. 4 is a block diagram of an embodiment of a counting apparatus based on a quantity statistical model according to the present application;
FIG. 5 is a block diagram of an embodiment of an electronic device of the present application;
FIG. 6 is a block diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a method for training a quantity statistical model according to the present application.
An embodiment of the present application provides a method for training a quantity statistical model, comprising the following steps:
S11: acquiring a sample image, and labeling the sample objects in the sample image one by one.
The sample image is acquired by an acquisition device, and at least one counting area in which counting is to be performed is delimited in the sample image. There may be multiple counting areas in the sample image, and each counting area may be an arbitrary polygon.
The sample objects in each counting area are labeled one by one, typically by marking the bounding rectangle of each sample object; a sufficient number of labeled samples is required before the quantity statistical model can be trained. The sample object may be a person, in which case the counting area may be a venue or site; the sample object may also be poultry or livestock, in which case the counting area may be a poultry house or a livestock barn.
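By way of a non-limiting illustration, the following Python sketch (not part of the original disclosure; the function names, OpenCV usage and box-center criterion are assumptions) shows one way a polygonal counting area could be applied to bounding-rectangle annotations, keeping only the boxes whose centers fall inside the area:

import numpy as np
import cv2

def filter_boxes_by_counting_area(boxes, polygon):
    """boxes: list of (x1, y1, x2, y2); polygon: list of (x, y) vertices of the counting area."""
    contour = np.array(polygon, dtype=np.float32).reshape(-1, 1, 2)
    kept = []
    for x1, y1, x2, y2 in boxes:
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0      # center of the bounding rectangle
        # pointPolygonTest returns > 0 inside, 0 on the edge, < 0 outside
        if cv2.pointPolygonTest(contour, (float(cx), float(cy)), False) >= 0:
            kept.append((x1, y1, x2, y2))
    return kept

# Example: an arbitrary quadrilateral counting area inside a pen
area = [(100, 100), (800, 120), (780, 600), (90, 580)]
annotations = [(150, 200, 220, 280), (900, 50, 960, 120)]   # the second box lies outside
print(filter_boxes_by_counting_area(annotations, area))      # keeps only the first box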
S12: and extracting sample image characteristics of the sample image.
In one embodiment, the quantity statistical model may be implemented based on an LC-FCN network, in which ResNet-50 is used as the backbone network to extract sample image features. The stacked convolutional layers abstract the features step by step: shallow features retain more location information, while deep features are more abstract.
As the backbone network deepens, the extracted features become more abstract but spatial position information is lost. To determine whether each pixel is a positive sample (i.e., belongs to a sample object), the spatial information must be restored: the output of the backbone network is up-sampled layer by layer until sample image features with the same size as the input sample image are obtained.
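As a non-limiting illustration, the following PyTorch sketch (an assumption, not the exact network of this application) shows a ResNet-50 backbone followed by upsampling back to the input resolution, producing a two-class (background/object) score for every pixel; a full LC-FCN-style model would additionally fuse shallow skip features during upsampling:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class CountingFCN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)   # torchvision >= 0.13 API
        # keep everything up to the last residual stage (stride 32, 2048 channels)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Conv2d(2048, num_classes, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = self.encoder(x)                  # (B, 2048, h/32, w/32)
        scores = self.classifier(feats)          # (B, 2, h/32, w/32)
        # restore the spatial resolution of the input image (skip connections omitted for brevity)
        return F.interpolate(scores, size=(h, w), mode="bilinear", align_corners=False)

model = CountingFCN()
logits = model(torch.randn(1, 3, 224, 224))      # (1, 2, 224, 224)
prob_object = logits.softmax(dim=1)[:, 1]        # per-pixel probability of belonging to an object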
S13: a sample region containing the sample object in the sample image feature is extracted.
The sample image features comprise a foreground and a background, wherein the foreground is a sample object, and the background is a background of a non-sample object. Extracting a sample region containing a sample object in the sample image feature comprises: and calculating the probability that each pixel point in the sample image characteristics belongs to the target object, namely belongs to the foreground, and dividing the pixel points with the probability greater than a preset threshold value into sample areas. The predetermined threshold value may be preset or adjusted according to the extraction effect of the network model.
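A minimal sketch of this thresholding step, under the assumption of a single foreground class and an illustrative threshold of 0.5 (the actual threshold is a tunable parameter), might look as follows:

import torch

def probability_to_mask(prob_object: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """prob_object: (H, W) per-pixel foreground probabilities in [0, 1]; returns a boolean mask."""
    return prob_object > threshold

# reusing prob_object from the previous sketch (shape (1, H, W); take the first image)
mask = probability_to_mask(prob_object[0])   # True where a pixel is assigned to the sample region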
S14: and searching the number of connected domains in the sample area.
The number of connected components in the sample area is found, one for each sample object, so that the number of connected components represents the number of sample objects.
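For example, the connected domains of the binary foreground mask can be counted with an off-the-shelf connected-component labeling routine; the sketch below uses SciPy and is illustrative rather than part of the original disclosure:

import numpy as np
from scipy import ndimage

def count_connected_domains(mask: np.ndarray) -> int:
    """mask: 2-D boolean array, True on foreground pixels."""
    _, num_domains = ndimage.label(mask)    # connectivity can be changed via the `structure` argument
    return num_domains

demo = np.zeros((8, 8), dtype=bool)
demo[1:3, 1:3] = True                       # first connected domain
demo[5:7, 5:7] = True                       # second connected domain
print(count_connected_domains(demo))        # -> 2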
S15: and training a quantity statistical model according to the direction of one-to-one correspondence of each sample object and the connected domain.
The quantity statistical model is trained in the direction in which each sample object corresponds to a connected domain one-to-one, so that the quantity statistical model can be used for accurate counting. Specifically, the quantitative statistical model may be trained using a loss function, such that each sample object corresponds to a connected domain one-to-one, where the loss function includes an image-level loss function, a pixel-level loss function, a segmentation loss function, and a false positive loss function.
Generally, a neural network model used for detection or semantic segmentation compares information such as categories, shapes, sizes and positions of attention targets, so that a loss function can be optimized on the problems, but the quantitative statistical model in the application does not care about characteristics such as sizes and shapes, so that the categories of all pixel points of each sample image do not need to be calculated one by one according to a semantic segmentation mode, and only connected positive sample small regions, namely connected domains, are used for representing a single sample. Therefore, the loss function is also optimized correspondingly, and comprises four parts, namely an image level, a pixel point level, segmentation and a false positive loss function. Wherein the image-level loss function is mainly used to increase the probability of positive samples (sample objects) occurring and to decrease the probability of negative samples (non-sample objects); the pixel point level loss function mainly acts on the marked positive sample points, and the function and the image level loss function together enable each pixel point in the sample image to obtain corresponding label information; the segmentation loss function is used for controlling each sample object to output a separate connected domain, and the false positive loss function is used for removing the connected domain without the sample object. In this way, the four loss functions act together to optimize the quantitative statistical model towards the direction that each sample object has one and only one connected domain, so that the sample objects correspond to the connected domains one to one.
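The sketch below is a simplified, hedged illustration of such a four-part loss in the spirit of LC-FCN, not the exact formulation of this application: it assumes batch size 1 and point-style annotations, and the segmentation (split) term, which LC-FCN derives from watershed boundaries between points that share a blob, is only indicated in a comment:

import torch
import torch.nn.functional as F
from scipy import ndimage

def counting_loss(logits, points):
    """logits: (1, 2, H, W) raw scores; points: list of (row, col) annotated object locations."""
    log_prob = F.log_softmax(logits, dim=1)            # (1, 2, H, W)
    prob_obj = log_prob[0, 1].exp()                    # (H, W) foreground probability

    # 1) image-level loss: the most confident pixel should be foreground if the image
    #    contains at least one annotated object, and background otherwise
    if len(points) > 0:
        loss_image = -prob_obj.max().clamp_min(1e-8).log()
    else:
        loss_image = -(1.0 - prob_obj.max()).clamp_min(1e-8).log()

    # 2) pixel-level (point) loss: every annotated point must be classified as foreground
    loss_point = sum(-log_prob[0, 1, r, c] for r, c in points)

    # 3) false-positive loss: connected domains that contain no annotated point are
    #    pushed toward the background class
    blobs, n_blobs = ndimage.label((prob_obj > 0.5).detach().cpu().numpy())
    loss_fp = logits.new_zeros(())
    for b in range(1, n_blobs + 1):
        if not any(blobs[r, c] == b for r, c in points):
            blob_pixels = torch.from_numpy(blobs == b).to(logits.device)
            loss_fp = loss_fp - log_prob[0, 0][blob_pixels].mean()

    # 4) segmentation (split) loss: for blobs containing more than one annotated point,
    #    force the watershed boundaries between the points toward background (omitted here)
    return loss_image + loss_point + loss_fp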
The quantity statistical model trained by the above training method can be used for counting: the trained model determines the number of sample objects from their positions, so counting is performed automatically. No hardware other than the image acquisition device is needed, and the same scheme can be reused across different application scenes that involve the same type of sample object, which saves construction cost. No RFID ear tag is needed, so the livestock are not injured and cost is reduced. In addition, compared with Gaussian-blur-based features, the quantity statistical model used in the application obtains more accurate and robust features, so the count is more accurate; the method also remains accurate when the sample objects are moving or densely packed, and compared with single-feature algorithms it is applicable to a wider range of scenes.
Referring to fig. 2, fig. 2 is a flowchart illustrating an embodiment of a counting method based on a quantity statistical model according to the present application.
Another embodiment of the present application provides a counting method based on a quantity statistical model, including the following steps:
s21: and acquiring a target image.
Acquiring a target image, acquiring the target image by acquisition equipment, and delimiting at least one counting area needing to carry out quantity statistics in the target image. There may be multiple counting regions in the target image, and each counting region may be any polygon.
The target object may be a person, for example, counting the number of people in a certain venue or site. The sample object may also be poultry or livestock, for example the number of cultures in a certain poultry house or barn of a statistical farm.
S22: and extracting the target image characteristics of the target image.
In one embodiment, the quantity statistical model may be implemented based on an LC-FCN network, in which ResNet-50 is used as the backbone network to extract target image features. The stacked convolutional layers abstract the features step by step: shallow features retain more location information, while deep features are more abstract.
As the backbone network deepens, the extracted features become more abstract but spatial position information is lost. To determine whether each pixel is a positive sample (i.e., belongs to a target object), the spatial information must be restored: the output of the backbone network is up-sampled layer by layer until target image features with the same size as the input target image are obtained.
It should be noted that the quantity statistical model may be trained in advance for target object recognition by the training method of any of the above embodiments; in other embodiments, another training method may also be used.
S23: and extracting a target area containing the target object in the target image characteristic.
The target image characteristics comprise a foreground and a background, wherein the foreground is a target object, and the background is a background of a non-target object. Extracting the sample region containing the target object in the target image feature comprises: and calculating the probability that each pixel point in the target image characteristics belongs to a target object, namely belongs to the foreground, and dividing the pixel points with the probability greater than a preset threshold value into target areas. The predetermined threshold value may be preset or adjusted according to the extraction effect of the quantity statistical model.
S24: and searching the number of connected domains in the target area.
And (4) outputting an independent connected domain by each target object through the trained quantity statistical model, and searching the number of the connected domains in the counting area.
S25: and taking the number of the connected domains as the number of the target objects.
The target objects correspond to the connected domains one by one, so that the number of the connected domains can be used as the number of the target objects.
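Putting the preceding sketches together, end-to-end inference could look like the following illustration (again an assumption built on the hypothetical CountingFCN, preprocessing and threshold introduced earlier, not the exact pipeline of this application):

import torch
import numpy as np
from scipy import ndimage

@torch.no_grad()
def count_objects(model, image, threshold=0.5):
    """image: (3, H, W) float tensor; returns the estimated number of target objects."""
    model.eval()
    logits = model(image.unsqueeze(0))                  # (1, 2, H, W)
    prob_object = logits.softmax(dim=1)[0, 1]           # (H, W) foreground probability
    mask = (prob_object > threshold).cpu().numpy()      # target region
    _, num_domains = ndimage.label(mask)                # one connected domain per target object
    return num_domains

# Usage with the CountingFCN sketch from the training section (weights file is hypothetical):
# model = CountingFCN()
# model.load_state_dict(torch.load("counting_model.pth"))
# print(count_objects(model, some_preprocessed_image))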
The counting method determines the number of target objects from their positions and counts them automatically. No hardware other than the image acquisition device is needed, and the same scheme can be reused across different crowd-gathering venues, or across the poultry houses and livestock barns of different farms, which saves construction cost. No RFID ear tag is needed, so the livestock are not injured and cost is reduced. The counting method also remains accurate when the target objects are moving or densely packed.
Referring to fig. 3, fig. 3 is a block diagram illustrating an embodiment of a device for training a statistical quantity model according to the present application.
The present application further provides a training device 30 for a quantity statistical model, which comprises an image acquisition module 31, a feature extraction module 32, a counting module 33 and a training module 34 connected in sequence. The image acquisition module 31 acquires a sample image and labels the sample objects in the sample image one by one. The feature extraction module 32 extracts sample image features of the sample image and extracts, from the sample image features, a sample region containing the sample objects. The counting module 33 finds the number of connected domains in the sample region. The training module 34 trains the quantity statistical model in the direction in which each sample object corresponds to one connected domain.
The quantity statistical model trained by the training device 30 can be used for counting: the trained model determines the number of sample objects from their positions, so counting is performed automatically. No hardware other than the image acquisition device is needed, and the same scheme can be reused across different crowd-gathering venues, or across the poultry houses and livestock barns of different farms, which saves construction cost. No RFID ear tag is needed, so the livestock are not injured and cost is reduced. In addition, compared with Gaussian-blur-based features, the quantity statistical model used in the application obtains more accurate and robust features, so the count is more accurate; the method also remains accurate when the sample objects are moving or densely packed, and compared with single-feature algorithms it is applicable to a wider range of scenes.
Referring to fig. 4, fig. 4 is a block diagram illustrating an embodiment of a counting apparatus based on a quantity statistical model according to the present application.
The application further provides a counting device 40 based on a quantity statistical model, which comprises an image acquisition module 41, a feature extraction module 42 and a counting module 43 connected in sequence. The image acquisition module 41 acquires a target image. The feature extraction module 42 extracts target image features of the target image and extracts, from the target image features, a target region containing the target objects. The counting module 43 finds the number of connected domains in the target region and takes the number of connected domains as the number of target objects.
The counting device 40 of the present application determines the number of target objects from their positions and counts them automatically. The same scheme can be reused across different crowd-gathering venues, or across the poultry houses and livestock barns of different farms, which saves construction cost. No RFID ear tag is needed, so the livestock are not injured and cost is reduced. The counting device also remains accurate when the target objects are moving or densely packed.
Referring to fig. 5, fig. 5 is a schematic diagram of a frame of an embodiment of an electronic device according to the present application.
Yet another embodiment of the present application provides an electronic device 50, which comprises a memory 51 and a processor 52 coupled to each other, the processor 52 being configured to execute program instructions stored in the memory 51 to implement the training method of the quantity statistical model of any of the above embodiments, or the counting method based on the quantity statistical model of any of the above embodiments. In one specific implementation scenario, the electronic device 50 may include, but is not limited to, a microcomputer or a server; the electronic device 50 may also be a mobile device such as a notebook computer or a tablet computer, which is not limited herein.
Specifically, the processor 52 is configured to control itself and the memory 51 to implement the training method of the quantitative statistical model of any of the above embodiments, or the counting method based on the quantitative statistical model of any of the above embodiments. Processor 52 may also be referred to as a CPU (Central Processing Unit). Processor 52 may be an integrated circuit chip having signal processing capabilities. The Processor 52 may also be a general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. In addition, the processor 52 may be commonly implemented by an integrated circuit chip.
Referring to fig. 6, fig. 6 is a block diagram illustrating an embodiment of a computer-readable storage medium according to the present application.
Yet another embodiment of the present application provides a computer-readable storage medium 60, on which program data 61 are stored, and when executed by a processor, the program data 61 implement the method for training a quantity statistical model according to any one of the above embodiments, or the method for counting based on a quantity statistical model according to any one of the above embodiments.
In the several embodiments provided in the present application, it should be understood that the disclosed method and apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, a division of a module or a unit is merely one type of logical division, and an actual implementation may have another division, for example, a unit or a component may be combined or integrated with another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some interfaces, and may be in an electrical, mechanical or other form.
Units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium 60. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or in whole or in part, may be embodied in the form of a software product, which is stored in a storage medium 60 and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium 60 includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, and other media capable of storing program code.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are intended to be included within the scope of the present application.

Claims (10)

1. A method for training a quantity statistical model, comprising:
acquiring sample images, and labeling sample objects in the sample images one by one;
extracting sample image features of the sample image;
extracting a sample region in the sample image feature that includes the sample object;
searching the number of connected domains in the sample area;
and training the quantity statistical model according to the direction of one-to-one correspondence of each sample object and the connected domain.
2. The method of claim 1, wherein said labeling the sample objects in the sample image one by one comprises:
at least one counting area is defined in the sample image, and the sample objects in the counting area are labeled one by one.
3. The method of claim 1, wherein said extracting a sample region containing the sample object in the sample image feature comprises:
calculating the probability that each pixel point in the sample image features belongs to the sample object;
and dividing the pixel points with the probability greater than a preset threshold value into the sample areas.
4. The method of claim 1, wherein said training said quantity statistical model in a direction in which each of said sample objects has a one-to-one correspondence with said connected domains comprises:
and training the quantity statistical model by using a loss function so that each sample object corresponds to the connected domain one by one, wherein the loss function comprises an image-level loss function, a pixel-level loss function, a segmentation loss function and a false positive loss function.
5. The method of claim 1, wherein the sample object comprises at least one of a human, poultry, and livestock.
6. A counting method based on a quantity statistical model is characterized by comprising the following steps:
acquiring a target image;
extracting target image features of the target image;
extracting a target area containing a target object in the target image characteristics;
searching the number of connected domains in the target area;
and taking the number of the connected domains as the number of the target objects.
7. The method of claim 6, wherein the target objects comprise humans, poultry and/or livestock.
8. The method of claim 6, wherein the quantity statistical model is trained by the training method of any one of claims 1-5.
9. An electronic device comprising a memory and a processor coupled to each other, the processor being configured to execute program instructions stored in the memory to implement the training method of any one of claims 1 to 5 or the counting method of any one of claims 6 to 8.
10. A computer-readable storage medium, on which program data are stored, which program data, when being executed by a processor, carry out the training method of any one of claims 1 to 5 or the counting method of any one of claims 6 to 8.
CN202010673863.0A 2020-07-14 2020-07-14 Training method, counting method, equipment and storage medium of quantity statistical model Pending CN112001884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010673863.0A CN112001884A (en) 2020-07-14 2020-07-14 Training method, counting method, equipment and storage medium of quantity statistical model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010673863.0A CN112001884A (en) 2020-07-14 2020-07-14 Training method, counting method, equipment and storage medium of quantity statistical model

Publications (1)

Publication Number Publication Date
CN112001884A true CN112001884A (en) 2020-11-27

Family

ID=73467612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010673863.0A Pending CN112001884A (en) 2020-07-14 2020-07-14 Training method, counting method, equipment and storage medium of quantity statistical model

Country Status (1)

Country Link
CN (1) CN112001884A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113643136A (en) * 2021-09-01 2021-11-12 京东科技信息技术有限公司 Information processing method, system and device

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101144716A (en) * 2007-10-15 2008-03-19 清华大学 Multiple angle movement target detection, positioning and aligning method
CN105427275A (en) * 2015-10-29 2016-03-23 中国农业大学 Filed environment wheat head counting method and device
CN106846837A (en) * 2017-03-27 2017-06-13 广州大学 A kind of traffic light intelligent control system, traffic lights intelligent control method and device
US20180349671A1 (en) * 2015-09-16 2018-12-06 Merck Patent Gmbh A method for early detection and identification of microbial-colonies, apparatus for performing the method and computer program
CN110378873A (en) * 2019-06-11 2019-10-25 上海交通大学 Rice Panicle strain grain based on deep learning lossless method of counting in situ
CN110569747A (en) * 2019-08-20 2019-12-13 南京农业大学 method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
CN110688924A (en) * 2019-09-19 2020-01-14 天津天地伟业机器人技术有限公司 RFCN-based vertical monocular passenger flow volume statistical method
CN110826592A (en) * 2019-09-25 2020-02-21 浙江大学宁波理工学院 Prawn culture residual bait counting method based on full convolution neural network
CN110969641A (en) * 2018-09-30 2020-04-07 北京京东尚科信息技术有限公司 Image processing method and device
CN111242234A (en) * 2020-01-17 2020-06-05 深圳力维智联技术有限公司 Image target detection method and device, terminal equipment and storage medium
CN111292347A (en) * 2020-01-21 2020-06-16 海南大学 Microscopic image anthrax spore density calculation method based on image processing technology
CN111311603A (en) * 2018-12-12 2020-06-19 北京京东尚科信息技术有限公司 Method and apparatus for outputting target object number information

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101144716A (en) * 2007-10-15 2008-03-19 清华大学 Multiple angle movement target detection, positioning and aligning method
US20180349671A1 (en) * 2015-09-16 2018-12-06 Merck Patent Gmbh A method for early detection and identification of microbial-colonies, apparatus for performing the method and computer program
CN105427275A (en) * 2015-10-29 2016-03-23 中国农业大学 Filed environment wheat head counting method and device
CN106846837A (en) * 2017-03-27 2017-06-13 广州大学 A kind of traffic light intelligent control system, traffic lights intelligent control method and device
CN110969641A (en) * 2018-09-30 2020-04-07 北京京东尚科信息技术有限公司 Image processing method and device
CN111311603A (en) * 2018-12-12 2020-06-19 北京京东尚科信息技术有限公司 Method and apparatus for outputting target object number information
CN110378873A (en) * 2019-06-11 2019-10-25 上海交通大学 Rice Panicle strain grain based on deep learning lossless method of counting in situ
CN110569747A (en) * 2019-08-20 2019-12-13 南京农业大学 method for rapidly counting rice ears of paddy field rice by using image pyramid and fast-RCNN
CN110688924A (en) * 2019-09-19 2020-01-14 天津天地伟业机器人技术有限公司 RFCN-based vertical monocular passenger flow volume statistical method
CN110826592A (en) * 2019-09-25 2020-02-21 浙江大学宁波理工学院 Prawn culture residual bait counting method based on full convolution neural network
CN111242234A (en) * 2020-01-17 2020-06-05 深圳力维智联技术有限公司 Image target detection method and device, terminal equipment and storage medium
CN111292347A (en) * 2020-01-21 2020-06-16 海南大学 Microscopic image anthrax spore density calculation method based on image processing technology



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination