CN108829762B - Vision-based small target identification method and device - Google Patents


Info

Publication number
CN108829762B
Authority
CN
China
Prior art keywords
image
database
small target
data
pixel points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810523284.0A
Other languages
Chinese (zh)
Other versions
CN108829762A (en)
Inventor
田志博
张强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spider Iot Technology Beijing Co ltd
Original Assignee
Spider Iot Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spider Iot Technology Beijing Co ltd filed Critical Spider Iot Technology Beijing Co ltd
Priority to CN201810523284.0A priority Critical patent/CN108829762B/en
Publication of CN108829762A publication Critical patent/CN108829762A/en
Application granted granted Critical
Publication of CN108829762B publication Critical patent/CN108829762B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a vision-based small target identification method and device. The method comprises the following steps: acquiring an image of a given area; identifying a specific small target in the image; storing the image in a first database when the small target is present in the image; performing data cleaning on the images stored in the first database according to their acquisition times and storing the result in a second database; and displaying the images in the first database and/or the second database together with information related to the images. The method uses vision technology to recognize and detect mice in a designated area; the acquired data are stored in the first database, and a subset of the pictures is stored in the second database according to image acquisition time, so that a user can quickly view the pictures stored in the second database.

Description

Vision-based small target identification method and device
Technical Field
The present application relates to the field of image recognition and processing technologies, and in particular, to a method and an apparatus for recognizing a small target based on vision.
Background
Traditional rodent-activity recognition generally relies on conventional means such as the powder method, the mousetrap method, the sticky-board method and visual inspection, each of which has significant drawbacks. In the powder method, powder is spread over a designated area; whether a creature has passed is judged from the traces it leaves, and whether that creature was a mouse is judged from the footprints. This approach can only tell that a mouse passed: it cannot record the specific times or patterns of mouse activity, and it is limited by many environmental factors such as weather, humidity, wind, and disturbance by pets or other animals. The mousetrap and sticky-board methods are suitable only for catching or killing mice. If a caught mouse is not removed in time, it dies while trapped, but the germs it carries do not disappear with its death; on the contrary, as the uncleared carcass molds and decays, germs can grow and spread at a geometric rate. Killing the mouse therefore does not kill the germs and may even accelerate their spread, and the carcass can cause secondary contamination of animals above the mouse in the food chain, such as cats, snakes and dogs. Visual inspection amounts to passively waiting for a sighting and usually requires a dedicated person to keep watch, so the rodent information it yields is unreliable.
Although the prior art includes methods that recognize small creatures using image recognition technology, continuous photographing by a camera produces a large amount of image data. When all of this data is stored in a single database, the sheer volume makes viewing slow, and the user cannot quickly obtain the mouse-detection pictures for the current time.
Disclosure of Invention
It is an object of the present application to overcome the above problems or to at least partially solve or mitigate the above problems.
According to an aspect of the present application, there is provided a vision-based small object recognition method, including:
an image acquisition step: collecting an image of a certain area;
small target identification: identifying a specific small target in the image;
an image storage step: storing the image in a first database if the small target is present in the image;
an image cleaning step: according to the acquisition time of the image, performing data cleaning on the image stored in a first database and storing the image in a second database; and
a data display step: displaying the images in the first database and/or the second database together with information related to the images.
The method uses vision technology to recognize and detect mice in a designated area; the acquired data are stored in a first database, and a subset of the pictures is stored in a second database according to image acquisition time, so that a user can quickly view the pictures stored in the second database.
Optionally, after the small target identifying step, the method further includes:
and (3) counting the quantity: and under the condition that the small target exists in the image, counting the number value of the small target, temporarily storing the number value, carrying out small target identification on the image behind the image, adjusting the number value according to the identification result, and taking the current number value as the increment of the counting value of the number of the small targets when the small target cannot be identified from the subsequent image.
Optionally, the first database includes one or more of the following: a big data platform, a data warehouse, a real-time database, a time-series database, a distributed relational database, a distributed non-relational database, or a database based on distributed file storage; the second database is a relational database.
Optionally, the data displaying step includes:
dividing the collected images into different groups according to time labels, and displaying the corresponding image data list according to the time label selected by the user; and
querying the first database and/or the second database according to the user's query conditions, and displaying the query results to the user.
Optionally, the number of the image capturing devices is two or more, and the image capturing devices are arranged at different positions in the same area or in different areas.
Optionally, the small target identification step includes: identifying a specific small target in the image through artificial intelligence.
According to another aspect of the present application, there is also provided a vision-based small object recognition apparatus, including:
an image acquisition module configured to acquire an image of a region;
a small target identification module configured to identify a specific small target in the image;
an image storage module configured to store the image in a first database if the small target is present in the image;
an image cleaning module configured to perform data cleaning on the image stored in a first database according to the acquisition time of the image and store the image in a second database; and
a data display module configured to display information related to the image to enable a user to view the information and/or the image.
The device uses vision technology to recognize and detect mice in a designated area; the acquired data are stored in the first database, and a subset of the pictures is stored in the second database according to image acquisition time, so that a user can quickly view the pictures stored in the second database.
Optionally, the apparatus further includes a number counting module, the small target identification module is further connected to the number counting module, and the number counting module is configured to count a number value of the small target when the small target is detected to be present in the image, temporarily store the number value, perform small target identification on an image subsequent to the image, adjust the number value according to an identification result, and when the small target cannot be identified from a subsequent image, use a current number value as an increment of the counted number value of the small target.
Optionally, the data display module is configured to divide the collected images into different groups according to time labels and display the corresponding image data list according to the time label selected by the user, and to query the first database and/or the second database according to the user's query conditions and display the query results to the user.
Optionally, the image acquisition module and the small target identification module are connected through the internet of things.
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
Drawings
Some specific embodiments of the present application will be described in detail hereinafter by way of illustration and not limitation with reference to the accompanying drawings. The same reference numbers in the drawings identify the same or similar elements or components. Those skilled in the art will appreciate that the drawings are not necessarily drawn to scale. In the drawings:
FIG. 1 is a schematic flow chart diagram illustrating one embodiment of a vision-based small object recognition method in accordance with the present application;
FIGS. 2, 3 and 4 are schematic representations of recognition results obtained according to the method of the present application;
FIG. 5 is a schematic flow chart diagram illustrating another embodiment of a vision-based small object recognition method in accordance with the present application;
FIG. 6 is a schematic diagram of one embodiment of a data display interface;
FIG. 7 is a schematic view of another embodiment of a data display interface;
FIG. 8 is a schematic view of another embodiment of a data display interface;
FIG. 9 is a schematic flow chart diagram illustrating one embodiment of a vision-based small object recognition device in accordance with the present application;
FIG. 10 is a schematic flow chart diagram illustrating another embodiment of a vision-based small object recognition device in accordance with the present application;
FIG. 11 is a block diagram of one embodiment of a computing device of the present application;
FIG. 12 is a block diagram of one embodiment of a computer-readable storage medium of the present application.
Detailed Description
The above and other objects, advantages and features of the present application will become more apparent to those skilled in the art from the following detailed description of specific embodiments thereof, taken in conjunction with the accompanying drawings.
An embodiment of the present application provides a method for small target recognition based on vision, and fig. 1 is a schematic flow chart of an embodiment of the method for small target recognition based on vision according to the present application. The method comprises the following steps:
S100, an image acquisition step: acquiring an image of a certain area;
S200, a small target identification step: identifying a specific small target in the image;
S400, an image storage step: storing the image in a first database if the small target is present in the image;
S500, an image cleaning step: performing data cleaning on the images stored in the first database according to their acquisition time and storing the result in a second database; and
S600, a data display step: displaying the images in the first database and/or the second database together with information related to the images.
The method uses vision technology to recognize and detect mice in a designated area; the acquired data are stored in a first database, and a subset of the pictures is stored in a second database according to image acquisition time, so that a user can quickly view the pictures stored in the second database.
In the image acquisition step, an image of the detection area may be captured by a network camera, which optionally has a night-vision function so that pictures can be taken even without a light source at night. The number of cameras may be one or more, determined by the extent of the area. Multiple cameras can be arranged at different positions in the same area, or in different areas, so that multiple areas can be remotely acquired and processed at the same time. The frequency at which images are acquired may be set as desired, for example once per second. The collected pictures are uploaded to a video data center through the Internet of Things, and the video data center processes them. For example, the video data center stores a small target recognition algorithm that can recognize a small target in an image. The small target may be a small organism, such as a mouse.
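A minimal sketch of this acquisition loop, under stated assumptions: `camera.read()` and `uploader.send()` are hypothetical stand-ins for the network camera and the Internet-of-Things upload (they are not named in the patent), and the one-second interval is the example given in the text.

```python
import time
from datetime import datetime, timedelta


def capture_times(start: datetime, interval_s: float, count: int):
    """Return the scheduled acquisition timestamps for a fixed interval."""
    return [start + timedelta(seconds=i * interval_s) for i in range(count)]


def acquisition_loop(camera, uploader, interval_s: float = 1.0):
    """Acquire one frame per interval and push it to the video data center.

    `camera` and `uploader` are hypothetical interfaces; a night-vision
    camera lets this run without a light source at night.
    """
    while True:
        frame = camera.read()                 # grab one frame
        uploader.send(frame, time.time())     # upload over the IoT link
        time.sleep(interval_s)                # wait for the next slot
```

The pure helper `capture_times` makes the schedule testable without real camera hardware.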
In this step, small object recognition may be performed by artificial intelligence or the like. Artificial intelligence may include machine learning, deep learning, and neural networks. For example, small object recognition may be performed by a particle swarm algorithm, a genetic algorithm, a greedy algorithm, or an ant colony algorithm.
Optionally, in the small target identification step, a specific small target in the image is identified through artificial intelligence. For example, a particular small target in the image may be identified by a trained deep learning model. The step can realize the automatic identification of single or multiple mice.
The identification step includes a background model establishing step: the first image in an image set is taken as the background model, and an initial sample set is established for each pixel point of the first image, the initial sample set comprising the pixel values of the points adjacent to that pixel point. A pixel classification step follows: the sample set of each pixel point in a second image of the image set (likewise built from its adjacent pixel points) is compared with the initial sample set of the corresponding pixel point in the first image, and the pixel points of the second image are divided into foreground points and background points. A contour determination step then determines the contours formed by the pixel points classified as foreground in the second image, forming a foreground-region picture. Finally, a target detection step classifies the foreground-region pictures with the trained classification model to determine the target to be detected. The second image may be the image acquired in the image acquisition step. Using the first image as the background model greatly reduces the memory consumed in building the background model and speeds up the operation; compared with the traditional feature-point matching approach, the artificial-intelligence classification method has higher classification precision and is more flexible.
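The per-pixel classification described above can be sketched as follows. This is a simplified sketch, not the patent's implementation: it compares each pixel's own grey value against its background sample set (rather than a full neighbour sample set), represents images as nested lists, and the threshold value is an illustrative assumption.

```python
def classify_pixel(sample_set, value, thresh):
    """Foreground if the value differs from *every* background sample by
    more than the threshold (the 'all differences > second threshold' rule)."""
    return all(abs(value - s) > thresh for s in sample_set)


def segment(background_samples, frame, thresh=20):
    """Binarize a frame (nested lists of grey values) against per-pixel
    sample sets; background pixels are fed back into the model, as the
    patent's pixel classification step describes."""
    mask = []
    for r, row in enumerate(frame):
        mask_row = []
        for c, v in enumerate(row):
            fg = classify_pixel(background_samples[r][c], v, thresh)
            if not fg:
                background_samples[r][c].append(v)  # update background model
            mask_row.append(1 if fg else 0)
        mask.append(mask_row)
    return mask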
Optionally, before the background model establishing step, the identification step further includes a classification model training step: the classification model is trained with a background sample set and a sample set of targets to be detected, and the trained classification model is obtained once its accuracy reaches a first threshold. In this way a high-precision classifier can be built by applying artificial intelligence to a modest number of positive (small target) and negative (background) samples; training on the prepared classification sample sets produces an artificial-intelligence model that effectively separates target samples from negative samples.
Optionally, the pixel classification step includes: taking the difference between each element in the second sample set and each element in the initial sample set; if all difference values are greater than a second threshold, the corresponding pixel point is set as a foreground point, otherwise as a background point; the pixel points of the second image are binarized according to this foreground/background division, and the pixel points set as background are added to the background model.
Optionally, the contour extraction step includes: eliminating discrete foreground points through morphological opening and closing operations, and setting the pixel points inside a region enclosed by foreground points as foreground points through a fill operation. In this way continuous contours can be obtained and noise interference eliminated.
Optionally, after the target detection step, the method further includes a classification model updating step: when the trained classification model classifies a foreground-region picture as background, that result is fed back to update the background model. Feeding results back in this way makes the model more accurate and improves the speed and effect of subsequent target detection.
Fig. 2, 3 and 4 are schematic diagrams of recognition results obtained with the method of the present application. Figs. 2 to 4 show images taken during an experiment in a restaurant; small target identification was applied to them, and the identified small targets are outlined with contour lines in the images. In fig. 2, the detection result near the field of view is one mouse, with a relatively complete contour. In fig. 3 there are two mice which, although far from the camera, are still detected and show easily distinguishable contours. In fig. 4 there are three mice, and the detection result likewise counts three. The method can also detect more mice simultaneously, so it identifies small targets accurately.
Optionally, fig. 5 is a schematic flow chart diagram of another embodiment of a vision-based small-object recognition method according to the present application. After the step of S200 identifying the small target, the method further includes:
S300, a number counting step: when the small target is present in the image, counting the number of small targets and temporarily storing that value; performing small target identification on the images following this image and adjusting the value according to the identification results; and, when the small target can no longer be identified in a subsequent image, adding the current value as an increment to the small-target count statistic.
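One plausible reading of this counting rule (an assumption: the patent does not say exactly how the provisional value is "adjusted", so taking the running maximum per sighting is an interpretation, not the definitive behaviour) can be sketched as:

```python
class SmallTargetCounter:
    """Hold a provisional count while targets are visible; commit it to the
    running statistic once no target can be identified any more."""

    def __init__(self):
        self.total = 0    # committed statistic
        self.pending = 0  # provisional count for the current sighting

    def observe(self, n_detected: int) -> int:
        if n_detected > 0:
            # adjust the provisional value from the latest recognition result
            self.pending = max(self.pending, n_detected)
        elif self.pending > 0:
            # target no longer identifiable: commit the increment
            self.total += self.pending
            self.pending = 0
        return self.total
```

Feeding the per-frame detection counts `1, 2, 2, 0` would commit an increment of 2 when the targets disappear.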
The pictures containing detection results are stored, and historical data and latest data are obtained in time order. Historical data may cover one day, several days, one month or even longer, so the user can conveniently extract the detection count for any period at any time. The latest data may cover the most recent minutes, hours, or longer. From the historical and latest data, the detection situation of mice can be reviewed and the system's missed-detection rate and false-detection rate can be calculated. For example, if a mouse appeared within the camera's range on a certain day but was not detected, a missed detection is recorded for that day; and if a mouse was counted but later disappears from the image, its count is a false detection with respect to the subsequent statistics.
The application uses artificial intelligence to recognize the images, which provides anti-interference capability. The background is disturbed by factors such as illumination, occlusions and the surrounding environment, which affect the detection result. Artificial intelligence can eliminate the interference of background changes and the surrounding environment on mouse detection, avoid spurious captures, and ensure the accuracy of mouse detection and identification.
The time interval at which images are taken can be customized. The acquisition interval is set according to the specific environment; generally three pictures are generated per second, and if mice appear frequently at the current site, the shooting interval can be set smaller.
In the image storage step S400, the acquired images are transmitted to the first database in real time by big-data stream processing, where the first database is a non-relational database. The first database may comprise one or more of the following: a big data platform, a data warehouse, a real-time database, a time-series database, a distributed relational database, a distributed non-relational database, or a database based on distributed file storage. There may be several first databases.
In the image cleaning step S500, the images stored in the first database are data-cleaned according to their acquisition time and stored in the second database. The most recently collected data may be stored, based on acquisition time, in the second database, which is a relational database. When a user accesses historical data through a website or an application (APP), the pictures or information are retrieved from the historical reports in the first database; when the user wants the latest data, it is obtained from an ordinary relational database or a cache database. It will be appreciated that the latest data may also be obtained via the relational database. Data cleaning is particularly important because the massive raw data in the first database contains incomplete pictures, inconsistent stored information and abnormal data, which would affect query and display on the user's terminal. Cleaning mainly unifies the data formats of the different first databases through consistency checks and the handling of invalid and missing values, thereby improving data quality.
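A minimal sketch of such a cleaning pass; the record field names (`camera_id`, `timestamp`, `count`) are illustrative assumptions, not taken from the patent, and a real pipeline would also validate the image payload itself.

```python
REQUIRED = ("camera_id", "timestamp", "count")


def clean_record(rec: dict):
    """Consistency-check one detection record before it enters the second
    database; return a normalized copy, or None to drop invalid rows."""
    if any(rec.get(k) is None for k in REQUIRED):
        return None                      # missing value: drop the record
    try:
        count = int(rec["count"])
    except (TypeError, ValueError):
        return None                      # invalid value: drop the record
    if count < 0:
        return None                      # abnormal data: drop the record
    return {
        "camera_id": str(rec["camera_id"]),
        "timestamp": str(rec["timestamp"]),  # unify formats across sources
        "count": count,
    }
```

Records surviving this filter are what the user's terminal queries and displays.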
Because relational databases have limited storage capacity, a storage bottleneck can arise. With the approach above, the latest data are stored in the relational database while historical data are stored in a big data warehouse or a distributed database, so no data are lost, overflow of data or computer resources is prevented, and fast access to the latest data is preserved.
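The routing decision behind this split can be sketched as follows; the seven-day window is an assumed example (the patent only says "most recently collected data"), and the return labels are illustrative.

```python
from datetime import datetime, timedelta


def route(record_time: datetime, now: datetime, window_days: int = 7) -> str:
    """Decide where a record is served from: the relational database keeps
    the most recent window, older history stays in the big-data store."""
    if now - record_time <= timedelta(days=window_days):
        return "relational"   # fast access to the latest data
    return "warehouse"        # distributed / big-data storage for history
```

A query layer would consult `route` (or an equivalent time predicate) to pick the backing store transparently for the user.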
Optionally, the data display step S600 includes: dividing the collected images into different groups according to time labels, and displaying the corresponding image data list according to the time label selected by the user;
and querying the first database and/or the second database according to the user's query conditions, and displaying the query results to the user.
FIG. 6 is a schematic diagram of one embodiment of a data display interface. In fig. 6, the various data lists are outlined with boxes for ease of understanding; it should be understood that in an actual display these boxes may not be present.
In the data display interface of fig. 6, the acquired images are divided into different groups according to time labels, which include: the current day, the current week, the last month, the last 3 months or longer; the corresponding image data list is displayed according to the time label the user selects. Fig. 7 is a diagram of another embodiment of a data display interface, through which the user can view the pictures taken at the corresponding times. The user can enter a query condition in the input box, which may be an interval of capture counts; the first database and/or the second database are then queried according to that condition and the results are displayed to the user. For example, suppose the user enters 50 to 100 in the capture-count input box and selects the time label "last 3 months", while the second database stores the current week's data and the first database stores all data: both databases are queried for records with statistics between 50 and 100, and the resulting data list is displayed to the user.
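The query in this example reduces to one predicate that can be applied to records from either database; this is a sketch under the assumption that records expose a `count` field and a comparable `timestamp` (neither field name comes from the patent).

```python
def query(records, lo, hi, since):
    """Return records whose capture count lies in [lo, hi] and whose
    timestamp is not older than the user's selected time label.
    The same filter serves the first and the second database."""
    return [r for r in records
            if lo <= r["count"] <= hi and r["timestamp"] >= since]
```

In practice each database would translate this predicate into its own query language (e.g. a WHERE clause for the relational store), but the filtering semantics are the same.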
FIG. 8 is a schematic diagram of another embodiment of a data display interface. The diagram shows the result of small target statistics on pictures taken by a plurality of cameras. The horizontal axis indicates the number of the image pickup device, and the vertical axis indicates the statistical value of the small target over a certain period of time. The method can carry out statistics according to day, week, month and year, and can also carry out statistical analysis respectively for each camera device, all camera devices and camera devices in certain areas.
Optionally, after the data display step, the method further includes an early warning step: generating a prompt instruction when the statistical value exceeds a set threshold.
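The early warning check is a simple threshold comparison; in this sketch the shape of the returned prompt instruction (a dict with an `action` and a `message`) is an illustrative assumption.

```python
def check_alert(stat_value: int, threshold: int):
    """Return a prompt instruction when the statistic exceeds the set
    threshold, per the optional early warning step; None otherwise."""
    if stat_value > threshold:
        return {"action": "alert",
                "message": f"small-target count {stat_value} exceeds {threshold}"}
    return None
```

A real deployment would hand this instruction to a notification channel (APP push, SMS, etc.) rather than return it.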
According to the method, images of the detection area are acquired through network cameras, so rodent-activity information can be counted promptly and accurately, with high precision and high timeliness. The acquired image information is transmitted over the network to the data storage device, and big-data technology ensures that no hardware bottleneck occurs while storing and computing over massive data. After the data enter the data warehouse, artificial-intelligence analysis automatically determines whether mouse information is present in each image: if not, the data are ignored; if mouse information is identified, the capture information is fully recorded by the program in the relevant data tables of the data warehouse, and the related image data are retained for later review and statistics. The method suits various monitored sites, supports uninterrupted 24-hour monitoring, offers a visual statistical analysis function with real and reliable data, aggregates data automatically, raises early warnings and reminders automatically, and saves substantial labor and material costs.
The embodiment of the application also provides a small target recognition device based on vision. FIG. 9 is a schematic flow chart diagram illustrating one embodiment of a vision-based small object recognition apparatus according to the present application. The device includes:
an image acquisition module 100 configured to acquire an image of a region;
a small target recognition module 200 configured to recognize a specific small target in the image;
an image storage module 400 configured to store the image in a first database if the small target is present in the image;
an image cleaning module 500 configured to perform data cleaning on the image stored in the first database according to the acquisition time of the image and store the image in the second database; and
a data display module 600 configured to display information related to the image to enable a user to view the information and/or the image.
The device uses vision technology to recognize and detect mice in a designated area; the acquired data are stored in the first database, and a subset of the pictures is stored in the second database according to image acquisition time, so that a user can quickly view the pictures stored in the second database.
FIG. 10 is a schematic flow chart of another embodiment of a vision-based small target recognition device according to the present application. Optionally, the device further includes a number counting module 300 connected to the small target identification module. The number counting module is configured to: when the small target is detected in the image, count the number of small targets and temporarily store that value; perform small target identification, through artificial intelligence, on the images following that image and adjust the value according to the identification results; and, when the small target can no longer be identified in a subsequent image, add the current value as an increment to the small-target count statistic.
Optionally, the data display module 600 is configured to: dividing the collected images into different groups according to the time labels, and displaying corresponding image data lists according to the time labels selected by the user; and inquiring in the first database and/or the second database according to the inquiry condition of the user, and displaying the inquiry result to the user.
Optionally, the device further includes an early warning module, connected to the data display module, and configured to generate a prompt instruction when the statistical value exceeds a set threshold.
There is further provided a computing device; referring to fig. 11, it comprises a memory 1120, a processor 1110 and a computer program stored in the memory 1120 and executable by the processor 1110. The computer program is stored in a space 1130 for program code in the memory 1120 and, when executed by the processor 1110, implements the method steps 1131 for performing any of the methods according to the application.
An embodiment of the application also provides a computer-readable storage medium. Referring to FIG. 12, the computer-readable storage medium comprises a storage unit for program code, which is provided with a program 1131' for performing the steps of the method according to the application; the program is executed by a processor.
An embodiment of the application also provides a computer program product containing instructions. When the computer program product runs on a computer, it causes the computer to perform the method steps according to the application.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed by a computer, the procedures or functions described in accordance with the embodiments of the application are performed, in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) connection. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
Those skilled in the art will further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or a combination of both. To clearly illustrate this interchangeability of hardware and software, the various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be implemented by a program, and the program may be stored in a computer-readable storage medium. The storage medium is a non-transitory medium, such as a random access memory, read-only memory, flash memory, hard disk, solid-state disk, magnetic tape, floppy disk, optical disk, or any combination thereof.
The above description covers only preferred embodiments of the present application, but the protection scope of the present application is not limited thereto. Any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A vision-based small object recognition method, comprising:
an image acquisition step: collecting an image of a certain area;
small target identification: identifying a specific small target in the image;
an image storage step: storing the image in a first database if the small target is present in the image;
an image cleaning step: according to the acquisition time of the image, performing data cleaning on the image stored in the first database and storing the image in a second database, wherein the data cleaning comprises the following steps: unifying the data formats of different first databases through consistency check, and processing invalid values and missing values; and
a data display step: displaying the images in the first database and/or the second database and information related to the images;
the identifying the specific small target in the image comprises:
establishing a background model: taking a first image in an image set as a background model, and establishing an initial sample set for each pixel point of the first image, wherein the initial sample set comprises pixel values of pixel points adjacent to the pixel point;
classifying pixel points: comparing a second sample set of pixel points in a second image of the image set with the initial sample set of corresponding pixel points in the first image, and dividing the pixel points of the second image into foreground points or background points, wherein the sample set of the pixel points in the second image comprises pixel points adjacent to the pixel points;
a contour determination step: determining the contours formed by the pixel points classified as foreground in the second image, so as to form a foreground region picture; and
a target detection step: classifying the foreground region pictures by using the trained classification model, so as to determine the target to be detected.
2. The method of claim 1, wherein after the small object identifying step, the method further comprises:
a quantity statistics step: in a case where the small target is present in the image, counting the number of small targets, temporarily storing the count, performing small-target recognition on images subsequent to the image, adjusting the count according to the recognition results, and, when the small target cannot be recognized from a subsequent image, taking the current count as an increment of the small-target count statistic.
3. The method of claim 1, wherein the first database comprises one or more of the following databases: the system comprises a big data platform, a data warehouse, a real-time database, a time sequence database, a distributed relational database, a distributed non-relational database and a database based on distributed file storage; the second database is a relational database.
4. The method according to any one of claims 1 to 3, wherein the data displaying step comprises:
dividing the collected images into different groups according to the time labels, and displaying corresponding image data lists according to the time labels selected by the user;
and inquiring in the first database and/or the second database according to the inquiry condition of the user, and displaying the inquiry result to the user.
5. The method according to claim 4, wherein there are two or more image acquisition devices, arranged at different positions in the same area or in different areas.
6. The method of claim 1, wherein the small target identification step comprises: identifying a specific small target in the image through artificial intelligence.
7. A vision-based small object recognition device, comprising:
an image acquisition module configured to acquire an image of a region;
a small target identification module configured to identify a specific small target in the image;
an image storage module configured to store the image in a first database if the small target is present in the image;
an image cleansing module configured to perform data cleaning on the image stored in the first database and store it in a second database according to the acquisition time of the image, the data cleaning comprising: unifying the data formats of different first databases through consistency checks, and processing invalid values and missing values; and
a data display module configured to display information related to the image to enable a user to view the information and/or the image;
the small target recognition module recognizing a specific small target in the image comprises:
taking a first image in an image set as a background model, and establishing an initial sample set for each pixel point of the first image, wherein the initial sample set comprises pixel values of pixel points adjacent to the pixel point;
comparing a second sample set of pixel points in a second image of the image set with the initial sample set of corresponding pixel points in the first image, and dividing the pixel points of the second image into foreground points or background points, wherein the sample set of the pixel points in the second image comprises pixel points adjacent to the pixel points;
determining the contours formed by the pixel points classified as foreground in the second image, so as to form a foreground region picture; and
classifying the foreground region pictures by using the trained classification model, so as to determine the target to be detected.
8. The apparatus of claim 7, further comprising a quantity statistics module connected to the small-target recognition module, wherein the quantity statistics module is configured to: in a case where the small target is detected in the image, count the number of small targets, temporarily store the count, perform small-target recognition on images subsequent to the image, adjust the count according to the recognition results, and, when the small target cannot be recognized from a subsequent image, take the current count as an increment of the small-target count statistic.
9. The apparatus of claim 7 or 8, wherein the data display module is configured to: dividing the collected images into different groups according to the time labels, and displaying corresponding image data lists according to the time labels selected by the user; and inquiring in the first database and/or the second database according to the inquiry condition of the user, and displaying the inquiry result to the user.
10. The apparatus of claim 7, wherein the image acquisition module and the small target recognition module are connected through the internet of things.
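The per-pixel background-modeling scheme recited in claims 1 and 7 (a sample set per pixel built from neighboring pixel values, with each new pixel classified as foreground or background by comparison against that set) resembles the ViBe family of background-subtraction algorithms, though the patent does not name one. A minimal NumPy sketch of the first two steps is shown below; the sample-set size, match radius, and match count are illustrative parameters, and drawing samples from the 3x3 neighborhood (including the pixel itself) is a simplifying assumption:

```python
import numpy as np

def init_sample_set(first_img, n_samples=8, rng=None):
    """Build the initial per-pixel sample set from the first frame:
    each pixel stores n_samples values drawn from its 3x3
    neighborhood (edge pixels reuse their border values)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w = first_img.shape
    pad = np.pad(first_img, 1, mode="edge")
    samples = np.empty((n_samples, h, w), dtype=first_img.dtype)
    for k in range(n_samples):
        # Random offset into the 3x3 neighborhood of every pixel.
        dy, dx = rng.integers(0, 3, size=2)
        samples[k] = pad[dy:dy + h, dx:dx + w]
    return samples

def classify(frame, samples, radius=20, min_matches=2):
    """Classify each pixel of a later frame: background if at least
    `min_matches` stored samples lie within `radius` of its current
    value, foreground otherwise. Returns a boolean foreground mask."""
    diff = np.abs(samples.astype(np.int16) - frame.astype(np.int16))
    matches = (diff < radius).sum(axis=0)
    return matches < min_matches  # True = foreground point
```

The resulting foreground mask would then feed the contour-determination step (e.g., connected-component or contour extraction) to produce the foreground region pictures that the trained classifier labels.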
CN201810523284.0A 2018-05-28 2018-05-28 Vision-based small target identification method and device Active CN108829762B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810523284.0A CN108829762B (en) 2018-05-28 2018-05-28 Vision-based small target identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810523284.0A CN108829762B (en) 2018-05-28 2018-05-28 Vision-based small target identification method and device

Publications (2)

Publication Number Publication Date
CN108829762A CN108829762A (en) 2018-11-16
CN108829762B true CN108829762B (en) 2020-09-04

Family

ID=64146224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810523284.0A Active CN108829762B (en) 2018-05-28 2018-05-28 Vision-based small target identification method and device

Country Status (1)

Country Link
CN (1) CN108829762B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109511642B (en) * 2018-11-29 2021-05-11 陕西师范大学 Grassland deratization method and system based on image and cluster analysis
CN109934099A (en) * 2019-01-24 2019-06-25 北京明略软件系统有限公司 Reminding method and device, storage medium, the electronic device of placement location
CN109886129B (en) * 2019-01-24 2020-08-11 北京明略软件系统有限公司 Prompt message generation method and device, storage medium and electronic device
CN110472492A (en) * 2019-07-05 2019-11-19 平安国际智慧城市科技股份有限公司 Target organism detection method, device, computer equipment and storage medium
CN110532893A (en) * 2019-08-05 2019-12-03 西安电子科技大学 Icon detection method in the competing small map image of electricity

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102317929A (en) * 2009-02-18 2012-01-11 A9.Com, Inc. Method and system for image matching
CN105373813A (en) * 2015-12-24 2016-03-02 Sichuan Whayer Information Industry Co., Ltd. Equipment state image monitoring method and device
CN107358190A (en) * 2017-07-07 2017-11-17 Guangdong Zhongxing Electronics Co., Ltd. Image key area management method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040037450A1 (en) * 2002-08-22 2004-02-26 Bradski Gary R. Method, apparatus and system for using computer vision to identify facial characteristics
US8903198B2 (en) * 2011-06-03 2014-12-02 International Business Machines Corporation Image ranking based on attribute correlation
CN104284162A (en) * 2014-10-29 2015-01-14 广州中国科学院软件应用技术研究所 Video retrieval method and system

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102317929A (en) * 2009-02-18 2012-01-11 A9.Com, Inc. Method and system for image matching
CN105373813A (en) * 2015-12-24 2016-03-02 Sichuan Whayer Information Industry Co., Ltd. Equipment state image monitoring method and device
CN107358190A (en) * 2017-07-07 2017-11-17 Guangdong Zhongxing Electronics Co., Ltd. Image key area management method and device

Also Published As

Publication number Publication date
CN108829762A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108829762B (en) Vision-based small target identification method and device
CN108874910B (en) Vision-based small target recognition system
CN108717523B (en) Sow oestrus behavior detection method based on machine vision
CN110619620B (en) Method, device and system for positioning abnormity causing surface defects and electronic equipment
CN108319926A (en) A kind of the safety cap wearing detecting system and detection method of building-site
CN109922310A (en) The monitoring method of target object, apparatus and system
CN110991222B (en) Object state monitoring and sow oestrus monitoring method, device and system
CN111898581A (en) Animal detection method, device, electronic equipment and readable storage medium
CN111310596A (en) Animal diseased state monitoring system and method
CN115482465A (en) Crop disease and insect pest prediction method and system based on machine vision and storage medium
CN111275705A (en) Intelligent cloth inspecting method and device, electronic equipment and storage medium
CN115661650A (en) Farm management system based on data monitoring of Internet of things
CN115861721B (en) Livestock and poultry breeding spraying equipment state identification method based on image data
CN116543347A (en) Intelligent insect condition on-line monitoring system, method, device and medium
CN114511820A (en) Goods shelf commodity detection method and device, computer equipment and storage medium
CN111797831A (en) BIM and artificial intelligence based parallel abnormality detection method for poultry feeding
US20210022322A1 (en) Method and system for extraction of statistical sample of moving objects
CN112528823B (en) Method and system for analyzing batcharybus movement behavior based on key frame detection and semantic component segmentation
Wang et al. Automatic identification and analysis of multi-object cattle rumination based on computer vision
CN111539350A (en) Intelligent identification method for crop diseases and insect pests
CN111523472A (en) Active target counting method and device based on machine vision
CN116189076A (en) Observation and identification system and method for bird observation station
CN115468598A (en) Intelligent monitoring method and system for pigsty environment
CN113628253A (en) Method and system for accurately detecting individual health of animal and storage medium
KR102424901B1 (en) method for detecting estrus of cattle based on object detection algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant