CN109101547B - Management method and device for wild animals - Google Patents

Management method and device for wild animals

Info

Publication number
CN109101547B
CN109101547B
Authority
CN
China
Prior art keywords
target
user
data
training
image data
Prior art date
Legal status
Active
Application number
CN201810730913.7A
Other languages
Chinese (zh)
Other versions
CN109101547A (en)
Inventor
王汉洋
王弘尧
刘鑫
董硕
Current Assignee
Beijing Giai Intelligent Technology Co ltd
Original Assignee
Beijing Giai Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Giai Intelligent Technology Co ltd
Priority to CN201810730913.7A
Publication of CN109101547A
Application granted
Publication of CN109101547B
Legal status: Active (current)
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application discloses a management method and device for wild animals. The management method for wild animals comprises: collecting image data of wild animals; inputting the image data into an image recognition device and recognizing target objects; and storing the target objects in a classified manner according to preset target addresses. The application solves the technical problem that monitoring and management of wild animals are difficult to realize, and achieves the following technical effects: a large amount of video data shot by infrared cameras can be processed automatically in batches, wild animals can be classified by species using a neural network method, and their occurrence frequency can be counted. This greatly reduces the time that animal-protection personnel spend on data processing and ensures the accuracy of identification and classification.

Description

Management method and device for wild animals
Technical Field
The application relates to the field of machine learning and image recognition, in particular to a management method and device for wild animals.
Background
At present, infrared cameras are widely used in wildlife research, monitoring and protection activities at home and abroad to acquire images or videos of wild animals. When an animal appears within the shooting range of an infrared camera, the camera is triggered by the animal's body temperature and takes a shot. Related organizations such as animal-protection groups periodically retrieve the data from the cameras and process it centrally.
The existing approach of identifying animals in material shot by infrared cameras has the following shortcomings: the pictures or videos are often blurry, unclear or taken at awkward angles, so the frames in which animals appear must be found by human eyes. Within as little as a few years, the infrared images and videos accumulated by a single research organization can reach a volume whose review and statistics are difficult to complete manually. Recognizing every picture by eye, dragging it with a mouse into different folders for classification and then analyzing the data is a process that can hardly be completed by manpower alone.
For the problem in the related art that monitoring and management of wild animals are not easy to realize, no effective solution has been provided at present.
Disclosure of Invention
The application mainly aims to provide a management method and a management device for wild animals, so as to solve the problem that monitoring and management of wild animals are not easy to realize.
In order to achieve the above object, according to one aspect of the present application, there is provided a management method for wild animals.
The management method for wild animals according to the present application comprises:
collecting image data of wild animals;
inputting the image data into an image recognition device and recognizing a target object; and
storing the target objects in a classified manner according to preset target addresses.
Further, inputting the image data into an image recognition device and recognizing the object includes:
receiving user-selected tag data;
obtaining training data according to a preset image training degree input by a user; and
generating an image recognition model for recognizing an expected recognition target according to the training data.
Further, the step of storing the target objects according to the preset target addresses in a classified manner comprises the following steps:
classifying and processing the image data of the wild animal pictures to obtain target picture storage addresses;
classifying and storing the wild animal picture image data through the target storage address; and
classifying the classified storage results into a plurality of picture target folders.
Further, the step of storing the target objects according to the preset target addresses in a classified manner comprises the following steps:
classifying the wild animal video image data to obtain a target video storage address;
classifying and storing the wild animal video image data through the target storage address; and
classifying the classified storage results into a plurality of video target folders.
Further, after the target objects are classified and stored according to preset target addresses, the method further comprises the following steps:
counting, according to the folder names stored in a classified manner under the preset target addresses, the target objects associated with each folder name; and
generating a report of the occurrence frequency of wild animal species according to the target object statistics.
In order to achieve the above object, according to another aspect of the present application, there is provided a management device for wild animals.
The management device for wild animals according to the present application comprises:
the acquisition module is used for acquiring image data of wild animals;
the input module is used for inputting the image data into an image recognition device and recognizing a target object; and
the storage module is used for storing the target objects in a classified manner according to preset target addresses.
Further, the input module includes:
a receiving unit for receiving tag data selected by a user;
the training unit is used for obtaining training data according to a preset image training degree input by a user; and
the generating unit is used for generating an image recognition model for recognizing an expected recognition target according to the training data.
Further, the storage module includes:
the first classification unit is used for classifying and processing the wild animal picture image data to obtain a target picture storage address;
the first storage unit is used for storing the wild animal picture image data in a classified manner through the target storage address; and
the first classification unit classifies the classified storage results into a plurality of picture target folders.
Further, the storage module includes:
the second classification unit is used for classifying and processing the wild animal video image data to obtain a target video storage address;
the second storage unit is used for storing the wild animal video image data in a classified manner through the target storage address; and
the second classification unit is used for classifying the classified storage results into a plurality of video target folders.
Further, the management apparatus further includes:
the statistical module is used for counting, according to the folder names stored in a classified manner under the preset target addresses, the target objects associated with each folder name; and
the generating module is used for generating a report of the occurrence frequency of wild animal species according to the target object statistical result.
In the embodiments of the application, image data of wild animals is collected and input into an image recognition device, so that the target objects in the images are recognized; the target objects are then stored in a classified manner according to preset target addresses. This achieves classified monitoring of wild animals and solves the technical problem that monitoring and management of wild animals are not easy to realize.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a schematic illustration of a management method for wild animals according to a first embodiment of the present application;
FIG. 2 is a schematic illustration of a management method for wildlife according to a second embodiment of the present application;
FIG. 3 is a schematic illustration of a management method for wildlife according to a third embodiment of the present application;
FIG. 4 is a schematic illustration of a management method for wildlife according to a fourth embodiment of the present application;
FIG. 5 is a schematic illustration of a management method for wild animals according to a fifth embodiment of the present application;
FIG. 6 is a schematic view of a management device for wildlife according to a first embodiment of the present application;
FIG. 7 is a schematic view of a management device for wildlife according to a second embodiment of the present application;
fig. 8 is a schematic view of a management device for wildlife according to a third embodiment of the present application;
fig. 9 is a schematic view of a management device for wildlife according to a fourth embodiment of the present application;
fig. 10 is a schematic view of a management device for wildlife according to a fifth embodiment of the present application; and
fig. 11 is a schematic flow diagram of a management method for wild animals according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
Furthermore, the terms "mounted," "disposed," "provided," "connected," and "sleeved" are to be construed broadly. For example, it may be a fixed connection, a removable connection, or a unitary construction; can be a mechanical connection, or an electrical connection; may be directly connected, or indirectly connected through intervening media, or may be in internal communication between two devices, elements or components. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art as appropriate.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
According to an embodiment of the present application, there is provided a management method for wild animals. As shown in fig. 1, the method includes the following steps S102 to S106:
step S102, collecting image data of wild animals;
Preferably, the image data may be data taken by an infrared sensor, such as a picture or video.
The image data of the wild animals can be acquired by shooting the image data of the wild animals by an infrared sensor or a camera.
Step S104, inputting the image data into an image recognition device and recognizing a target object; and
preferably, the image data of the wild animals shot and collected by the infrared sensor or the camera is uploaded or input to the image recognition device to recognize the wild animals therein, for example, if the target object is a black bear, the black bear in the data is mainly recognized.
Step S106, storing the target objects in a classified manner according to preset target addresses.
Preferably, the target wild animals identified in the previous step are sorted for storage.
The preset target address can be a preset folder or a preset magnetic disk for storing different wild animals.
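For illustration, the following minimal Python sketch shows one way this classified-storage step could be realized, with each recognized target object filed into a species folder under a preset target address. The root path, folder names and helper function are assumptions for the example and are not specified by the patent.

```python
import shutil
from pathlib import Path

# Hypothetical preset target address: one sub-folder per species.
TARGET_ROOT = Path("/data/wildlife")

def store_by_species(image_path: str, species: str) -> Path:
    """Move a recognized image into the folder named after its species."""
    target_dir = TARGET_ROOT / species              # e.g. /data/wildlife/black_bear
    target_dir.mkdir(parents=True, exist_ok=True)
    destination = target_dir / Path(image_path).name
    shutil.move(image_path, destination)            # classified storage
    return destination

# Usage (hypothetical file name): an image recognized as a black bear
# is filed under its species folder.
# store_by_species("camera_trap/IMG_0001.jpg", "black_bear")
```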
As shown in fig. 2, inputting the image data into the image recognition apparatus and recognizing the object includes steps S202 to S206 as follows:
step S202, receiving mark data selected by a user;
preferably, the tag data may be a picture image tag of the picture image data, and may also be a video image tag of the video image data.
Receiving the user-selected tag data may be receiving the data type, file, tag and the like selected by the user in the tagging system.
Step S204, training data is obtained according to the preset image training degree input by the user; and
Preferably, the preset image training degree may be the user's selection of an identity according to his or her own level, such as "beginner" or "expert".
Obtaining training data according to the preset image training degree input by the user may proceed as follows: the user selects an identity (beginner or expert) according to his or her own level, and the relevant information is passed through a database server; if the user selects the "beginner" identity, the system recommends a network for the user; a tagging project is added and tagging is started; then training is started.
Obtaining training data according to the preset image training degree input by the user may also proceed as follows: if the user selects the "expert" identity, the system recommends a network for the user; a tagging project is added and tagging is started; a network is selected; parameters are set; then training is started.
Step S206, generating an image recognition model for recognizing the expected recognition target according to the training data.
Preferably, the intended recognition target may be picture image data to be recognized provided by the user, and may also be video image data to be recognized provided by the user.
Generating an image recognition model for recognizing the expected recognition target according to the training data may proceed as follows: the system recommends a deployment mode for the selected network; whether the model needs to be optimized is determined, and other deployment modes may be selected; then deployment is completed.
Specifically, in step S202, receiving the tag data selected by the user may be receiving a data type selected by the user, and the data type may be picture image data or video image data. If the data type is picture image data, a picture image tag selected by the user for the target object in the picture image data is received; preferably, the picture image tag may be "black bear", "elephant", "panda" and the like. The position tag of the image tag is then determined by the user. Preferably, user-selectable position tags are provided in the system, such as the upper right, upper left, middle, lower right or lower left of the picture, and determining the position tag of the image tag by the user may be determining it according to the position tag selected by the user; for example, a black bear is located at the upper right of the picture, a panda at the lower left, an elephant in the middle, and so on.
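As an illustration of the picture tagging described above, the following sketch shows a possible record a tagging system might keep for a user-selected picture image tag and its position tag. The class and field names are assumptions, not part of the patent.

```python
from dataclasses import dataclass

# Hypothetical record for a user-selected picture image tag; the class and
# field names are illustrative and not defined by the patent.
@dataclass
class PictureTag:
    image_file: str   # the picture image data being tagged
    species: str      # picture image tag, e.g. "black bear", "elephant", "panda"
    position: str     # position tag selected by the user, e.g. "upper right"

tags = [
    PictureTag("IMG_0001.jpg", "black bear", "upper right"),
    PictureTag("IMG_0002.jpg", "panda", "lower left"),
    PictureTag("IMG_0003.jpg", "elephant", "middle"),
]
```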
Specifically, in step S202, receiving the tag data selected by the user may be receiving a data type selected by the user; preferably, the data type may be video image data or picture image data. If the data type is video image data, a video image tag selected by the user for action frames in the video image data is received; preferably, the video image tag may describe an action that occurs between action frames, for example an elephant grazing action occurring between the 5th frame and the 15th frame, and a plurality of video image tags may be selected by the user in the video image data. The frame-segment action length of the video image tag is then determined by the user; preferably, the frame-segment action length may be the duration over which a certain action occurs. For example, determining the frame-segment action length of the video image tag by the user may be determining that the elephant's grazing action occurs between the 5th frame and the 15th frame, or it may be any other action that can be marked in the video.
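Similarly, a video image tag with its frame-segment action length could be recorded as sketched below; the field names are assumptions, and the example values follow the elephant-grazing example above.

```python
from dataclasses import dataclass

# Hypothetical record for a user-selected video image tag; the frame range
# expresses the frame-segment action length described above.
@dataclass
class VideoTag:
    video_file: str
    species: str       # e.g. "elephant"
    action: str        # e.g. "grazing"
    start_frame: int   # first frame of the action
    end_frame: int     # last frame of the action

    @property
    def action_length(self) -> int:
        """Frame-segment action length: how many frames the action spans."""
        return self.end_frame - self.start_frame + 1

# Example: an elephant grazing action occurring between the 5th and 15th frames.
grazing = VideoTag("clip_042.mp4", "elephant", "grazing", 5, 15)
assert grazing.action_length == 11
```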
Specifically, in step S204, obtaining training data according to the preset image training degree input by the user may be obtaining a first user processing identity according to the preset image training degree input by the user. Preferably, the first user enters the training system, logs in or registers and provides user information, and the relevant information is passed through the database server; the user selects an identity (beginner or expert) according to his or her own level, and the relevant information is passed through the database server; whether the user selects the "beginner" identity or the "expert" identity, the system recommends a network for the user. A network model is then recommended to the first user according to the first user processing identity; preferably, the network model is recommended according to the identity tag selected by the first user. For example, if the first user selects the "beginner" identity, a simpler network model is recommended; if the first user selects the "expert" identity, a more complex network model is recommended. First tag data selected by the first user is then imported; for example, the first tag data may be a picture image tag of picture image data or a video image tag of video image data, and importing it may be importing the data type, file, tag and the like selected by the first user in the tagging system. Finally, the deployment mode of the network model is determined according to the network model and the first tag data; preferably, the deployment mode may be to input the imported first tag data into the recommended network model.
Specifically, in step S204, obtaining training data according to the preset image training degree input by the user may also be obtaining a second user processing identity according to the preset image training degree input by the user. Preferably, the second user enters the training system, logs in or registers and provides user information, and the relevant information is passed through the database server; the user selects an identity (beginner or expert) according to his or her own level, and whichever identity is selected, the system recommends a network for the user. A training data interface is opened to the second user according to the second user processing identity; preferably, this may be providing the second user with an interface for starting a training model. Second tag data selected by the second user is triggered according to the data tagging operation of the second user; preferably, the second tag data is tagged according to the data already selected by the second user, and the corresponding data tag is invoked. The selected network model and the training parameters are then input through the training data interface, and the deployment mode of the network model is determined through the network model, the training parameters and the second tag data. Preferably, the training and deployment mode of the network model is determined according to the recommended network model, the selected training parameters and the provided tag data. For example, the deployment mode may determine how many layers the model has, or may be a deployment mode such as Bridge Driver or Overlay Driver.
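The identity-dependent flow described in the two paragraphs above can be sketched as follows, assuming (purely for illustration) that a "beginner" receives a recommended simpler network with default parameters while an "expert" supplies the network choice and training parameters, and that the deployment plan is then determined from the network model, the training parameters and the tag data. Model names and parameter values are illustrative assumptions.

```python
from typing import Optional

# Illustrative default training parameters (assumed values).
DEFAULT_PARAMS = {"epochs": 20, "learning_rate": 1e-3, "batch_size": 32}

def recommend_network(identity: str,
                      chosen_network: Optional[str] = None,
                      params: Optional[dict] = None) -> dict:
    """Recommend a network model according to the user processing identity."""
    if identity == "beginner":
        # The system recommends a simpler network and fixed parameters.
        return {"network": "resnet18", "params": DEFAULT_PARAMS}
    if identity == "expert":
        # The expert selects a network and sets parameters via the training interface.
        return {"network": chosen_network or "resnet50",
                "params": params or DEFAULT_PARAMS}
    raise ValueError(f"unknown identity: {identity}")

def deployment_plan(network: str, params: dict, tag_data: list) -> dict:
    """Deployment mode determined by the network model, training parameters and tag data."""
    return {"network": network, "params": params,
            "num_tagged_samples": len(tag_data)}
```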
Specifically, in step S206, generating an image recognition model for recognizing the expected recognition target according to the training data may be receiving login information of the user; preferably, the login information may be an account and a password, and the member identity and historical processing data of the user are determined according to the account, password and other information input by the user. The expected recognition target set after the user logs in is then determined and the tag data is imported; preferably, the expected recognition target may be a picture image recognition target or a video image recognition target, and the imported tag data may be the tag data selected by the user or historical tag data. A data generation operation instruction of the user is then received; preferably, after the user logs in, selects the expected recognition target and imports the tag data, a generation operation control button is provided, and the background receives the operation instruction once the user clicks the control. Finally, according to the data generation operation instruction, an image recognition model is trained and generated from the expected recognition target and the tag data; preferably, the image recognition model is generated in the background according to the information acquired in the above steps, and a recognition result is provided to the user through recognition.
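The patent does not prescribe a particular neural network or framework; as one possible realization of this model-generation step, the sketch below fine-tunes a pretrained convolutional network with PyTorch/torchvision (the weights API assumes torchvision 0.13 or later), resizing the final layer to the number of species classes derived from the tag data.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_recognition_model(num_species: int) -> nn.Module:
    """Fine-tunable image recognition model with one output per species class."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_species)
    return model

def train(model: nn.Module, loader, epochs: int = 20, lr: float = 1e-3) -> nn.Module:
    """loader yields (image_tensor, species_index) batches built from the tag data."""
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```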
As shown in fig. 3, storing the target objects in a classified manner according to preset target addresses includes steps S302 to S306 as follows:
step S302, classifying the wild animal picture image data to obtain a target picture storage address;
step S304, classifying and storing the wild animal picture image data through the target storage address; and
Step S306, classifying the classified storage results into a plurality of picture target folders.
Preferably, the target picture storage address may be a folder or a disk or the like for storing different wild animals.
Classifying the wild animal picture image data to obtain target picture storage addresses may be storing different kinds of wild animals in different folders or on different disks.
As shown in fig. 4, storing the target objects in a classified manner according to preset target addresses includes steps S402 to S406 as follows:
step S402, classifying the wild animal video image data to obtain a target video storage address;
step S404, storing the wild animal video image data in a classified manner through the target storage address; and
step S406, classifying the classified storage results into a plurality of video target folders.
Preferably, the target video storage address may be a folder or a disk or the like for storing different wildlife.
Classifying the wild animal video image data to obtain target video storage addresses may be storing different kinds of wild animals in different folders or on different disks.
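For video image data, frames typically have to be extracted before they can be passed to the image recognition model; the sketch below samples frames from a clip with OpenCV so that the clip can then be filed under a target video storage address according to the recognized species. The sampling interval is an assumption.

```python
import cv2

def sample_frames(video_path: str, every_n: int = 30):
    """Yield every n-th frame of a camera-trap video as a BGR image array."""
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n == 0:
            yield frame
        index += 1
    capture.release()

# A clip could then be stored under the species recognized most often among
# its sampled frames (e.g. with collections.Counter over per-frame predictions).
```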
As shown in fig. 5, after the target objects are stored in a classified manner according to preset target addresses, the method further includes steps S502 to S504 as follows:
step S502, counting the target objects related to the folder names according to the folder names stored in a classified mode in preset target addresses;
Preferably, the names of the folders storing image data of different kinds of wild animals are counted, and the target objects associated with each folder name are stored under that folder.
For example, a black bear folder holds image data related to a black bear. When the target object is collected again and identified to be the black bear, the target object is directly and automatically stored in the black bear folder.
Step S504, generating a report of the occurrence frequency of wild animal species according to the target object statistical result.
Preferably, the frequency of occurrence of the wild animal species is obtained by counting the image data in the folder, and a report is generated.
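A minimal sketch of steps S502 to S504 is given below: the image data under each species folder is counted and a frequency report is written out. The CSV format is an assumption; the patent only requires that a report of the occurrence frequency be generated.

```python
import csv
from pathlib import Path

def frequency_report(target_root: str, report_path: str) -> dict:
    """Count files per species folder and write a species-frequency report."""
    counts = {
        folder.name: sum(1 for item in folder.iterdir() if item.is_file())
        for folder in Path(target_root).iterdir() if folder.is_dir()
    }
    with open(report_path, "w", newline="") as handle:
        writer = csv.writer(handle)
        writer.writerow(["species", "occurrence_frequency"])
        for species, count in sorted(counts.items()):
            writer.writerow([species, count])
    return counts

# frequency_report("/data/wildlife", "species_frequency.csv")
```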
From the above description, it can be seen that the present application achieves the following technical effects: a large amount of video data shot by infrared cameras can be processed automatically in batches, wild animals can be classified by species using a neural network method, and their occurrence frequency can be counted. In the past, after wild animals were photographed with infrared cameras, processing the pictures or videos was tedious and repetitive work that cost related personnel a great amount of time; the data could hardly be processed completely by manpower alone, and problems such as false detections and missed detections also arose. For the same captured data, this software greatly reduces the time that animal-protection personnel spend on data processing and ensures the accuracy of identification and classification.
It should be noted that the steps illustrated in the flowcharts of the figures may be performed in a computer system such as a set of computer-executable instructions and that, although a logical order is illustrated in the flowcharts, in some cases, the steps illustrated or described may be performed in an order different than presented herein.
According to an embodiment of the present application, there is also provided a management apparatus for implementing the management method for wild animals described above, as shown in fig. 6, the apparatus including:
the acquisition module 10 is used for acquiring image data of wild animals;
an input module 20, configured to input the image data into an image recognition device and recognize a target object; and
the storage module 30 is used for storing the target objects in a classified manner according to preset target addresses.
As shown in fig. 7, the input module 20 includes:
a receiving unit 201 for receiving the tag data selected by the user;
a training unit 202, configured to obtain training data according to a preset image training degree input by a user; and
a generating unit 203, configured to generate an image recognition model for recognizing the expected recognition target according to the training data.
As shown in fig. 8, the storage module 30 includes:
the first classification unit 301 is configured to classify the image data of the wild animal pictures to obtain target picture storage addresses;
a first storage unit 302, configured to store the wild animal image data in a classified manner by using the target storage address; and
the first classifying unit 303 is used for classifying the classified storage results into a plurality of picture target folders.
As shown in fig. 9, the storage module 30 includes:
the second classification unit 304 is used for classifying and processing the wild animal video image data to obtain a target video storage address;
a second storage unit 305, configured to store the wildlife video image data in a classified manner by using the target storage address; and
a second classifying unit 306, configured to classify the classified storage result into a plurality of video object folders.
As shown in fig. 10, the management apparatus further includes:
the counting module 40 is used for counting, according to the folder names stored in a classified manner under the preset target addresses, the target objects associated with each folder name; and
the generating module 50 is used for generating a report of the occurrence frequency of wild animal species according to the target object statistical result.
As shown in fig. 11, the process flow of the present invention is as follows:
Data (pictures or videos) shot by infrared cameras is transmitted into the wild animal intelligent identification and management application software; the software performs target object identification and classification processing on the data and sorts it into a plurality of target folders; the software also provides a report generation function, so that the occurrence frequency of each species and other statistics of interest to related organizations can easily be visualized on various devices.
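As an illustration of the visualization mentioned above, the species-frequency statistics produced by the report step could be rendered, for example, as a bar chart; the sample counts below are made up for the example.

```python
import matplotlib.pyplot as plt

# Made-up example counts; in practice these come from the frequency report.
counts = {"black bear": 12, "elephant": 4, "panda": 7}

plt.bar(list(counts.keys()), list(counts.values()))
plt.ylabel("occurrence frequency")
plt.title("Wild animal species occurrence frequency")
plt.tight_layout()
plt.savefig("species_frequency.png")  # the saved chart can be viewed on various devices
```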
The invention can realize the following technical effects: 1) the software runs offline, providing a higher security guarantee for customers; 2) careful model design replaces the customer's manual processing of massive data; 3) leading algorithms offer more application possibilities; 4) the functions of filing data by class and generating reports make result visualization easy to achieve.
It will be apparent to those skilled in the art that the modules or steps of the present application described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. A method for managing wildlife, comprising:
collecting image data of wild animals;
inputting the image data into an image recognition device and recognizing a target object; and
storing the target objects according to preset target addresses in a classified manner;
inputting the image data into an image recognition device and recognizing a target object includes:
receiving user-selected tag data;
obtaining training data according to a preset image training degree input by a user; and
the obtaining of training data according to the preset image training degree input by the user comprises: acquiring a user processing identity according to the preset image training degree input by the user; recommending a network model to the user according to the user processing identity, wherein the system recommends a network for the user according to the selected identity; and adding a tagging project, starting tagging and starting training;
generating an image recognition model for recognizing an expected recognition target according to the training data;
the generating of an image recognition model for recognizing an expected recognition target according to the training data comprises: recommending a network model deployment mode according to the selected network model, wherein the network model deployment mode is determined by the network model, the training parameters and the tag data.
2. The management method according to claim 1, wherein storing the target objects in a classified manner according to the preset target addresses comprises:
classifying and processing the image data of the wild animal pictures to obtain target picture storage addresses;
classifying and storing the wild animal picture image data through the target storage address; and
classifying the classified storage results into a plurality of picture target folders.
3. The management method according to claim 1, wherein storing the target objects in a classified manner according to the preset target addresses comprises:
classifying the wild animal video image data to obtain a target video storage address;
classifying and storing the wild animal video image data through the target storage address; and
classifying the classified storage results into a plurality of video target folders.
4. The management method according to claim 1, wherein after the target objects are stored in a classified manner according to the preset target addresses, the method further comprises:
counting, according to the folder names stored in a classified manner under the preset target addresses, the target objects associated with each folder name; and
generating a report of the occurrence frequency of wild animal species according to the target object statistics.
5. A management device for wildlife, comprising:
the acquisition module is used for acquiring image data of wild animals;
the input module is used for inputting the image data into an image recognition device and recognizing a target object; and
the storage module is used for storing the target objects according to preset target addresses in a classified mode;
the input module includes:
a receiving unit for receiving tag data selected by a user;
the training unit is used for obtaining training data according to a preset image training degree input by a user; and
the obtaining of training data according to the preset image training degree input by the user comprises: acquiring a user processing identity according to the preset image training degree input by the user; recommending a network model to the user according to the user processing identity; importing the tag data selected by the user; opening a training data interface to the user according to the user processing identity; calling a corresponding data tag according to the tag data selected by the user; inputting the selected network model and the training parameters through the training data interface; and determining a network model deployment mode through the network model, the training parameters and the tag data;
and the generating unit is used for generating an image recognition model for recognizing the expected recognition target according to the training data.
6. The management device according to claim 5, wherein the storage module includes:
the first classification unit is used for classifying and processing the wild animal picture image data to obtain a target picture storage address;
the first storage unit is used for storing the wild animal picture image data in a classified manner through the target storage address; and
the first classification unit classifies the classified storage results into a plurality of picture target folders.
7. The management device according to claim 5, wherein the storage module includes:
the second classification unit is used for classifying and processing the wild animal video image data to obtain a target video storage address;
the second storage unit is used for storing the wild animal video image data in a classified manner through the target storage address; and
the second classification unit is used for classifying the classified storage results into a plurality of video target folders.
8. The management apparatus according to claim 5, further comprising:
the statistical module is used for counting, according to the folder names stored in a classified manner under the preset target addresses, the target objects associated with each folder name; and
the generating module is used for generating a report of the occurrence frequency of wild animal species according to the target object statistical result.
CN201810730913.7A 2018-07-05 2018-07-05 Management method and device for wild animals Active CN109101547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810730913.7A CN109101547B (en) 2018-07-05 2018-07-05 Management method and device for wild animals

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810730913.7A CN109101547B (en) 2018-07-05 2018-07-05 Management method and device for wild animals

Publications (2)

Publication Number Publication Date
CN109101547A CN109101547A (en) 2018-12-28
CN109101547B true CN109101547B (en) 2021-11-12

Family

ID=64845535

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810730913.7A Active CN109101547B (en) 2018-07-05 2018-07-05 Management method and device for wild animals

Country Status (1)

Country Link
CN (1) CN109101547B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109770908A (en) * 2019-01-22 2019-05-21 山西农业大学 Wild animal population shoulder height measurement method based on infrared camera
CN110598461A (en) * 2019-09-27 2019-12-20 腾讯科技(深圳)有限公司 Wild animal information management method, device, terminal, system and storage medium
CN112069972A (en) * 2020-09-01 2020-12-11 安徽天立泰科技股份有限公司 Artificial intelligence-based ounce recognition algorithm and recognition monitoring platform
CN113178060A (en) * 2021-04-23 2021-07-27 知晓(北京)通信科技有限公司 Wild animal AI detection method and detection system
CN113435425B (en) * 2021-08-26 2021-12-07 绵阳职业技术学院 Wild animal emergence and emergence detection method based on recursive multi-feature fusion
CN113691548A (en) * 2021-08-27 2021-11-23 深圳供电局有限公司 Data acquisition and classified storage method and system thereof

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105138963A (en) * 2015-07-31 2015-12-09 小米科技有限责任公司 Picture scene judging method, picture scene judging device and server
CN105426455A (en) * 2015-11-12 2016-03-23 中国科学院重庆绿色智能技术研究院 Method and device for carrying out classified management on clothes on the basis of picture processing
CN106067028A (en) * 2015-04-19 2016-11-02 北京典赞科技有限公司 The modeling method of automatic machinery based on GPU study
CN106777334A (en) * 2017-01-12 2017-05-31 珠海格力电器股份有限公司 A kind of photo classification storage method, device and mobile terminal
CN107153844A (en) * 2017-05-12 2017-09-12 上海斐讯数据通信技术有限公司 The accessory system being improved to flowers identifying system and the method being improved
CN108133236A (en) * 2017-12-25 2018-06-08 城云科技(中国)有限公司 A kind of method, integrating device and the system of the classification of city management digital image recognition

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020183984A1 (en) * 2001-06-05 2002-12-05 Yining Deng Modular intelligent multimedia analysis system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106067028A (en) * 2015-04-19 2016-11-02 北京典赞科技有限公司 The modeling method of automatic machinery based on GPU study
CN105138963A (en) * 2015-07-31 2015-12-09 小米科技有限责任公司 Picture scene judging method, picture scene judging device and server
CN105426455A (en) * 2015-11-12 2016-03-23 中国科学院重庆绿色智能技术研究院 Method and device for carrying out classified management on clothes on the basis of picture processing
CN106777334A (en) * 2017-01-12 2017-05-31 珠海格力电器股份有限公司 A kind of photo classification storage method, device and mobile terminal
CN107153844A (en) * 2017-05-12 2017-09-12 上海斐讯数据通信技术有限公司 The accessory system being improved to flowers identifying system and the method being improved
CN108133236A (en) * 2017-12-25 2018-06-08 城云科技(中国)有限公司 A kind of method, integrating device and the system of the classification of city management digital image recognition

Also Published As

Publication number Publication date
CN109101547A (en) 2018-12-28

Similar Documents

Publication Publication Date Title
CN109101547B (en) Management method and device for wild animals
US20230136451A1 (en) Systems and methods for waste item detection and recognition
CN109166261A (en) Image processing method, device, equipment and storage medium based on image recognition
CN108170580A (en) A kind of rule-based log alarming method, apparatus and system
CN109829381A (en) A kind of dog only identifies management method, device, system and storage medium
CN109922310A (en) The monitoring method of target object, apparatus and system
CN107958435A (en) Safe examination system and the method for configuring rays safety detection apparatus
CN106113038A (en) Mode switching method based on robot and device
CN108229323A (en) Supervision method and device, electronic equipment, computer storage media
CN101553841A (en) Patient monitoring via image capture
CN109784274A (en) Identify the method trailed and Related product
CN108563675A (en) Electronic record automatic generation method and device based on target body characteristics
US20230177509A1 (en) Recognition method and device, security system, and storage medium
CN109213397B (en) Data processing method and device and user side
CN109284740A (en) Method, apparatus, equipment and the storage medium that mouse feelings are counted
CN111507574A (en) Security personnel deployment method and device, computer equipment and storage medium
CN109858332A (en) A kind of human behavior analysis method, device and electronic equipment
CN108874910A (en) The Small object identifying system of view-based access control model
CN110263830A (en) Image processing method, device and system and storage medium
CN109639456A (en) A kind of automation processing platform for the improved method and alarm data that automation alerts
CN110134810A (en) Retrieve the method and device of image
CN110505438A (en) A kind of acquisition methods and video camera of data queued
CN112150498A (en) Method and device for determining posture information, storage medium and electronic device
CN109120896B (en) Security video monitoring guard system
CN116343018A (en) Intelligent fishery fishing identification method, system and medium based on image processing

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant