CN112841074A - Detection method, device, equipment and storage medium - Google Patents

Detection method, device, equipment and storage medium

Info

Publication number
CN112841074A
CN112841074A
Authority
CN
China
Prior art keywords
information
detection
acquisition
detected object
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011632752.1A
Other languages
Chinese (zh)
Inventor
Wang Yonggang (王永刚)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Shuke Haiyi Information Technology Co Ltd
Original Assignee
Jingdong Shuke Haiyi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Shuke Haiyi Information Technology Co Ltd filed Critical Jingdong Shuke Haiyi Information Technology Co Ltd
Priority to CN202011632752.1A
Publication of CN112841074A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES
    • A01 AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K 29/00 Other apparatus for animal husbandry
    • A01K 29/005 Monitoring or measuring activity, e.g. detecting heat or mating

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Biophysics (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

The application relates to a detection method, a detection device, detection equipment, and a storage medium. The detection system includes: an identity recognition unit for acquiring first identity information of a detected object; a first acquisition unit for acquiring first information reflecting whether the detected object exists in a detection area; a second acquisition unit for acquiring detection information of the detected object; and a control unit for identifying the detection information and the first information and determining the detection parameters of the detected object. Thus, when the detection parameters of a plurality of detected objects are to be measured, the objects can enter the detection area one by one in sequence, and the detection parameters of each object are obtained. Compared with the prior art, no separate set of detection hardware needs to be provided for each detected object, which reduces hardware cost.

Description

Detection method, device, equipment and storage medium
Technical Field
The present application relates to the field of computing, and in particular, to a detection method, apparatus, device, and storage medium.
Background
At present, to detect parameters such as body temperature and gait of a cow, each cow is equipped with a collar in which a sensor is integrated, and the parameters are collected through the sensor. However, equipping every cow with a collar makes hardware and maintenance costs high.
Disclosure of Invention
The application provides a detection method, a detection device, detection equipment, and a storage medium, to solve the prior-art problem that equipping each cow with a collar results in high hardware and maintenance costs.
In a first aspect, a detection system is provided, which includes:
a control unit in communication with each of an identity recognition unit, a first acquisition unit, and a second acquisition unit;
the identity recognition unit is used for acquiring first identity information of the detected object and outputting the first identity information to the control unit;
the first acquisition unit is used for acquiring first information used for reflecting whether the detected object exists in the detection area according to a first acquisition instruction and outputting the first information to the control unit;
the second acquisition unit is used for acquiring the detection information of the detected object according to a second acquisition instruction and outputting the detection information to the control unit;
the control unit is configured to output the first acquisition instruction to the first acquisition unit when determining that the first identity information is received, output the second acquisition instruction to the second acquisition unit when determining that the detected object exists in the detection area according to the first information, identify the detection information and the first information, and determine to obtain the detection parameter of the detected object.
Optionally, the first identity information includes ear tag data;
the identity recognition unit comprises a radio frequency card reader;
the radio frequency card reader is used for reading the ear tag data of the detected object.
Optionally, the first information comprises a first image;
the first acquisition unit comprises a first camera for acquiring the first image;
the control unit is used for:
and determining an image threshold value of the first image, and determining that the detected object exists in the detection area when the image threshold value is larger than a preset value.
Optionally, the detection information includes second information and third information; the second acquisition instruction comprises a first acquisition sub-instruction and a second acquisition sub-instruction;
the second acquisition unit includes:
a first acquisition subunit and a second acquisition subunit;
the first acquisition subunit is used for acquiring second information of the detected object according to the first acquisition sub-instruction and outputting the second information to the control unit;
the second acquisition subunit is configured to acquire third information of the detected object according to the second acquisition sub-instruction, and output the third information to the control unit;
the control unit is used for:
outputting the first acquisition sub-instruction to the first acquisition sub-unit; and outputting the second acquisition sub-instruction to the second acquisition sub-unit.
Optionally, the second information includes a second image, and the third information includes video data;
the first acquisition subunit comprises a second camera, and the second camera is used for acquiring the second image;
the second acquisition subunit comprises a third camera, and the third camera is used for acquiring the video data.
Optionally, the control unit is configured to:
identifying the first image to obtain the body condition parameters of the detected object;
identifying the second image to obtain a body temperature parameter of the detected object;
and identifying the video data to obtain the gait parameters of the detected object.
Optionally, the first camera includes a depth camera, and the second camera includes an infrared camera.
Optionally, the method further comprises:
a cloud platform in communication with the control unit;
the control unit is used for:
establishing a corresponding relation between the first identity information and the detection parameters, and sending the first identity information, the detection parameters and the corresponding relation to the cloud platform;
the cloud platform is used for storing the first identity information, the detection parameters and the corresponding relation.
Optionally, the control unit is configured to:
before establishing the corresponding relation between the first identity information and the detection parameters, acquiring an acquisition time stamp of the first information, a first acquisition time stamp of the first identity information and a second acquisition time stamp of second identity information, wherein the second identity information is acquired last time the first identity information is acquired;
and determining that the time interval between the acquisition time stamp and the first acquisition time stamp and the time interval between the acquisition time stamp and the second acquisition time stamp are both greater than a preset time threshold.
In a second aspect, a detection method is provided, including:
when first identity information of a detected object is determined to be received, first information reflecting whether the detected object exists in a detection area is acquired;
based on the first information, when the detected object exists in the detection area, the detection information of the detected object is obtained;
and identifying the first information and the detection information to obtain the detection parameters.
Optionally, the first information comprises a first image;
determining the existence of the detected object in the detection area based on the first information, including:
determining an image threshold for the first image;
and when the image threshold is larger than a preset value, determining that the detected object exists in the detection area.
Optionally, acquiring detection information of the detected object includes:
acquiring a second image and video data of the detected object;
identifying the first information and the detection information to obtain the detection parameters, including:
identifying the first image to obtain the body condition parameters of the detected object;
identifying the second image to obtain a body temperature parameter of the detected object;
and identifying the video data to obtain the gait parameters of the detected object.
Optionally, identifying the first information and the detection information, and after obtaining the detection parameter, further includes:
and establishing a corresponding relation between the first identity information and the detection parameters, and sending the first identity information, the detection parameters and the corresponding relation to a cloud platform.
Optionally, before establishing the corresponding relationship between the first identity information and the detection parameter, the method further includes:
acquiring a collection time stamp of the first information, a first acquisition time stamp of the first identity information and a second acquisition time stamp of second identity information, wherein the second identity information is acquired last time the first identity information is acquired;
and determining that the time interval between the acquisition time stamp and the first acquisition time stamp and the time interval between the acquisition time stamp and the second acquisition time stamp are both greater than a preset time threshold.
In a third aspect, an electronic device is provided, including: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory for storing a computer program;
the processor is configured to execute the program stored in the memory to implement the method of the first aspect.
In a fourth aspect, a computer-readable storage medium is provided, storing a computer program which, when executed by a processor, implements the method of the first aspect.
In a fifth aspect, a detection apparatus is provided, including:
an acquisition unit, configured to acquire, when it is determined that first identity information of a detected object is received, first information reflecting whether the detected object exists in a detection area;
a determination unit configured to acquire detection information of the detected object when it is determined that the detected object exists in the detection area based on the first information;
and the identification unit is used for identifying the first information and the detection information to obtain the detection parameters.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages: the detection system provided by the embodiments obtains the detection parameters of a detected object in the detection area by deploying the identity recognition unit, the first acquisition unit, the second acquisition unit, and the control unit, so that when the detection parameters of a plurality of detected objects are to be measured, the objects can enter the detection area one by one in sequence, and the detection parameters of each detected object are obtained.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic structural diagram of a detection system according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of another exemplary detection system according to the present disclosure;
FIG. 3 is a schematic diagram of another exemplary detection system according to the present disclosure;
FIG. 4 is a schematic flow chart of a detection method in an embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a detection apparatus in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
An embodiment of the present application provides a detection system, as shown in fig. 1, including:
the system comprises a control unit 101, an identity recognition unit 102, a first acquisition unit 103 and a second acquisition unit 104 which are respectively communicated with the control unit 101.
Illustratively, the control unit 101 may communicate with the identity recognition unit 102, the first acquisition unit 103, and the second acquisition unit 104 through a serial port, USB, and/or a network port.
An identity recognition unit 102, configured to acquire first identity information of the detected object, and output the first identity information to the control unit 101.
Optionally, in this embodiment, the detected object includes, but is not limited to, animals such as cattle or horses.
For example, the detected object may be a cow.
Optionally, in this embodiment, after the detection system is powered on, the identity recognition unit 102 is in a continuous working state, so that when the detected object is within the recognizable range of the identity recognition unit, the identity information of the detected object is obtained in time.
It is understood that, in view of the continuous operation state of the identification unit 102, after acquiring the identity information of the detected object, the identification unit 102 continuously outputs the identity information to the control unit 101.
Optionally, the first identity information includes, but is not limited to, ear tag data.
Accordingly, the identity recognition unit 102 may include a Radio Frequency Identification (RFID) card reader.
In practical application, an ear tag is pierced through the ear of the detected object; when the detected object is within the readable range of the RFID card reader, the reader reads the ear tag and thereby obtains the ear tag data of the detected object.
The first acquisition unit 103 is configured to acquire first information reflecting whether the detected object exists in the detection area according to the first acquisition instruction, and output the first information to the control unit 101.
It is understood that, before receiving the first collecting instruction, the first collecting unit 103 may be in a standby state, and after receiving the first collecting instruction, the first collecting unit 103 is switched from the standby state to a starting state, so as to collect the first information.
The detection area is an area which is manually set and used for acquiring first information and detection information of a detected object, and an identity recognition unit 102, a first acquisition unit 103 and a second acquisition unit 104 are placed in the area in advance.
For example, the detection area may be a rectangular area large enough to accommodate one cow, with an iron stand arranged inside it; the identity recognition unit, the first acquisition unit, and the second acquisition unit may be placed on the stand.
Optionally, the first information may include a first image.
Correspondingly, the first acquisition unit 103 comprises a first camera for acquiring the first image.
Illustratively, the first camera may be a depth camera.
And the second acquisition unit 104 is configured to acquire detection information of the detected object according to the second acquisition instruction, and output the detection information to the control unit 101.
It is understood that the second collecting unit 104 may be in a standby state before receiving the second collecting instruction, and after receiving the second collecting instruction, the second collecting unit 104 is switched from the standby state to a start state, so as to collect the detection information.
Optionally, the second acquisition instruction may include a first acquisition sub-instruction and a second acquisition sub-instruction, and the detection information may include second information and third information.
The second acquisition unit, as shown in fig. 2, may include:
a first acquisition subunit 201 and a second acquisition subunit 202;
the first acquisition subunit 201 is configured to acquire second information of the detected object according to the first acquisition sub-instruction, and output the second information to the control unit 101;
and the first acquisition sub-instruction comprises the identification of the first acquisition sub-unit.
The second acquisition subunit 202 is configured to acquire third information of the detected object according to the second acquisition sub-instruction, and output the third information to the control unit 101;
and the second acquisition sub-instruction comprises the identifier of the second acquisition sub-unit.
For example, the first acquisition subunit 201 may be an infrared camera, and the second information of the detected object acquired by the first acquisition subunit 201 may be an infrared image.
For example, the second capturing subunit 202 may be a general camera, and the third information of the detected object captured by the second capturing subunit 202 may be video data.
The control unit 101 is configured to output a first acquisition instruction to the first acquisition unit 103 when determining that the first identity information is received, output a second acquisition instruction to the second acquisition unit 104 when determining that the detected object exists in the detection area according to the first information, identify the detection information and the first information, and determine to obtain the detection parameter of the detected object.
Illustratively, the control Unit 101 includes, but is not limited to, a NUC (Next Unit of Computing).
Alternatively, when the first information includes the first image, the control unit 101 is configured to, in implementing the determination that the detected object exists within the detection area according to the first information:
and determining an image threshold value of the first image, and determining that the detected object exists in the detection area when the image threshold value is larger than a preset value.
Wherein the image threshold of the first image is a ratio of a size of the gray background area to a size of the total background area in the first image.
It is understood that when the detected object does not exist in the detection area, the background of the first image is black, and when the detected object enters the detection area, the background of the portion of the first image corresponding to the detected object changes from black to gray.
The preset value can be set manually, for example, the preset value can be set to 0.1.
Taking a cow as an example: when no cow is yet in the detection area, an image threshold greater than 0.1 indicates that the cow's head has entered the detection area; similarly, when a cow is in the detection area, an image threshold smaller than 0.1 indicates that the cow's tail is about to leave. Therefore, an image threshold greater than 0.1 indicates that a cow has been recognized as entering the detection area.
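The image-threshold check above can be sketched as follows. This is a minimal illustration, assuming an 8-bit depth image whose empty-area background renders as near-zero ("black") pixels; the `gray_threshold` cutoff of 10 is an arbitrary assumption, not a value from the text:

```python
import numpy as np

def presence_ratio(depth_image: np.ndarray, gray_threshold: int = 10) -> float:
    """Ratio of non-black ("gray") pixels to total pixels.

    In the scheme described above, an empty detection area yields an
    all-black depth image; pixels turn gray where an object enters.
    """
    gray_pixels = np.count_nonzero(depth_image > gray_threshold)
    return gray_pixels / depth_image.size

def object_present(depth_image: np.ndarray, preset_value: float = 0.1) -> bool:
    # The "image threshold" of the text: the object is deemed present
    # when the gray-area ratio exceeds the preset value (e.g. 0.1).
    return presence_ratio(depth_image) > preset_value
```

With the preset value of 0.1 from the text, a frame in which more than 10% of the pixels have turned gray is treated as "object present".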
Optionally, when the second capture instruction includes the first capture sub-instruction and the second capture sub-instruction, the control unit 101 is configured to:
outputting a first acquisition sub-instruction to the first acquisition sub-unit 201; the second acquisition sub-instruction is output to the second acquisition sub-unit 202.
Alternatively, when the first information includes a first image, the detection information includes a second image from the first acquiring subunit 201 and video data from the second acquiring subunit 202, and when the detection information and the first information are identified and the detection parameter of the detected object is determined, the control unit 101 is configured to:
identifying the first image to obtain body condition parameters of the detected object;
identifying the second image to obtain the body temperature parameter of the detected object;
and identifying the video data to obtain the gait parameters of the detected object.
Illustratively, the body condition parameters indicate the health condition of the detected object and may include body size parameters, weight estimation parameters, and parameters for assessing whether the detected object exhibits prominent lumbar transverse processes, ischial tuberosities, hip tuberosities, sunken areas, or the like.
The body temperature parameters may include both high and low temperature parameters.
The gait parameters may include parameters for assessing the presence or absence of hoof disease, pregnancy, etc. in a subject.
Alternatively, an AI algorithm may be employed in identifying the first image, the second image, and the video data.
In addition, considering camera precision and environmental factors, directly estimating absolute values for the body condition parameters, body temperature parameters, and gait parameters would produce large numerical errors. Therefore, in this embodiment, when the first image, the second image, and the video data are identified, some of these parameters are not given exact values but are normalized; that is, only a corresponding score is given.
For example, the parameters for a cow can be found in table one:
Table 1
Hip joint | Lumbar transverse process | Body weight | High temperature | Low temperature | Hoof disease | Pregnancy
3.5 points | 6 points | 1 kg | 5.2 points | 3 points | 2 points | 7 points
Optionally, a large number of images are acquired by the depth camera and the infrared camera while the detected object is in the detection area; if the control unit identified every one of them, its operation speed would be greatly affected, and doing so is largely unnecessary. Therefore, when the AI algorithm is used for identification, a preset number of images can be extracted from the depth images and the infrared images respectively, balancing recognition rate against time cost.
Alternatively, the preset number may be 8; that is, 8 images are extracted from each of the depth images and the infrared images for recognition.
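The frame-sampling step above can be sketched as below; a minimal illustration, assuming the captured frames are held in an in-memory sequence:

```python
def sample_frames(frames, preset_number=8):
    """Pick `preset_number` evenly spaced frames from a capture sequence.

    Instead of running recognition on every depth/infrared frame, only
    a fixed number of evenly spaced frames are identified, as the text
    describes, to balance recognition rate against time cost.
    """
    if len(frames) <= preset_number:
        return list(frames)
    step = len(frames) / preset_number
    # Take one frame at the start of each of `preset_number` equal strides.
    return [frames[int(i * step)] for i in range(preset_number)]
```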
Optionally, after the detection parameters are identified, the detection parameters may be sent to the cloud platform, so that the cloud platform calls and processes the detection parameters, and displays a final result to a user.
Therefore, the detection system may further comprise a cloud platform in communication with the control unit.
illustratively, the cloud platform may be a Customer Premise Equipment (CPE).
Illustratively, the CPE may communicate with the control unit over WIFI.
Accordingly, the control unit 101 may also be configured to:
establishing a corresponding relation between the first identity information and the detection parameters, and sending the first identity information, the detection parameters and the corresponding relation to the cloud platform;
and the cloud platform is used for storing the first identity information, the detection parameters and the corresponding relation.
In practical application, after acquiring the first identity information and the detection parameters, the cloud platform can call the historical data of the detection object to longitudinally compare the physical condition of the detection object.
Taking a cow as an example, when the detection parameters include body condition parameters, body temperature parameters, and gait parameters: the cloud platform retrieves the cow's historical body condition parameters and compares them with the current ones to determine health conditions such as whether the cow has an injury or whether certain joints are abnormal; it retrieves the historical gait parameters and compares them with the current gait parameters to judge whether the cow's walking is normal, and hence whether it has hoof disease; further, by combining the historical and current body condition and gait parameters, whether the cow is pregnant can be judged.
Optionally, the cloud platform may further count the recognition rate of each detection parameter of the detection object within a preset time.
The identification rate of each detection parameter includes an ear tag identification rate, a cow body identification rate, a misidentification rate, a body condition identification rate, a weight estimation identification rate, a gait identification rate, a body size identification rate, a body temperature identification rate, a pregnancy identification rate, and the like.
The identification rate of a detection parameter is the number of times that parameter was identified within the preset time divided by the number of detected objects within the preset time.
Taking the ear tag identification rate as an example, the number of detection parameters identified within the preset time is the number of ear tag data records identified within the preset time.
Optionally, the cloud platform may further obtain a comprehensive identification rate according to the weight of each detection parameter and the identification rate of each detection parameter.
Wherein, the weight of each detection parameter can be set manually.
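The per-parameter and weighted composite recognition rates described above can be sketched as follows. The dictionary keys and the weights in the example are illustrative assumptions, since the text leaves the weights to be set manually:

```python
def recognition_rate(identified_count, total_objects):
    # Rate = parameters identified within the window / detected objects
    # within the window, as defined in the text.
    return identified_count / total_objects

def composite_rate(rates, weights):
    """Weighted combination of per-parameter recognition rates.

    `rates` and `weights` are dicts keyed by parameter name
    (e.g. "ear_tag", "gait"); the weights are normalized so that they
    need not sum to 1.
    """
    total_weight = sum(weights.values())
    return sum(rates[k] * weights[k] for k in rates) / total_weight
```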
For example, the recognition rates for a cow over four months are shown in Table 2:
Table 2 (provided as an image in the original document)
Considering the following two misrecognition cases that arise in actual operation, before the correspondence between the first identity information and the detection parameters is established and sent to the cloud platform together with them, timestamps can be used to screen out invalid data.
The two misrecognition cases may be as follows (taking cows as an example):
First, a cow wearing an ear tag does not enter the detection area but does enter the readable range of the RFID card reader, while a cow without an ear tag happens to be in the detection area; the acquired ear tag data then does not match the acquired depth image, causing misidentification.
Second, two cows wearing ear tags follow each other closely and enter the detection area in sequence, so that the system fails to identify them or misidentifies them.
To avoid false recognition caused by the above two situations, the control unit 101 may further be configured to:
acquiring an acquisition time stamp of the first information, a first acquisition time stamp of the first identity information and a second acquisition time stamp of the second identity information, wherein the second identity information is acquired last time the first identity information is acquired;
and determining that the time interval between the acquisition time stamp and the first acquisition time stamp and the time interval between the acquisition time stamp and the second acquisition time stamp are both greater than a preset time threshold.
The preset time threshold may be set manually, for example, the preset time threshold may be set to 100 ms.
When the time interval between the acquisition timestamp and the first acquisition timestamp, or between the acquisition timestamp and the second acquisition timestamp, is not greater than the preset time threshold, all data obtained this time is considered invalid and is discarded.
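The timestamp screening rule can be sketched as below; a minimal illustration assuming millisecond timestamps and the 100 ms preset threshold mentioned in the text:

```python
def is_valid_capture(first_info_ts, first_id_ts, second_id_ts,
                     preset_threshold_ms=100):
    """Screen out the two misrecognition cases described above.

    All timestamps are in milliseconds. The capture is kept only when
    the first-information acquisition timestamp is farther than the
    preset threshold from BOTH the current identity-reading timestamp
    and the previous one; otherwise the whole batch is discarded as
    invalid data.
    """
    return (abs(first_info_ts - first_id_ts) > preset_threshold_ms and
            abs(first_info_ts - second_id_ts) > preset_threshold_ms)
```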
The detection system provided by the embodiments of the application obtains the detection parameters of a detected object in the detection area by deploying the identity recognition unit, the first acquisition unit, the second acquisition unit, and the control unit, so that when the detection parameters of a plurality of detected objects are to be measured, the objects can enter the detection area one by one in sequence, and the detection parameters of each detected object are obtained.
Illustratively, an embodiment of the present application provides a detection system, as shown in fig. 3, including:
The system comprises a NUC 301, a cloud platform 302, a first camera 303, a second camera 304, a third camera 305, and an RFID card reader 306, wherein the cloud platform 302, the first camera 303, the second camera 304, the third camera 305, and the RFID card reader 306 each communicate with the NUC 301;
the RFID card reader 306 is configured to read the ear tag data of the detected object and output the ear tag data to the NUC 301;
the first camera 303 is configured to acquire a depth image of the detected object in the detection area according to a first acquisition instruction, and output the depth image to the NUC 301;
the second camera 304 is used for acquiring an infrared image of the detected object in the detection area according to the first acquisition sub-instruction and outputting the infrared image to the NUC 301;
the third camera 305 is used for acquiring video data of the detected object in the detection area according to the second acquisition sub-instruction and outputting the video data to the NUC 301;
The NUC 301 is configured to: output the first acquisition instruction to the first camera 303 when the ear tag data is received; when it is determined from the depth image that the detected object exists in the detection area, output the first acquisition sub-instruction to the second camera 304 and the second acquisition sub-instruction to the third camera 305; identify the depth image to obtain the body condition parameters of the detected object, identify the infrared image to obtain the body temperature parameters of the detected object, and identify the video data to obtain the gait parameters of the detected object; and establish a corresponding relation between the ear tag data and the body condition, body temperature, and gait parameters, and send the body condition parameters, the body temperature parameters, the gait parameters, and the corresponding relation to the cloud platform 302.
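The acquisition-and-recognition flow of the NUC 301 can be sketched in Python as below; the class, the method names, and the stub recognizers are illustrative assumptions — in practice the stubs would be replaced by real depth, infrared, and video recognition models.

```python
class DetectionController:
    """Illustrative sketch of the NUC 301 control flow; all names here are
    assumptions, not the patent's actual implementation."""

    def __init__(self, depth_cam, ir_cam, video_cam, cloud, presence_threshold=0):
        self.depth_cam = depth_cam          # first camera (depth)
        self.ir_cam = ir_cam                # second camera (infrared)
        self.video_cam = video_cam          # third camera (video)
        self.cloud = cloud                  # cloud platform client
        self.presence_threshold = presence_threshold

    def object_in_area(self, depth_image):
        # Placeholder presence check: compare a simple image statistic
        # against a preset value (the patent's "image threshold").
        return sum(depth_image) > self.presence_threshold

    def on_ear_tag(self, ear_tag_data):
        # Ear tag received -> issue the first acquisition instruction.
        depth_image = self.depth_cam.capture()
        if not self.object_in_area(depth_image):
            return None  # no detected object in the detection area
        # Object confirmed -> issue both acquisition sub-instructions.
        ir_image = self.ir_cam.capture()
        video = self.video_cam.capture()
        # Recognize each modality into its detection parameter (stubs here).
        record = {
            "ear_tag": ear_tag_data,
            "body_condition": f"bcs({len(depth_image)})",   # stub recognizer
            "body_temperature": f"temp({len(ir_image)})",   # stub recognizer
            "gait": f"gait({len(video)})",                  # stub recognizer
        }
        # Send the parameters and their correspondence to the cloud platform.
        self.cloud.upload(record)
        return record
```

With stub cameras and a stub cloud client substituted in, a single ear-tag event drives the whole capture-recognize-upload cycle, and an empty depth frame short-circuits it.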
Based on the same concept, an embodiment of the present application provides a detection method, which may be applied to the NUC. For specific embodiments of the method, refer to the description of the system embodiments above; repeated details are omitted here. As shown in fig. 4, the method mainly includes:
step 401, when it is determined that first identity information of a detected object is received, acquiring first information reflecting whether the detected object exists in a detection area;
step 402, when it is determined, based on the first information, that the detected object exists in the detection area, acquiring detection information of the detected object;
and step 403, identifying the first information and the detection information to obtain the detection parameters.
Optionally, the first information comprises a first image;
determining the existence of the detected object in the detection area based on the first information, comprising:
determining an image threshold for the first image;
and when the image threshold is larger than a preset value, determining that the detected object exists in the detection area.
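The "image threshold" comparison can be illustrated with a minimal sketch, under the assumption (not stated in the source) that the first image is a depth map and the computed statistic is the fraction of pixels nearer than the empty-scene background; the function name and all numeric values are illustrative:

```python
def object_present(depth_image_mm, background_mm=3000, min_fraction=0.05):
    """Compute a simple statistic of the first image (here: the fraction of
    valid depth pixels nearer than the empty-scene background) and compare
    it against a preset value to decide whether a detected object exists
    in the detection area."""
    near = sum(1 for d in depth_image_mm if 0 < d < background_mm)
    return near / len(depth_image_mm) > min_fraction
```

With these illustrative values, a frame in which 10% of pixels are closer than the background would be treated as containing a detected object, while an empty-scene frame would not.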
Optionally, acquiring detection information of the detected object includes:
acquiring a second image and video data of the detected object;
identifying the first information and the detection information to obtain a detection parameter, including:
identifying the first image to obtain body condition parameters of the detected object;
identifying the second image to obtain the body temperature parameter of the detected object;
and identifying the video data to obtain the gait parameters of the detected object.
Optionally, identifying the first information and the detection information, and after obtaining the detection parameter, further includes:
and establishing a corresponding relation between the first identity information and the detection parameters, and sending the first identity information, the detection parameters and the corresponding relation to the cloud platform.
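As an illustration of the corresponding relation being sent to the cloud platform, a payload could be assembled as below; the helper function and the JSON field names are assumptions, not a format defined by the source:

```python
import json

def build_upload_payload(ear_tag_id, body_condition, body_temperature, gait):
    """Bundle the detection parameters together with the first identity
    information they correspond to, serialized for the cloud platform."""
    return json.dumps({
        "identity": ear_tag_id,          # first identity information (ear tag)
        "parameters": {                  # detection parameters
            "body_condition": body_condition,
            "body_temperature": body_temperature,
            "gait": gait,
        },
    })
```

Keeping the identity and the parameters in one record preserves the corresponding relation end to end, so the cloud platform can store them together.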
Optionally, before establishing the corresponding relationship between the first identity information and the detection parameter, the method further includes:
acquiring an acquisition time stamp of the first information, a first acquisition time stamp of the first identity information, and a second acquisition time stamp of second identity information, wherein the second identity information is the identity information acquired immediately before the first identity information;
and determining that the time interval between the acquisition time stamp and the first acquisition time stamp and the time interval between the acquisition time stamp and the second acquisition time stamp are both greater than a preset time threshold.
Based on the same concept, an embodiment of the present application further provides an electronic device. As shown in fig. 5, the electronic device mainly includes: a processor 501, a communication interface 502, a memory 503, and a communication bus 504, wherein the processor 501, the communication interface 502, and the memory 503 communicate with one another through the communication bus 504. The memory 503 stores a program executable by the processor 501, and the processor 501 executes the program stored in the memory 503 to implement the following steps:
when first identity information of a detected object is determined to be received, first information reflecting whether the detected object exists in a detection area is acquired;
based on the first information, when the detected object exists in the detection area, the detection information of the detected object is obtained;
and identifying the first information and the detection information to obtain a detection parameter.
The communication bus 504 mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 504 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
The communication interface 502 is used for communication between the above-described electronic apparatus and other apparatuses.
The memory 503 may include a Random Access Memory (RAM) or a non-volatile memory (NVM), such as at least one disk memory. Alternatively, the memory may be at least one storage device located remotely from the aforementioned processor 501.
The Processor 501 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), etc., and may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic devices, discrete gates or transistor logic devices, and discrete hardware components.
Based on the same concept, an embodiment of the present application further provides a detection apparatus, as shown in fig. 6, including:
an acquisition unit 601 configured to acquire first information reflecting whether or not a detected object exists in a detection area when it is determined that first identity information of the detected object is received;
a determining unit 602, configured to obtain detection information of a detected object when determining that the detected object exists in the detection area based on the first information;
the identifying unit 603 is configured to identify the first information and the detection information to obtain a detection parameter.
In yet another embodiment of the present application, there is also provided a computer-readable storage medium having stored therein a computer program which, when run on a computer, causes the computer to perform the detection method described in the above embodiment.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored on a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wire (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wirelessly (e.g., infrared, microwave). The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives).
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (17)

1. A detection system, comprising:
a control unit in communication with an identity recognition unit, a first acquisition unit, and a second acquisition unit, respectively;
the identity recognition unit is used for acquiring first identity information of the detected object and outputting the first identity information to the control unit;
the first acquisition unit is used for acquiring first information used for reflecting whether the detected object exists in the detection area according to a first acquisition instruction and outputting the first information to the control unit;
the second acquisition unit is used for acquiring the detection information of the detected object according to a second acquisition instruction and outputting the detection information to the control unit;
the control unit is configured to output the first acquisition instruction to the first acquisition unit when determining that the first identity information is received, output the second acquisition instruction to the second acquisition unit when determining that the detected object exists in the detection area according to the first information, identify the detection information and the first information, and determine to obtain the detection parameter of the detected object.
2. The system of claim 1, wherein the first identity information comprises ear tag data;
the identity recognition unit comprises a radio frequency card reader;
the radio frequency card reader is used for reading the ear tag data of the detected object.
3. The system of claim 1, wherein the first information comprises a first image;
the first acquisition unit comprises a first camera for acquiring the first image;
the control unit is used for:
and determining an image threshold value of the first image, and determining that the detected object exists in the detection area when the image threshold value is larger than a preset value.
4. The system of claim 3, wherein the detection information includes second information and third information; the second acquisition instruction comprises a first acquisition sub-instruction and a second acquisition sub-instruction;
the second acquisition unit includes:
a first acquisition subunit and a second acquisition subunit;
the first acquisition subunit is used for acquiring second information of the detected object according to the first acquisition sub-instruction and outputting the second information to the control unit;
the second acquisition subunit is configured to acquire third information of the detected object according to the second acquisition sub-instruction, and output the third information to the control unit;
the control unit is used for:
outputting the first acquisition sub-instruction to the first acquisition sub-unit; and outputting the second acquisition sub-instruction to the second acquisition sub-unit.
5. The system of claim 4, wherein the second information comprises a second image and the third information comprises video data;
the first acquisition subunit comprises a second camera, and the second camera is used for acquiring the second image;
the second acquisition subunit comprises a third camera, and the third camera is used for acquiring the video data.
6. The system of claim 5, wherein the control unit is configured to:
identifying the first image to obtain the body condition parameters of the detected object;
identifying the second image to obtain a body temperature parameter of the detected object;
and identifying the video data to obtain the gait parameters of the detected object.
7. The system of claim 5, wherein the first camera comprises a depth camera and the second camera comprises an infrared camera.
8. The system of claim 1, further comprising:
a cloud platform in communication with the control unit;
the control unit is used for:
establishing a corresponding relation between the first identity information and the detection parameters, and sending the first identity information, the detection parameters and the corresponding relation to the cloud platform;
the cloud platform is used for storing the first identity information, the detection parameters and the corresponding relation.
9. The system of claim 8, wherein the control unit is configured to:
before establishing the corresponding relation between the first identity information and the detection parameters, acquiring an acquisition time stamp of the first information, a first acquisition time stamp of the first identity information, and a second acquisition time stamp of second identity information, wherein the second identity information is the identity information acquired immediately before the first identity information;
and determining that the time interval between the acquisition time stamp and the first acquisition time stamp and the time interval between the acquisition time stamp and the second acquisition time stamp are both greater than a preset time threshold.
10. A method of detection, comprising:
when first identity information of a detected object is determined to be received, first information reflecting whether the detected object exists in a detection area is acquired;
based on the first information, when the detected object exists in the detection area, the detection information of the detected object is obtained;
and identifying the first information and the detection information to obtain the detection parameters.
11. The method of claim 10, wherein the first information comprises a first image;
determining the existence of the detected object in the detection area based on the first information, including:
determining an image threshold for the first image;
and when the image threshold is larger than a preset value, determining that the detected object exists in the detection area.
12. The method of claim 11, wherein obtaining detection information of the detected object comprises:
acquiring a second image and video data of the detected object;
identifying the first information and the detection information to obtain the detection parameters, including:
identifying the first image to obtain the body condition parameters of the detected object;
identifying the second image to obtain a body temperature parameter of the detected object;
and identifying the video data to obtain the gait parameters of the detected object.
13. The method of claim 10, wherein identifying the first information and the detection information, after obtaining the detection parameters, further comprises:
and establishing a corresponding relation between the first identity information and the detection parameters, and sending the first identity information, the detection parameters and the corresponding relation to a cloud platform.
14. The method of claim 13, wherein before establishing the correspondence between the first identity information and the detection parameter, further comprising:
acquiring an acquisition time stamp of the first information, a first acquisition time stamp of the first identity information, and a second acquisition time stamp of second identity information, wherein the second identity information is the identity information acquired immediately before the first identity information;
and determining that the time interval between the acquisition time stamp and the first acquisition time stamp and the time interval between the acquisition time stamp and the second acquisition time stamp are both greater than a preset time threshold.
15. An electronic device, comprising: the system comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory for storing a computer program;
the processor, executing a program stored in the memory, implementing the method of any of claims 10-14.
16. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 10-14.
17. A detection device, comprising:
an acquisition unit, configured to acquire, when it is determined that first identity information of a detected object is received, first information reflecting whether the detected object exists in a detection area;
a determination unit configured to acquire detection information of the detected object when it is determined that the detected object exists in the detection area based on the first information;
and the identification unit is used for identifying the first information and the detection information to obtain the detection parameters.
CN202011632752.1A 2020-12-31 2020-12-31 Detection method, device, equipment and storage medium Pending CN112841074A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011632752.1A CN112841074A (en) 2020-12-31 2020-12-31 Detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011632752.1A CN112841074A (en) 2020-12-31 2020-12-31 Detection method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112841074A true CN112841074A (en) 2021-05-28

Family

ID=76000075

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011632752.1A Pending CN112841074A (en) 2020-12-31 2020-12-31 Detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112841074A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296430A (en) * 2015-05-22 2017-01-04 中国农业科学院农业信息研究所 A kind of contactless herding sign information harvester and method
CN109632058A (en) * 2018-12-13 2019-04-16 北京小龙潜行科技有限公司 A kind of intelligent group rearing method for measuring weight, device, electronic equipment and storage medium of raising pigs
CN111034643A (en) * 2019-12-11 2020-04-21 华北水利水电大学 Physiological characteristic detection system and method for livestock breeding
CN111528135A (en) * 2020-04-15 2020-08-14 上海明略人工智能(集团)有限公司 Target object determination method and device, storage medium and electronic device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296430A (en) * 2015-05-22 2017-01-04 中国农业科学院农业信息研究所 A kind of contactless herding sign information harvester and method
CN109632058A (en) * 2018-12-13 2019-04-16 北京小龙潜行科技有限公司 A kind of intelligent group rearing method for measuring weight, device, electronic equipment and storage medium of raising pigs
WO2020119659A1 (en) * 2018-12-13 2020-06-18 北京小龙潜行科技有限公司 Intelligent pig group rearing weighing method and apparatus, electronic device and storage medium
CN111034643A (en) * 2019-12-11 2020-04-21 华北水利水电大学 Physiological characteristic detection system and method for livestock breeding
CN111528135A (en) * 2020-04-15 2020-08-14 上海明略人工智能(集团)有限公司 Target object determination method and device, storage medium and electronic device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20210528