WO2021104124A1 - Method, apparatus, system, and storage medium for determining confinement pen information - Google Patents

Method, apparatus, system, and storage medium for determining confinement pen information

Info

Publication number
WO2021104124A1
WO2021104124A1 (PCT/CN2020/129765)
Authority
WO
WIPO (PCT)
Prior art keywords
image
captive
information
target model
network
Prior art date
Application number
PCT/CN2020/129765
Other languages
English (en)
French (fr)
Inventor
苏睿
Original Assignee
京东数科海益信息科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 京东数科海益信息科技有限公司 filed Critical 京东数科海益信息科技有限公司
Publication of WO2021104124A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02Agriculture; Fishing; Forestry; Mining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02ATECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A40/00Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A40/70Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in livestock or poultry

Definitions

  • This application relates to the field of image processing, and in particular to a method, device, system, and storage medium for determining confinement pen information.
  • The pig industry has long been a traditional household industry in China's rural areas, and it occupies a very important position in the national economy and in people's lives.
  • Pig production is shifting from household farming to intensive, large-scale operation, but as the pig industry scales up, replacing a farm's existing equipment becomes cumbersome.
  • Intelligent capture with the farm's original equipment is therefore particularly important; in particular, pen-related information (such as the position and number of the pens) is currently recognized manually, which is inefficient.
  • The embodiments of the present application provide a method, device, system, and storage medium for determining confinement pen information, so as to at least solve the technical problem in the related art that compiling pen information is inefficient.
  • A method for determining confinement pen information is provided, including: acquiring a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; using a target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition; denoising the second image to obtain a third image; and using the third image to determine the information of the pens in the livestock farm.
  • A device for determining confinement pen information is provided, including: an acquiring unit configured to acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; a recognition unit configured to use a target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition; a denoising unit configured to denoise the second image to obtain a third image; and a determining unit configured to use the third image to determine the information of the pens in the livestock farm.
  • A system for determining confinement pen information is provided, including: an image acquisition device configured to acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; and a server configured to use a target model to identify the pens in the first image to obtain a second image including the identified pens, to denoise the second image to obtain a third image, and to use the third image to determine the information of the pens in the livestock farm, where the target model is a preset semantic segmentation neural network model for pen recognition.
  • A storage medium is further provided; the storage medium includes a stored program, and the above-mentioned method is executed when the program runs.
  • An electronic device is further provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor executes the above-mentioned method through the computer program.
  • FIG. 1 is a schematic diagram of a hardware environment of a method for determining confinement pen information according to an embodiment of the present application;
  • FIG. 2 is a flowchart of an optional method for determining confinement pen information according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an optional network structure according to an embodiment of the present application;
  • FIG. 4 is a schematic diagram of an optional network structure according to an embodiment of the present application;
  • FIG. 5 is a schematic diagram of an optional livestock-farm scene according to an embodiment of the present application;
  • FIG. 6 is a schematic diagram of an optional model recognition result according to an embodiment of the present application;
  • FIG. 7 is a flowchart of an optional method for determining confinement pen information according to an embodiment of the present application;
  • FIG. 8 is a schematic diagram of an optional livestock-farm scene according to an embodiment of the present application;
  • FIG. 9 is a schematic diagram of an optional model recognition result according to an embodiment of the present application;
  • FIG. 10 is a schematic diagram of an optional filled connected domain according to an embodiment of the present application;
  • FIG. 11 is a schematic diagram of an optional denoised image according to an embodiment of the present application;
  • FIG. 12 is a schematic diagram of an optional device for determining confinement pen information according to an embodiment of the present application;
  • FIG. 13 is a structural block diagram of a terminal according to an embodiment of the present application.
  • The above-mentioned method for determining confinement pen information may be applied to the hardware environment constituted by the image acquisition device 101 and the server 103 shown in FIG. 1.
  • The image acquisition device is configured to acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity.
  • The server is configured to use a target model to identify the pens in the first image to obtain a second image including the identified pens, to denoise the second image to obtain a third image, and to use the third image to determine the information of the pens in the livestock farm.
  • the server 103 is connected to the image acquisition device 101 through the network, and can be used to provide services for the image acquisition device 101 (such as image analysis services, etc.).
  • The database 105 can be deployed on the server or independently of the server to provide data storage services for the server 103.
  • the above-mentioned network includes but is not limited to: wide area network, metropolitan area network or local area network.
  • The image acquisition device 101 is not limited to surveillance cameras, cameras, mobile phones, tablet computers, unmanned aerial vehicles carrying image sensors, and the like.
  • The method for determining confinement pen information in the embodiments of the present application may be executed by the server 103 (specifically, the following steps S202 to S208 may be executed), or may be executed jointly by the server 103 and the image acquisition device 101.
  • FIG. 2 is a flowchart of an optional method for determining confinement pen information according to an embodiment of the present application. As shown in FIG. 2, the method may include the following steps:
  • Step S202: Acquire a first image to be processed.
  • The first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity.
  • The above-mentioned animals kept in captivity may be livestock or poultry such as pigs, chickens, and sheep.
  • The following description takes pigs as an example.
  • Step S204: Use the target model to identify the pens in the first image to obtain a second image including the identified pens.
  • The target model is a preset semantic segmentation neural network model for pen recognition.
  • Step S206: Perform denoising processing on the second image to obtain a third image.
  • Step S208: Use the third image to determine the information of the pens in the livestock farm.
  • In the related art, the capture and recognition of pen-related information on livestock farms is performed manually, which is inefficient.
  • Through the above steps, an image obtained by directly photographing the livestock farm is processed: the target model performs a rough identification of the pens in the first image to obtain a second image including the identified pens, the second image is denoised to obtain a third image, and the third image is then used to determine the information of the pens in the livestock farm.
  • Pen-related information can therefore be identified by image recognition, which solves the technical problem in the related art that compiling pen information is inefficient and achieves the technical effect of quickly identifying pen-related information.
  • In one optional scheme, a local dynamic threshold segmentation method can be used to identify the railings of the pens: when the gray levels of the railing pixels are roughly uniform and differ significantly from those of the surrounding pixels, the railings can be roughly extracted by setting a suitable threshold condition and then detected accurately through morphological processing.
  • Image segmentation based on an active contour model can also be used. This method turns the image segmentation problem into a variational problem of minimizing an energy functional: an initial contour curve is set and then driven, by minimizing the energy functional, to gradually approach the contour of the railing to be segmented.
  • However, livestock-farm environments are fairly complex: the local dynamic threshold method performs poorly when the railing gray levels vary or differ little from those of the surroundings, and the active contour method does not generalize when the scene changes, since the camera angle and lighting conditions then change greatly.
  • To overcome these defects, this application further provides a railing detection scheme that uses a semantic segmentation neural network model (such as Enet, which is used as the example below) as the segmentation network. It can be applied to animal husbandry to assist livestock management, and it works in poor environments, that is, in scenes where the lighting is not fixed, which increases the scene compatibility of the scheme.
  • The first image is collected by an image acquisition device, which can be a camera fixed in the livestock farm.
  • To save cost and make collection more targeted, a drone equipped with a camera can also be used; for example, the drone can collect the first image while cruising.
  • The above-mentioned target model mainly includes two parts, namely a first network (which can be referred to as the initial layer) and a second network (which can also be referred to as the bottleneck structure).
  • Taking Enet as an example, to meet the strict timeliness requirements of practical applications, Enet reduces floating-point operations to address the poor timeliness of semantic segmentation models. Its architecture consists of an initial block (the module where the initial layer is located) and multiple bottleneck modules, for example five; in that case, the first three bottleneck stages are used to encode the input image and the last two are used to decode it.
  • When the target model is used to identify the pens in the first image, the first image can first be compressed through the first network in the target model to obtain a fourth image; the compression operation eliminates visually redundant information in the first image. As shown in FIG. 3, the initial layer passes the input in parallel through a 3x3 convolution with stride 2 and a MaxPooling layer and concatenates the two results along the channel (depth) dimension.
  • The fourth image is then semantically segmented by the second network in the target model to obtain the second image. The five bottleneck stages comprise a larger encoder (the first three) and a smaller decoder (the last two), which shrinks the network as much as possible and reduces the number of parameters without significantly affecting segmentation accuracy.
  • Optionally, a convolution operation can be performed on the first image through the convolutional layer of the first network, and a pooling operation can be performed on the first image through the pooling layer of the first network; the splicing (concatenation) layer of the first network then joins the result of the convolution operation and the result of the pooling operation to obtain the fourth image.
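  • As a concrete illustration of the initial layer just described, the following is a minimal sketch of an Enet-style initial block. PyTorch and the exact channel counts (3 input channels, 16 output channels) are assumptions chosen for illustration, not values specified by the application: a 3x3 convolution with stride 2 runs in parallel with a max-pooling branch, and the two results are concatenated along the channel dimension.

    import torch
    import torch.nn as nn

    class InitialBlock(nn.Module):
        """Enet-style initial layer (sketch): parallel 3x3/stride-2 convolution and
        2x2 max pooling, concatenated on the channel dimension."""
        def __init__(self, in_channels: int = 3, out_channels: int = 16):
            super().__init__()
            # The conv branch produces (out_channels - in_channels) maps so that the
            # concatenation with the pooled input yields exactly out_channels maps.
            self.conv = nn.Conv2d(in_channels, out_channels - in_channels,
                                  kernel_size=3, stride=2, padding=1, bias=False)
            self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
            self.bn = nn.BatchNorm2d(out_channels)
            self.act = nn.PReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            y = torch.cat([self.conv(x), self.pool(x)], dim=1)  # channel concat
            return self.act(self.bn(y))

    # Usage: a 3-channel farm image is halved in resolution and expanded to 16 channels.
    image = torch.randn(1, 3, 512, 512)
    print(InitialBlock()(image).shape)  # torch.Size([1, 16, 256, 256])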
  • The bottleneck module can include: a 1x1 projection layer used to reduce the feature dimension; a main convolution layer (conv) used to perform the feature convolution; and a 1x1 expansion layer; batch normalization and PReLU are placed between all convolutional layers. If the bottleneck module performs downsampling, a max-pooling layer (MaxPooling) is added to the main branch; conversely, for upsampling, a padding layer (Padding) is added to the main branch. For downsampling, the first 1x1 projection can be replaced by a 2x2 convolution with stride 2, and Spatial Dropout can be used as the regularizer.
  • In FIG. 4, batch normalization (BN) is applied; PReLU denotes the parametric rectified linear unit activation function, and MaxPooling denotes max pooling.
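  • For completeness, a minimal sketch of the regular (non-sampling) bottleneck module described above, again in PyTorch and with assumed channel sizes: a 1x1 projection, a main convolution, and a 1x1 expansion, with batch normalization and PReLU between the convolutions, Spatial Dropout as the regularizer, and a residual sum with the main (identity) branch.

    import torch
    import torch.nn as nn

    class Bottleneck(nn.Module):
        """Enet-style regular bottleneck (sketch): 1x1 projection -> main conv ->
        1x1 expansion, with BN + PReLU between the convolutions and Spatial Dropout
        as the regularizer; the side branch is summed with the identity branch."""
        def __init__(self, channels: int = 64, internal_ratio: int = 4,
                     dropout_p: float = 0.1):
            super().__init__()
            mid = channels // internal_ratio
            self.branch = nn.Sequential(
                nn.Conv2d(channels, mid, kernel_size=1, bias=False),        # 1x1 projection
                nn.BatchNorm2d(mid), nn.PReLU(),
                nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),  # main convolution
                nn.BatchNorm2d(mid), nn.PReLU(),
                nn.Conv2d(mid, channels, kernel_size=1, bias=False),        # 1x1 expansion
                nn.BatchNorm2d(channels),
                nn.Dropout2d(dropout_p),                                    # Spatial Dropout
            )
            self.act = nn.PReLU()

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.act(x + self.branch(x))  # merge side branch with main branch

    features = torch.randn(1, 64, 128, 128)
    print(Bottleneck()(features).shape)  # torch.Size([1, 64, 128, 128])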
  • In actual scenes, this solution can segment the railings accurately and thereby detect them.
  • For the scene shown in FIG. 5, the detection result obtained with this solution (i.e., the second image) is shown in FIG. 6.
  • In step S206, denoising the second image to obtain the third image can be implemented through the following steps:
  • Step S2062: Perform morphological processing on the second image to obtain a fifth image.
  • The morphological processing is used to eliminate noise in the second image through dilation processing and erosion processing.
  • Performing the morphological processing on the second image to obtain the fifth image includes applying a dilation formula and an erosion formula. The dilation formula is A⊕B = {x | (B)x ∩ A ≠ Φ}, where A denotes the second image, B denotes the convolution kernel (structuring element), ⊕ is the dilation operator, x denotes a point, and (B)x denotes the result of dilating at x with B; the condition that the intersection of (B)x and A is not the empty set Φ means that, when B is convolved over A, the kernel and A are guaranteed to have an intersection, i.e., the convolution has a boundary.
  • The erosion formula is AΘB = {x | (B)x ⊆ A}, where Θ is the erosion operator and (B)x ⊆ A means the result of eroding at x with B belongs to A, i.e., the result after convolution must lie within the range of A.
  • Erosion is performed first and then dilation: the erosion first removes the noise (and shrinks the object), and the subsequent dilation enlarges the object again, while the noise removed by the earlier erosion does not return, thereby achieving noise reduction.
  • These operations extract better (less noisy) information about the shape of the object or enlarge important features, as in the case of corner and railing detection.
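  • A minimal OpenCV sketch of this erosion-then-dilation step; the kernel size and iteration count are assumptions chosen for illustration, not values given by the application.

    import cv2
    import numpy as np

    def morphological_denoise(mask: np.ndarray,
                              kernel_size: int = 5,
                              iterations: int = 2) -> np.ndarray:
        """Denoise a binary railing mask (the second image): erode first to remove
        small white noise, then dilate to restore the railings and close small gaps
        between them. The 5x5 kernel and 2 iterations are assumed values."""
        kernel = np.ones((kernel_size, kernel_size), np.uint8)  # structuring element B
        eroded = cv2.erode(mask, kernel, iterations=iterations)      # A Θ B
        dilated = cv2.dilate(eroded, kernel, iterations=iterations)  # A ⊕ B
        return dilated

    # Usage: "railing_mask.png" is a hypothetical binary output of the target model.
    second_image = cv2.imread("railing_mask.png", cv2.IMREAD_GRAYSCALE)
    if second_image is not None:
        fifth_image = morphological_denoise(second_image)
        cv2.imwrite("railing_mask_denoised.png", fifth_image)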
  • Step S2064 Perform connected domain analysis on the fifth image to obtain a third image, where the connected domain analysis is used to eliminate noise in the fifth image.
  • Performing the connected domain analysis on the fifth image to obtain the third image includes: identifying the connected domains in the fifth image and filling the connected domains whose area is smaller than a first target threshold with black to obtain a sixth image; inverting the pixel values of the pixels in the sixth image to obtain a seventh image; identifying the connected domains in the seventh image and filling the connected domains whose area is smaller than a second target threshold with black to obtain an eighth image; and inverting the pixel values of the pixels in the eighth image to obtain the third image.
  • In this way, the noise in the image can be further eliminated, the required features (such as the railings) become more prominent, and the gaps between the railings are filled.
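  • A minimal Python/OpenCV sketch of this connected-domain cleanup; the area thresholds are assumptions, and the inversion uses 255 minus the pixel value, as in step S710 described later.

    import cv2
    import numpy as np

    def fill_small_components(binary: np.ndarray, min_area: int) -> np.ndarray:
        """Fill with black every white connected domain whose area is below
        min_area; larger domains are left unchanged."""
        out = binary.copy()
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
        for label in range(1, n):  # label 0 is the background
            if stats[label, cv2.CC_STAT_AREA] < min_area:
                out[labels == label] = 0
        return out

    def connected_domain_denoise(fifth_image: np.ndarray,
                                 first_threshold: int = 500,
                                 second_threshold: int = 500) -> np.ndarray:
        """Sixth image: small domains of the fifth image filled black.
        Seventh image: pixel values inverted (255 - value).
        Eighth image: small domains of the seventh image filled black.
        Third image: the eighth image inverted back. Thresholds are assumed."""
        sixth_image = fill_small_components(fifth_image, first_threshold)
        seventh_image = 255 - sixth_image
        eighth_image = fill_small_components(seventh_image, second_threshold)
        third_image = 255 - eighth_image
        return third_image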
  • The third image is used to determine the information of the pens in the livestock farm; for example, the closed regions are used to determine the number and positions of the pens, and the number of pigs in each pen can be determined further.
  • In a smart pig-farm system, pigs are kept with the pen as the unit, and iron railings are usually used to separate pen from pen; an algorithm that detects the railings intelligently therefore provides strong support for building the subsequent smart pig-farm system.
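  • One possible way to read pen information from the third image, sketched below under the assumption that each pen appears as a closed dark region enclosed by the white railing mask; this post-processing step and the minimum-area value are not spelled out in the application.

    import cv2
    import numpy as np

    def pen_info_from_third_image(third_image: np.ndarray, min_pen_area: int = 5000):
        """Return the number and bounding boxes of the closed regions enclosed by
        the railings in the denoised railing mask. min_pen_area is an assumed
        threshold used to drop regions too small to be real pens, and regions
        touching the image border are treated as background rather than pens."""
        height, width = third_image.shape[:2]
        pen_mask = cv2.bitwise_not(third_image)  # enclosed pen floors become white
        n, labels, stats, _ = cv2.connectedComponentsWithStats(pen_mask, connectivity=4)
        boxes = []
        for label in range(1, n):  # skip the background label 0
            x, y, w, h, area = stats[label]
            touches_border = x == 0 or y == 0 or x + w >= width or y + h >= height
            if area >= min_pen_area and not touches_border:
                boxes.append((int(x), int(y), int(w), int(h)))
        return len(boxes), boxes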
  • The application of the technical solution of this application to a pig farm is shown in FIG. 7:
  • Step S702: Segment and detect the railings.
  • A deep-learning method is used to segment and detect the railings: the Enet network is trained to obtain the railing segmentation model (i.e., the target model), and the surveillance image is input into the model to obtain the railing segmentation image.
  • The surveillance image (i.e., the first image) is shown in FIG. 8, and the railing detection result is shown in FIG. 9.
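  • As an illustration of this inference step, a minimal sketch of applying a trained segmentation model to one surveillance frame to obtain the railing segmentation image; the preprocessing, the input size, and the assumption that class 1 is the railing are hypothetical choices, not details given by the application.

    import cv2
    import numpy as np
    import torch

    def segment_railings(model: torch.nn.Module, frame_bgr: np.ndarray) -> np.ndarray:
        """Run the trained railing segmentation model on one surveillance frame and
        return a binary railing mask (255 = railing). Resize, normalization, and the
        railing class index are assumptions."""
        resized = cv2.resize(frame_bgr, (512, 512)).astype(np.float32) / 255.0
        tensor = torch.from_numpy(resized).permute(2, 0, 1).unsqueeze(0)  # NCHW
        with torch.no_grad():
            logits = model(tensor)                  # shape: [1, num_classes, H, W]
        pred = logits.argmax(dim=1)[0].cpu().numpy()
        mask = np.where(pred == 1, 255, 0).astype(np.uint8)  # class 1 = railing (assumed)
        return cv2.resize(mask, (frame_bgr.shape[1], frame_bgr.shape[0]),
                          interpolation=cv2.INTER_NEAREST)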
  • Step S704: Perform morphological processing to fill the gaps between the railings.
  • The detection result is processed morphologically: through dilation and erosion, the white part of the image is expanded or shrunk to fill the gaps between the railings. The dilation and erosion formulas are A⊕B = {x | (B)x ∩ A ≠ Φ} and AΘB = {x | (B)x ⊆ A}, respectively, where A denotes the railing detection image, B denotes the convolution kernel, ⊕ is the dilation operator, and Θ is the erosion operator.
  • Step S706: Find the connected domains. By labeling the white pixels (the target) in the binary image, each separate connected region forms one labeled block.
  • Step S708: Fill the irrelevant connected domains according to the scene. The connected domains with small areas are filled with black to further determine the positions of the main pens; the filling result is shown in FIG. 10.
  • Step S710: Invert the image. The pixel value of each pixel is traversed and subtracted from 255 to obtain the inverted image.
  • Step S712: Find and fill the small connected domains. For the inverted image, the areas of the regions with gray value 255 are computed, a suitable threshold is set, the small regions are filled with black, and the small black regions are thereby removed.
  • Step S714: Invert the image again, i.e., the reverse of step S710; the result is shown in FIG. 11.
  • In the technical solution of this application, a deep-learning segmentation model is used to detect the railings, which improves generalization; railing detection can be applied to animal husbandry to help manage livestock; and a complete solution for main-pen analysis in animal husbandry, including the pen-filling procedure, is provided.
  • With this solution, the railings can be detected effectively, paving the way for subsequent livestock management such as counting the animals in each pen. In the deployment of a smart pig-farm system, main-pen analysis plays a key auxiliary role: in the early stage it reduces the camera-angle debugging process, and in the later stage it increases the accuracy of intelligently counting the pigs in each pen.
  • The methods according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present application.
  • FIG. 12 is a schematic diagram of an optional device for determining confinement pen information according to an embodiment of the present application. As shown in FIG. 12, the device may include:
  • the acquiring unit 1201, configured to acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity;
  • the recognition unit 1203, configured to use the target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition;
  • the denoising unit 1205, configured to denoise the second image to obtain a third image; and
  • the determining unit 1207, configured to use the third image to determine the information of the pens in the livestock farm.
  • The acquiring unit 1201 in this embodiment may be configured to perform step S202 of the embodiments of the present application;
  • the recognition unit 1203 in this embodiment may be configured to perform step S204 of the embodiments of the present application;
  • the denoising unit 1205 in this embodiment may be configured to perform step S206 of the embodiments of the present application; and
  • the determining unit 1207 in this embodiment may be configured to perform step S208 of the embodiments of the present application.
  • The recognition unit includes: a compression module configured to compress the first image through the first network in the target model to obtain a fourth image, where the compression operation is used to eliminate visually redundant information in the first image; and a segmentation module configured to perform semantic segmentation on the fourth image through the second network in the target model to obtain the second image.
  • The compression module may also be configured to: perform a convolution operation on the first image through the convolutional layer of the first network and a pooling operation on the first image through the pooling layer of the first network, and then concatenate the result of the convolution operation and the result of the pooling operation through the splicing layer of the first network to obtain the fourth image.
  • The aforementioned denoising unit may also be configured to: perform morphological processing on the second image to obtain a fifth image, where the morphological processing eliminates noise in the second image through dilation and erosion; and perform connected-domain analysis on the fifth image to obtain the third image, where the connected-domain analysis eliminates noise in the fifth image.
  • The denoising unit may also be configured to dilate the second image using the dilation formula and erode the second image using the erosion formula to obtain the fifth image, the dilation formula being A⊕B = {x | (B)x ∩ A ≠ Φ}, where A denotes the second image, B denotes the convolution kernel, ⊕ is the dilation operator, x denotes a point, and (B)x ∩ A ≠ Φ means the intersection of the dilation result (B)x and A is not the empty set Φ; and the erosion formula being AΘB = {x | (B)x ⊆ A}, where Θ is the erosion operator and (B)x ⊆ A means the erosion result (B)x belongs to A.
  • The denoising unit may also be configured to: identify the connected domains in the fifth image and fill those whose area is smaller than the first target threshold with black to obtain a sixth image; invert the pixel values of the pixels in the sixth image to obtain a seventh image; identify the connected domains in the seventh image and fill those whose area is smaller than the second target threshold with black to obtain an eighth image; and invert the pixel values of the pixels in the eighth image to obtain the third image.
  • the above-mentioned modules can run in the hardware environment as shown in FIG. 1, and can be implemented by software or hardware, where the hardware environment includes a network environment.
  • A server or terminal for implementing the above method for determining confinement pen information is also provided.
  • FIG. 13 is a structural block diagram of a terminal according to an embodiment of the present application.
  • the terminal may include: one or more (only one is shown in FIG. 13) processor 1301, memory 1303, and transmission device 1305, as shown in FIG. 13, the terminal may also include an input and output device 1307.
  • The memory 1303 can be used to store software programs and modules, such as the program instructions/modules corresponding to the method and device for determining confinement pen information in the embodiments of the present application; by running the software programs and modules stored in the memory 1303, the processor 1301 executes various functional applications and data processing, that is, implements the above-mentioned method for determining confinement pen information.
  • The memory 1303 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • In some examples, the memory 1303 may further include memory remotely located with respect to the processor 1301, and such remote memory may be connected to the terminal through a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • the aforementioned transmission device 1305 is used for receiving or sending data via a network, and can also be used for data transmission between the processor and the memory.
  • the above-mentioned specific examples of networks may include wired networks and wireless networks.
  • the transmission device 1305 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the Internet or a local area network.
  • In one example, the transmission device 1305 is a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
  • the memory 1303 is used to store application programs.
  • The processor 1301 may call the application program stored in the memory 1303 through the transmission device 1305 to perform the following steps: acquiring a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; using the target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition; denoising the second image to obtain a third image; and using the third image to determine the information of the pens in the livestock farm.
  • The processor 1301 is further configured to perform the following steps: identifying the connected domains in the fifth image and filling those whose area is smaller than the first target threshold with black to obtain a sixth image; inverting the pixel values of the pixels in the sixth image to obtain a seventh image; identifying the connected domains in the seventh image and filling those whose area is smaller than the second target threshold with black to obtain an eighth image; and inverting the pixel values of the pixels in the eighth image to obtain the third image.
  • With the embodiments of this application, a scheme is provided in which a first image to be processed is acquired, where the first image is an image obtained by photographing a livestock farm and the livestock farm uses confinement pens to isolate the animals kept in captivity; the target model identifies the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition; the second image is denoised to obtain a third image; and the third image is used to determine the information of the pens in the livestock farm.
  • Pen-related information can thus be identified through image recognition, which solves the technical problem in the related art that compiling pen information is inefficient and achieves the technical effect of quickly identifying pen-related information.
  • The structure shown in FIG. 13 is only illustrative; the terminal may be a terminal device such as a smart phone (for example, an Android phone or an iOS phone), a tablet computer, a handheld computer, a mobile Internet device (MID), or a PAD.
  • FIG. 13 does not limit the structure of the above-mentioned electronic device; for example, the terminal may include more or fewer components (such as a network interface or a display device) than shown in FIG. 13, or have a configuration different from that shown in FIG. 13.
  • The program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and the like.
  • the embodiment of the present application also provides a storage medium.
  • The above-mentioned storage medium may be used to store the program code for executing the method for determining confinement pen information.
  • the foregoing storage medium may be located on at least one of the multiple network devices in the network shown in the foregoing embodiment.
  • the storage medium is configured to store program code for executing the following steps:
  • acquiring a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity;
  • using the target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition; denoising the second image to obtain a third image; and using the third image to determine the information of the pens in the livestock farm.
  • The storage medium is also configured to store program code for executing the following steps: identifying the connected domains in the fifth image and filling those whose area is smaller than the first target threshold with black to obtain a sixth image; inverting the pixel values of the pixels in the sixth image to obtain a seventh image; identifying the connected domains in the seventh image and filling those whose area is smaller than the second target threshold with black to obtain an eighth image; and inverting the pixel values of the pixels in the eighth image to obtain the third image.
  • The foregoing storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, or an optical disc.
  • If the integrated unit in the foregoing embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above-mentioned computer-readable storage medium.
  • the technical solution of the present application essentially or the part that contributes to the prior art or all or part of the technical solution can be embodied in the form of a software product, and the computer software product is stored in a storage medium, It includes several instructions to make one or more computer devices (which may be personal computers, servers, or network devices, etc.) execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the disclosed client can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division of the units is only a logical functional division; in actual implementation there may be other ways of dividing them. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, units or modules, and may be in electrical or other forms.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or they may be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Software Systems (AREA)
  • Agronomy & Crop Science (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Animal Husbandry (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Artificial Intelligence (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)

Abstract

A method, apparatus, system, and storage medium for determining confinement pen information. The method includes: acquiring a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity (S202); using a target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition (S204); denoising the second image to obtain a third image (S206); and using the third image to determine the information of the pens in the livestock farm (S208). The method improves the efficiency of compiling confinement pen information.

Description

Method, apparatus, system, and storage medium for determining confinement pen information
This application claims priority to the Chinese patent application filed with the Chinese Patent Office, with priority number 201911176750.3 and the invention title "Method, apparatus, system, and storage medium for determining confinement pen information", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing, and in particular to a method, apparatus, system, and storage medium for determining confinement pen information.
Background
The pig industry has long been a traditional household industry in China's vast rural areas and occupies a very important position in the national economy and in people's lives. In recent years, with the further deepening of rural economic reform and the rapid development of the market economy, the pace of new rural construction has gradually accelerated, rural living environments and pig production methods are undergoing great changes, and pig production is shifting from household farming toward intensive, large-scale operation; new demands, however, accompany the large-scale development of the pig industry.
In the intelligent transformation of pig farms in the related art, changing a farm's existing equipment is fairly cumbersome, so intelligent capture based on the farm's original equipment is particularly important. In particular, the capture and recognition of pen-related information (such as the position and number of the pens) is currently performed manually, which is inefficient.
Similar problems also exist in the raising of poultry and livestock such as chickens, ducks, and sheep.
No effective solution to the above problems has yet been proposed.
Summary
The embodiments of this application provide a method, apparatus, system, and storage medium for determining confinement pen information, so as to at least solve the technical problem in the related art that compiling pen information is inefficient.
According to one aspect of the embodiments of this application, a method for determining confinement pen information is provided, including: acquiring a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; using a target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition; denoising the second image to obtain a third image; and using the third image to determine the information of the pens in the livestock farm.
According to another aspect of the embodiments of this application, an apparatus for determining confinement pen information is further provided, including: an acquiring unit configured to acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; a recognition unit configured to use a target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition; a denoising unit configured to denoise the second image to obtain a third image; and a determining unit configured to use the third image to determine the information of the pens in the livestock farm.
According to another aspect of the embodiments of this application, a system for determining confinement pen information is further provided, including: an image acquisition device configured to acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; and a server configured to use a target model to identify the pens in the first image to obtain a second image including the identified pens, to denoise the second image to obtain a third image, and to use the third image to determine the information of the pens in the livestock farm, where the target model is a preset semantic segmentation neural network model for pen recognition.
According to another aspect of the embodiments of this application, a storage medium is further provided, which includes a stored program; when the program runs, the above-mentioned method is executed.
According to another aspect of the embodiments of this application, an electronic apparatus is further provided, including a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor executes the above-mentioned method through the computer program.
In the embodiments of this application, an image obtained by directly photographing the livestock farm is processed: the target model performs a rough identification of the pens in the first image to obtain a second image including the identified pens, the second image is denoised to obtain a third image, and the third image is then used to determine the information of the pens in the livestock farm. Pen-related information can therefore be identified by image recognition, which solves the technical problem in the related art that compiling pen information is inefficient and achieves the technical effect of quickly identifying pen-related information.
Brief Description of the Drawings
The drawings described here are provided for a further understanding of this application and constitute a part of this application; the illustrative embodiments of this application and their descriptions are used to explain this application and do not constitute an improper limitation of this application. In the drawings:
FIG. 1 is a schematic diagram of a hardware environment of a method for determining confinement pen information according to an embodiment of this application;
FIG. 2 is a flowchart of an optional method for determining confinement pen information according to an embodiment of this application;
FIG. 3 is a schematic diagram of an optional network structure according to an embodiment of this application;
FIG. 4 is a schematic diagram of an optional network structure according to an embodiment of this application;
FIG. 5 is a schematic diagram of an optional livestock-farm scene according to an embodiment of this application;
FIG. 6 is a schematic diagram of an optional model recognition result according to an embodiment of this application;
FIG. 7 is a flowchart of an optional method for determining confinement pen information according to an embodiment of this application;
FIG. 8 is a schematic diagram of an optional livestock-farm scene according to an embodiment of this application;
FIG. 9 is a schematic diagram of an optional model recognition result according to an embodiment of this application;
FIG. 10 is a schematic diagram of an optional filled connected domain according to an embodiment of this application;
FIG. 11 is a schematic diagram of an optional denoised image according to an embodiment of this application; and
FIG. 12 is a schematic diagram of an optional apparatus for determining confinement pen information according to an embodiment of this application; and
FIG. 13 is a structural block diagram of a terminal according to an embodiment of this application.
Detailed Description
In order to enable those skilled in the art to better understand the solutions of this application, the technical solutions in the embodiments of this application are described below clearly and completely with reference to the drawings in the embodiments of this application. Obviously, the described embodiments are only some of the embodiments of this application rather than all of them. Based on the embodiments of this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort shall fall within the protection scope of this application.
It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of this application are used to distinguish similar objects and are not necessarily used to describe a specific order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments of this application described here can be implemented in orders other than those illustrated or described here. In addition, the terms "include" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product, or device that includes a series of steps or units is not necessarily limited to the steps or units expressly listed, but may include other steps or units that are not expressly listed or are inherent to such process, method, product, or device.
According to one aspect of the embodiments of this application, a method embodiment of a method for determining confinement pen information is provided.
Optionally, in this embodiment, the above method for determining confinement pen information can be applied to the hardware environment constituted by the image acquisition device 101 and the server 103 shown in FIG. 1. As shown in FIG. 1, the image acquisition device is configured to acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; the server is configured to use a target model to identify the pens in the first image to obtain a second image including the identified pens, to denoise the second image to obtain a third image, and to use the third image to determine the information of the pens in the livestock farm, where the target model is a preset semantic segmentation neural network model for pen recognition. The server 103 is connected to the image acquisition device 101 through a network and can provide services for the image acquisition device 101 (such as image analysis services); a database 105 can be deployed on the server or independently of the server to provide data storage services for the server 103. The above network includes, but is not limited to, a wide area network, a metropolitan area network, or a local area network, and the image acquisition device 101 is not limited to surveillance cameras, cameras, mobile phones, tablet computers, unmanned aerial vehicles carrying image sensors, and the like.
The method for determining confinement pen information in the embodiments of this application can be executed by the server 103 (specifically, the following steps S202 to S208 may be executed) or executed jointly by the server 103 and the image acquisition device 101. FIG. 2 is a flowchart of an optional method for determining confinement pen information according to an embodiment of this application; as shown in FIG. 2, the method may include the following steps:
Step S202: acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity.
The above-mentioned animals kept in captivity may be livestock or poultry such as pigs, chickens, and sheep; the following description takes pigs as an example.
Step S204: use the target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition.
Step S206: denoise the second image to obtain a third image.
Step S208: use the third image to determine the information of the pens in the livestock farm.
In the livestock farms of the related art, the capture and recognition of pen-related information is performed manually, which is inefficient. Through the above steps, an image obtained by directly photographing the livestock farm is processed: the target model performs a rough identification of the pens in the first image to obtain a second image including the identified pens, the second image is denoised to obtain a third image, and the third image is then used to determine the information of the pens in the livestock farm. Pen-related information can thus be identified by image recognition, which solves the technical problem in the related art that compiling pen information is inefficient and achieves the technical effect of quickly identifying pen-related information.
In one optional scheme, a local dynamic threshold segmentation method can be used to identify the railings of the pens: when the gray levels of the railing-frame pixels are roughly uniform and differ significantly from those of the surrounding pixels, the railings can be roughly extracted by setting a suitable threshold condition and then detected accurately through morphological processing. Image segmentation based on an active contour model can also be used; this method turns the image segmentation problem into a variational problem of minimizing an energy functional: an initial contour curve is set and then driven, by minimizing the energy functional, to gradually approach the contour of the railing to be segmented.
However, considering that livestock-farm environments are fairly complex, the first scheme (local dynamic threshold segmentation) performs poorly when the railing gray levels vary or differ little from those of the surrounding pixels; and the second scheme does not generalize, since when the scene is transplanted, the camera angle and lighting conditions change greatly.
To overcome these defects, this application further provides a railing detection scheme that uses a semantic segmentation neural network model (such as Enet, which is used as the example below) as the segmentation network. Applied to animal husbandry, it can assist livestock management and can operate in poor environments, that is, in scenes where the lighting is not fixed, which increases the scene compatibility of the scheme. The technical solution of this application is described further below with reference to the steps shown in FIG. 2.
In the technical solution provided by step S202, the first image is collected by an image acquisition device. The image acquisition device can be a camera fixed in the livestock farm; to save cost and make collection more targeted, a drone equipped with a camera can also be used, for example collecting the first image while cruising.
In the technical solution provided by step S204, the above target model mainly includes two parts, namely a first network (which may be called the initial layer) and a second network (which may also be called the bottleneck structure). Taking Enet as an example, to meet the strict timeliness requirements of practical applications, Enet reduces floating-point operations to address the poor timeliness of semantic segmentation models. Its architecture consists of an initial block (the module where the initial layer is located) and multiple bottleneck modules, for example five, in which case the first three bottleneck stages encode the input image and the last two decode it.
In the solution of this application, when the target model is used to identify the pens in the first image and obtain the second image including the identified pens, the first image can first be compressed by the first network in the target model to obtain a fourth image. The compression operation eliminates visually redundant information in the first image (the data describing a source is the sum of information and data redundancy, i.e., data = information + data redundancy, and visual redundancy is a kind of data redundancy commonly present in image data). For example, the initial layer compresses the image and filters out its visually redundant information: as shown in FIG. 3, the input is passed in parallel through a 3x3 convolution kernel with stride 2 and a MaxPooling layer, and the two results are concatenated (concat) along the channel (depth) dimension. The fourth image is then semantically segmented by the second network in the target model to obtain the second image. In Enet, the five bottleneck stages form a larger encoder (the first three stages) and a smaller decoder (the last two stages), which shrinks the network as much as possible and reduces the number of parameters without significantly affecting segmentation accuracy.
Optionally, when the first image is compressed by the first network in the target model to obtain the fourth image, a convolution operation can be performed on the first image by the convolutional layer of the first network and a pooling operation by the pooling layer of the first network; the splicing layer of the first network then concatenates the result of the convolution operation and the result of the pooling operation to obtain the fourth image.
Taking Enet as an example, the bottleneck structure is shown in FIG. 4. A bottleneck module may include: a 1x1 projection layer for reducing the feature dimension; a main convolution layer (conv) for performing the feature convolution; and a 1x1 expansion layer; batch normalization and PReLU are placed between all convolutional layers. If the bottleneck module performs downsampling, a max-pooling layer (MaxPooling) is added to the main branch; conversely, for upsampling, a padding layer (Padding) is added to the main branch. The first 1x1 projection can be replaced by a 2x2 convolution with stride 2, and Spatial Dropout can be used as the regularizer.
In FIG. 4, batch normalization (BN) can be applied; PReLU denotes the parametric rectified linear unit activation function, and MaxPooling denotes max pooling.
In actual scenes, this solution can segment the railings accurately and thereby detect them; for the scene shown in FIG. 5, the detection result obtained with this solution (i.e., the second image) is shown in FIG. 6.
In the technical solution provided by step S206, denoising the second image to obtain the third image can be implemented through the following steps:
Step S2062: perform morphological processing on the second image to obtain a fifth image, where the morphological processing eliminates noise in the second image through dilation and erosion.
Optionally, performing the morphological processing on the second image to obtain the fifth image includes applying a dilation formula and an erosion formula. The dilation formula is A⊕B = {x | (B)x ∩ A ≠ Φ}, where A denotes the second image, B denotes the convolution kernel, ⊕ is the dilation operator, x denotes a point, and (B)x denotes the result of dilating at x with B; the condition that the intersection of (B)x and A is not the empty set Φ means that, when B is convolved over A, the kernel and A are guaranteed to have an intersection, i.e., the convolution has a boundary. The erosion formula is AΘB = {x | (B)x ⊆ A}, where Θ is the erosion operator and (B)x ⊆ A means that the result (B)x of eroding at x with B belongs to A, i.e., the result after convolution must lie within the range of A.
Erosion is performed first and then dilation: the erosion first removes the noise (and shrinks the object), and the subsequent dilation enlarges the object again, while the noise removed by the earlier erosion does not return, thereby achieving noise reduction. These operations extract better (less noisy) information about the shape of the object or enlarge important features, as in the case of corner and railing detection.
Step S2064: perform connected-domain analysis on the fifth image to obtain the third image, where the connected-domain analysis eliminates noise in the fifth image.
Optionally, performing the connected-domain analysis on the fifth image to obtain the third image includes: identifying the connected domains in the fifth image and filling those whose area is smaller than a first target threshold with black to obtain a sixth image; inverting the pixel values of the pixels in the sixth image to obtain a seventh image; identifying the connected domains in the seventh image and filling those whose area is smaller than a second target threshold with black to obtain an eighth image; and inverting the pixel values of the pixels in the eighth image to obtain the third image.
With the above scheme, the noise in the image can be further eliminated, the required features (such as the railings) become more prominent, and the gaps between the railings are filled.
In the technical solution provided by step S208, the third image is used to determine the information of the pens in the livestock farm; for example, the closed regions are used to determine the number and positions of the pens, and the number of pigs in each pen is determined further.
As an optional embodiment, in a smart pig-farm system, pigs are kept with the pen as the unit, and iron railings are usually used to separate pen from pen; an algorithm that detects the railings intelligently can provide strong support for building the subsequent smart pig-farm system. The application of the technical solution of this application to a pig farm is shown in FIG. 7:
Step S702: segment and detect the railings.
A deep-learning method is used to segment and detect the railings: the Enet network is trained to obtain the railing segmentation model (i.e., the target model), and the surveillance image is input into the model to obtain the railing segmentation image. The surveillance image (i.e., the first image) is shown in FIG. 8, and the railing detection result is shown in FIG. 9.
Step S704: perform morphological processing to fill the gaps between the railings.
The detection result is processed morphologically: through dilation and erosion, the white part of the image is expanded or shrunk to fill the gaps between the railings. The dilation and erosion formulas are A⊕B = {x | (B)x ∩ A ≠ Φ} and AΘB = {x | (B)x ⊆ A}, respectively, where A denotes the railing detection image, B denotes the convolution kernel, ⊕ is the dilation operator, and Θ is the erosion operator.
Step S706: find the connected domains. By labeling the white pixels (the target) in the binary image, each separate connected region forms one labeled block.
Step S708: fill the irrelevant connected domains according to the scene. The connected domains with small areas are filled with black to further determine the positions of the main pens; the filling result is shown in FIG. 10.
Step S710: invert the image. The pixel value of each pixel is traversed and subtracted from 255 to obtain the inverted image.
Step S712: find and fill the small connected domains. For the inverted image, the areas of the regions with gray value 255 are computed, a suitable threshold is set, the small regions are filled with black, and the small black regions are thereby removed.
Step S714: invert the image again, i.e., the reverse of step S710; the result is shown in FIG. 11.
In the technical solution of this application, a deep-learning segmentation model is used to detect the railings, which improves generalization; railing detection can be applied to animal husbandry to help manage livestock; and a complete solution for main-pen analysis in animal husbandry, including the pen-filling procedure, is provided. With this solution, the railings can be detected effectively, paving the way for subsequent livestock management such as counting the animals in each pen. In the deployment of a smart pig-farm system, main-pen analysis plays a key auxiliary role: in the early stage it reduces the camera-angle debugging process, and in the later stage it increases the accuracy of intelligently counting the pigs in each pen.
It should be noted that, for the sake of brevity, the foregoing method embodiments are all described as combinations of a series of actions, but those skilled in the art should understand that this application is not limited by the described order of actions, because according to this application some steps can be performed in other orders or simultaneously. Furthermore, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by this application.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods according to the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product; the computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions to enable a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of this application.
According to another aspect of the embodiments of this application, an apparatus for determining confinement pen information for implementing the above method for determining confinement pen information is further provided. FIG. 12 is a schematic diagram of an optional apparatus for determining confinement pen information according to an embodiment of this application; as shown in FIG. 12, the apparatus may include:
an acquiring unit 1201, configured to acquire a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity;
a recognition unit 1203, configured to use a target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition;
a denoising unit 1205, configured to denoise the second image to obtain a third image; and
a determining unit 1207, configured to use the third image to determine the information of the pens in the livestock farm.
It should be noted that the acquiring unit 1201 in this embodiment may be configured to perform step S202 of the embodiments of this application, the recognition unit 1203 may be configured to perform step S204, the denoising unit 1205 may be configured to perform step S206, and the determining unit 1207 may be configured to perform step S208.
It should be noted here that the examples and application scenarios implemented by the above modules and the corresponding steps are the same, but are not limited to the contents disclosed in the above embodiments. It should be noted that, as part of the apparatus, the above modules can run in the hardware environment shown in FIG. 1 and can be implemented by software or by hardware.
Through the above modules, an image obtained by directly photographing the livestock farm is processed: the target model performs a rough identification of the pens in the first image to obtain a second image including the identified pens, the second image is denoised to obtain a third image, and the third image is then used to determine the information of the pens in the livestock farm. Pen-related information can thus be identified by image recognition, which solves the technical problem in the related art that compiling pen information is inefficient and achieves the technical effect of quickly identifying pen-related information.
Optionally, the recognition unit includes: a compression module configured to compress the first image through the first network in the target model to obtain a fourth image, where the compression operation is used to eliminate visually redundant information in the first image; and a segmentation module configured to perform semantic segmentation on the fourth image through the second network in the target model to obtain the second image.
Optionally, the compression module may also be configured to: perform a convolution operation on the first image through the convolutional layer of the first network and a pooling operation on the first image through the pooling layer of the first network, and then concatenate the result of the convolution operation and the result of the pooling operation through the splicing layer of the first network to obtain the fourth image.
Optionally, the above denoising unit may also be configured to: perform morphological processing on the second image to obtain a fifth image, where the morphological processing eliminates noise in the second image through dilation and erosion; and perform connected-domain analysis on the fifth image to obtain the third image, where the connected-domain analysis eliminates noise in the fifth image.
Optionally, the above denoising unit may also be configured to dilate the second image using the dilation formula and erode the second image using the erosion formula to obtain the fifth image, the dilation formula being A⊕B = {x | (B)x ∩ A ≠ Φ}, where A denotes the second image, B denotes the convolution kernel, ⊕ is the dilation operator, x denotes a point, and (B)x ∩ A ≠ Φ means the intersection of the dilation result (B)x and A is not the empty set Φ; and the erosion formula being AΘB = {x | (B)x ⊆ A}, where Θ is the erosion operator and (B)x ⊆ A means the erosion result (B)x belongs to A.
Optionally, the above denoising unit may also be configured to: identify the connected domains in the fifth image and fill those whose area is smaller than the first target threshold with black to obtain a sixth image; invert the pixel values of the pixels in the sixth image to obtain a seventh image; identify the connected domains in the seventh image and fill those whose area is smaller than the second target threshold with black to obtain an eighth image; and invert the pixel values of the pixels in the eighth image to obtain the third image.
It should be noted here that the examples and application scenarios implemented by the above modules and the corresponding steps are the same, but are not limited to the contents disclosed in the above embodiments. It should be noted that, as part of the apparatus, the above modules can run in the hardware environment shown in FIG. 1 and can be implemented by software or by hardware, where the hardware environment includes a network environment.
According to another aspect of the embodiments of this application, a server or terminal for implementing the above method for determining confinement pen information is further provided.
FIG. 13 is a structural block diagram of a terminal according to an embodiment of this application. As shown in FIG. 13, the terminal may include one or more processors 1301 (only one is shown in FIG. 13), a memory 1303, and a transmission device 1305; as shown in FIG. 13, the terminal may also include an input/output device 1307.
The memory 1303 can be used to store software programs and modules, such as the program instructions/modules corresponding to the method and apparatus for determining confinement pen information in the embodiments of this application; by running the software programs and modules stored in the memory 1303, the processor 1301 executes various functional applications and data processing, that is, implements the above method for determining confinement pen information. The memory 1303 may include high-speed random access memory and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1303 may further include memory remotely located with respect to the processor 1301, and such remote memory may be connected to the terminal through a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The above transmission device 1305 is used to receive or send data via a network and can also be used for data transmission between the processor and the memory. Specific examples of the above network may include wired networks and wireless networks. In one example, the transmission device 1305 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices and routers via a network cable so as to communicate with the Internet or a local area network. In one example, the transmission device 1305 is a radio frequency (Radio Frequency, RF) module, which is used to communicate with the Internet wirelessly.
Optionally, the memory 1303 is used to store application programs.
The processor 1301 may call the application programs stored in the memory 1303 through the transmission device 1305 to perform the following steps:
acquiring a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity;
using a target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition;
denoising the second image to obtain a third image; and
using the third image to determine the information of the pens in the livestock farm.
The processor 1301 is further configured to perform the following steps:
identifying the connected domains in the fifth image and filling those whose area is smaller than the first target threshold with black to obtain a sixth image;
inverting the pixel values of the pixels in the sixth image to obtain a seventh image;
identifying the connected domains in the seventh image and filling those whose area is smaller than the second target threshold with black to obtain an eighth image; and
inverting the pixel values of the pixels in the eighth image to obtain the third image.
With the embodiments of this application, a scheme is provided in which a first image to be processed is acquired, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity; a target model identifies the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition; the second image is denoised to obtain a third image; and the third image is used to determine the information of the pens in the livestock farm. Pen-related information can thus be identified through image recognition, which solves the technical problem in the related art that compiling pen information is inefficient and achieves the technical effect of quickly identifying pen-related information.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, and details are not repeated here.
A person of ordinary skill in the art can understand that the structure shown in FIG. 13 is only illustrative; the terminal may be a terminal device such as a smart phone (for example, an Android phone or an iOS phone), a tablet computer, a handheld computer, a mobile Internet device (Mobile Internet Device, MID), or a PAD. FIG. 13 does not limit the structure of the above electronic apparatus; for example, the terminal may include more or fewer components (such as a network interface or a display device) than shown in FIG. 13, or have a configuration different from that shown in FIG. 13.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing hardware related to the terminal device; the program may be stored in a computer-readable storage medium, and the storage medium may include a flash disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, and the like.
The embodiments of this application also provide a storage medium. Optionally, in this embodiment, the above storage medium may be used to store the program code for executing the method for determining confinement pen information.
Optionally, in this embodiment, the above storage medium may be located on at least one of the multiple network devices in the network shown in the above embodiments.
Optionally, in this embodiment, the storage medium is configured to store program code for executing the following steps:
acquiring a first image to be processed, where the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate the animals kept in captivity;
using a target model to identify the pens in the first image to obtain a second image including the identified pens, where the target model is a preset semantic segmentation neural network model for pen recognition;
denoising the second image to obtain a third image; and
using the third image to determine the information of the pens in the livestock farm.
Optionally, the storage medium is also configured to store program code for executing the following steps:
identifying the connected domains in the fifth image and filling those whose area is smaller than the first target threshold with black to obtain a sixth image;
inverting the pixel values of the pixels in the sixth image to obtain a seventh image;
identifying the connected domains in the seventh image and filling those whose area is smaller than the second target threshold with black to obtain an eighth image; and
inverting the pixel values of the pixels in the eighth image to obtain the third image.
Optionally, for specific examples in this embodiment, reference may be made to the examples described in the above embodiments, and details are not repeated here.
Optionally, in this embodiment, the above storage medium may include, but is not limited to, various media that can store program code, such as a USB flash drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a removable hard disk, a magnetic disk, or an optical disc.
The serial numbers of the above embodiments of this application are only for description and do not represent the superiority or inferiority of the embodiments.
If the integrated unit in the above embodiments is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in the above computer-readable storage medium. Based on this understanding, the technical solution of this application, in essence or in the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to enable one or more computer devices (which may be personal computers, servers, network devices, or the like) to execute all or part of the steps of the methods described in the embodiments of this application.
In the above embodiments of this application, the description of each embodiment has its own emphasis; for parts not detailed in a certain embodiment, reference may be made to the relevant descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed client can be implemented in other ways. The apparatus embodiments described above are only illustrative; for example, the division of the units is only a logical functional division, and in actual implementation there may be other ways of dividing them: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, units, or modules, and may be electrical or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
The above are only preferred embodiments of this application. It should be pointed out that, for a person of ordinary skill in the art, several improvements and modifications can be made without departing from the principles of this application, and these improvements and modifications should also be regarded as falling within the protection scope of this application.

Claims (11)

  1. A method for determining confinement pen information, comprising:
    acquiring a first image to be processed, wherein the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate animals kept in captivity;
    using a target model to identify the pens in the first image to obtain a second image comprising the identified pens, wherein the target model is a preset semantic segmentation neural network model for pen recognition;
    denoising the second image to obtain a third image; and
    using the third image to determine information of the pens in the livestock farm.
  2. The method according to claim 1, wherein using the target model to identify the pens in the first image to obtain the second image comprising the identified pens comprises:
    performing a compression operation on the first image through a first network in the target model to obtain a fourth image, wherein the compression operation is used to eliminate visually redundant information in the first image; and
    performing semantic segmentation on the fourth image through a second network in the target model to obtain the second image.
  3. The method according to claim 2, wherein performing the compression operation on the first image through the first network in the target model to obtain the fourth image comprises:
    performing a convolution operation on the first image through a convolutional layer of the first network, and performing a pooling operation on the first image through a pooling layer of the first network; and
    concatenating, through a splicing layer of the first network, the result of the convolution operation and the result of the pooling operation to obtain the fourth image.
  4. The method according to any one of claims 1 to 3, wherein denoising the second image to obtain the third image comprises:
    performing morphological processing on the second image to obtain a fifth image, wherein the morphological processing is used to eliminate noise in the second image through dilation and erosion; and
    performing connected-domain analysis on the fifth image to obtain the third image, wherein the connected-domain analysis is used to eliminate noise in the fifth image.
  5. The method according to claim 4, wherein performing the morphological processing on the second image to obtain the fifth image comprises:
    dilating the second image using a dilation formula and eroding the second image using an erosion formula to obtain the fifth image,
    the dilation formula being A⊕B = {x | (B)x ∩ A ≠ Φ},
    wherein A denotes the second image, B denotes the convolution kernel, ⊕ is the dilation operator, x denotes a point, and (B)x ∩ A ≠ Φ means that the intersection of the result (B)x of dilating x with B and A is not the empty set Φ; and
    the erosion formula being AΘB = {x | (B)x ⊆ A},
    wherein Θ is the erosion operator, and (B)x ⊆ A means that the result (B)x of eroding x with B belongs to A.
  6. The method according to claim 4, wherein performing the connected-domain analysis on the fifth image to obtain the third image comprises:
    identifying connected domains in the fifth image, and filling the connected domains in the fifth image whose area is smaller than a first target threshold with black to obtain a sixth image;
    inverting pixel values of pixels in the sixth image to obtain a seventh image;
    identifying connected domains in the seventh image, and filling the connected domains in the seventh image whose area is smaller than a second target threshold with black to obtain an eighth image; and
    inverting pixel values of pixels in the eighth image to obtain the third image.
  7. An apparatus for determining confinement pen information, comprising:
    an acquiring unit, configured to acquire a first image to be processed, wherein the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate animals kept in captivity;
    a recognition unit, configured to use a target model to identify the pens in the first image to obtain a second image comprising the identified pens, wherein the target model is a preset semantic segmentation neural network model for pen recognition;
    a denoising unit, configured to denoise the second image to obtain a third image; and
    a determining unit, configured to use the third image to determine information of the pens in the livestock farm.
  8. The apparatus according to claim 7, wherein the recognition unit comprises:
    a compression module, configured to perform a compression operation on the first image through a first network in the target model to obtain a fourth image, wherein the compression operation is used to eliminate visually redundant information in the first image; and
    a segmentation module, configured to perform semantic segmentation on the fourth image through a second network in the target model to obtain the second image.
  9. A system for determining confinement pen information, comprising:
    an image acquisition device, configured to acquire a first image to be processed, wherein the first image is an image obtained by photographing a livestock farm, and the livestock farm uses confinement pens to isolate animals kept in captivity; and
    a server, configured to use a target model to identify the pens in the first image to obtain a second image comprising the identified pens, to denoise the second image to obtain a third image, and to use the third image to determine information of the pens in the livestock farm, wherein the target model is a preset semantic segmentation neural network model for pen recognition.
  10. A storage medium, comprising a stored program, wherein the method according to any one of claims 1 to 6 is executed when the program runs.
  11. An electronic apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the method according to any one of claims 1 to 6 through the computer program.
PCT/CN2020/129765 2019-11-26 2020-11-18 圈养栏信息的确定方法、装置及系统、存储介质 WO2021104124A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911176750.3A CN111161090B (zh) 2019-11-26 2019-11-26 圈养栏信息的确定方法、装置及系统、存储介质
CN201911176750.3 2019-11-26

Publications (1)

Publication Number Publication Date
WO2021104124A1 true WO2021104124A1 (zh) 2021-06-03

Family

ID=70556172

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/129765 WO2021104124A1 (zh) 2019-11-26 2020-11-18 圈养栏信息的确定方法、装置及系统、存储介质

Country Status (2)

Country Link
CN (1) CN111161090B (zh)
WO (1) WO2021104124A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189627A (zh) * 2021-11-24 2022-03-15 河南牧原智能科技有限公司 用于获取相机预置角度、监测养殖栏的方法及产品

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111161090B (zh) * 2019-11-26 2022-12-27 京东科技信息技术有限公司 圈养栏信息的确定方法、装置及系统、存储介质
CN111539384B (zh) * 2020-05-26 2023-05-30 京东科技信息技术有限公司 牧场采食监控方法、系统、装置、设备及存储介质
CN113836982A (zh) * 2020-06-24 2021-12-24 阿里巴巴集团控股有限公司 图像处理方法、装置、存储介质及计算机设备
CN112465722A (zh) * 2020-12-04 2021-03-09 武汉大学 一种异常相位图像的修复方法

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506768A (zh) * 2017-10-11 2017-12-22 电子科技大学 一种基于全卷积神经网络的输电线路导线断股识别方法
CN109740465A (zh) * 2018-12-24 2019-05-10 南京理工大学 一种基于实例分割神经网络框架的车道线检测算法
CN110148135A (zh) * 2019-04-03 2019-08-20 深兰科技(上海)有限公司 一种路面分割方法、装置、设备及介质
CN111161090A (zh) * 2019-11-26 2020-05-15 北京海益同展信息科技有限公司 圈养栏信息的确定方法、装置及系统、存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10067509B1 (en) * 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
CN107862294B (zh) * 2017-11-21 2021-05-18 北京中科慧眼科技有限公司 一种基于形态学重建的车道线检测方法与装置
CN109523543B (zh) * 2018-11-26 2023-01-03 西安工程大学 一种基于边缘距离的导线断股检测方法
CN109711341B (zh) * 2018-12-27 2021-03-09 宽凳(北京)科技有限公司 一种虚拟车道线识别方法及装置、设备、介质
CN110335277A (zh) * 2019-05-07 2019-10-15 腾讯科技(深圳)有限公司 图像处理方法、装置、计算机可读存储介质和计算机设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506768A (zh) * 2017-10-11 2017-12-22 电子科技大学 一种基于全卷积神经网络的输电线路导线断股识别方法
CN109740465A (zh) * 2018-12-24 2019-05-10 南京理工大学 一种基于实例分割神经网络框架的车道线检测算法
CN110148135A (zh) * 2019-04-03 2019-08-20 深兰科技(上海)有限公司 一种路面分割方法、装置、设备及介质
CN111161090A (zh) * 2019-11-26 2020-05-15 北京海益同展信息科技有限公司 圈养栏信息的确定方法、装置及系统、存储介质

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114189627A (zh) * 2021-11-24 2022-03-15 河南牧原智能科技有限公司 用于获取相机预置角度、监测养殖栏的方法及产品

Also Published As

Publication number Publication date
CN111161090B (zh) 2022-12-27
CN111161090A (zh) 2020-05-15

Similar Documents

Publication Publication Date Title
WO2021104124A1 (zh) 圈养栏信息的确定方法、装置及系统、存储介质
WO2021104125A1 (zh) 禽蛋异常的识别方法、装置及系统、存储介质、电子装置
CN112861575A (zh) 一种行人结构化方法、装置、设备和存储介质
US20220084304A1 (en) Method and electronic device for image processing
CN106991364B (zh) 人脸识别处理方法、装置以及移动终端
CN113128368B (zh) 一种人物交互关系的检测方法、装置及系统
US20210201501A1 (en) Motion-based object detection method, object detection apparatus and electronic device
WO2023173646A1 (zh) 一种表情识别方法及装置
CN113011403B (zh) 手势识别方法、系统、介质及设备
CN114898342A (zh) 行驶中非机动车驾驶员接打电话的检测方法
WO2019128735A1 (zh) 图像处理方法及装置
CN110298239B (zh) 目标监控方法、装置、计算机设备及存储介质
Zhou et al. Deep images enhancement for turbid underwater images based on unsupervised learning
CN111325078A (zh) 一种人脸识别方法、装置及存储介质
CN116364064B (zh) 一种音频拼接方法、电子设备及存储介质
CN112235598A (zh) 一种视频结构化处理方法、装置及终端设备
CN110163489B (zh) 一种戒毒运动锻炼成效评价方法
CN116740643A (zh) 一种基于视觉图像的鸟类识别系统及方法
CN113762231B (zh) 端对端的多行人姿态跟踪方法、装置及电子设备
US20220122341A1 (en) Target detection method and apparatus, electronic device, and computer storage medium
CN105830437A (zh) 一种监控系统中背景识别的方法及系统
Xingshi et al. Light-weight recognition network for dairy cows based on the fusion of YOLOv5s and channel pruning algorithm.
CN116546304A (zh) 一种参数配置方法、装置、设备、存储介质及产品
CN109716770A (zh) 基于语义相关性的图像压缩
CN113537359A (zh) 训练数据的生成方法及装置、计算机可读介质和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20894634

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20894634

Country of ref document: EP

Kind code of ref document: A1