CN114332925A - Method, system and device for detecting pets in elevator and computer readable storage medium - Google Patents


Publication number
CN114332925A
Authority
CN
China
Prior art keywords
pet
image
elevator
frame
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111566168.5A
Other languages
Chinese (zh)
Inventor
钟晨初
李成文
林晓坤
董晓楠
李学锋
田文龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Huichuan Control Technology Co Ltd
Original Assignee
Suzhou Huichuan Control Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Huichuan Control Technology Co Ltd
Priority to CN202111566168.5A
Publication of CN114332925A
Legal status: Pending

Abstract

The invention discloses a method, system and device for detecting pets in an elevator, and a computer-readable storage medium. The detection method comprises the following steps: dynamically acquiring single-frame images from inside the elevator; segmenting each single-frame image with a preset image preprocessing module, and adjusting the segmented image to obtain a preprocessed image; recognizing the preprocessed image with a plurality of preset image recognition modules to obtain recognition results, and summing those results to obtain a pet-category recognition result; and sending the pet-category recognition result to a management terminal associated with the elevator so that a manager can take countermeasures. By avoiding computation on the original full image, the invention greatly reduces the amount of computation while preserving the relevant features, which both safeguards the accuracy of the recognized pet category and substantially improves computational efficiency.

Description

Method, system and device for detecting pets in elevator and computer readable storage medium
Technical Field
The invention relates to the field of elevator safety, in particular to a method, a system and a device for detecting pets in an elevator and a computer readable storage medium.
Background
In recent years, with the growth of high-rise construction, people have become increasingly dependent on elevators. Unsafe behavior in car-type elevators occurs frequently and can easily cause elevator faults or even accidents, including accidents caused by residents' improper management of their pets. Some supervision and restriction of pets brought into the elevator is therefore required.
At present, monitoring cameras are installed in most elevator cars, but a traditional video monitoring system requires a person to watch the surveillance video in real time, generally relying on manual supervision by property or community managers. Owing to negligence or a shortage of personnel, supervision is often inadequate and incomplete, enforcement is poor, and misjudgments occur easily.
Disclosure of Invention
The invention mainly aims to provide a method, a system and a device for detecting pets in an elevator, and a computer-readable storage medium, with the goal of accurately judging the category and state of pets in the elevator so as to prevent them from threatening passenger safety.
To achieve the above object, the invention provides an in-elevator pet detection method, comprising the following steps:
dynamically acquiring a single-frame image in an elevator;
performing image segmentation on the single-frame image through a preset image preprocessing module, and performing image adjustment on the image after image segmentation to obtain a preprocessed image;
identifying the preprocessed images through a plurality of preset image identification modules to obtain identification results, and summing the identification results to obtain pet category identification results;
and sending the pet category identification result to a management terminal associated with the elevator so as to enable a manager to take countermeasures.
Optionally, the step of performing image segmentation on the single frame image through a preset image preprocessing module includes:
generating a pet-region capture frame in the single-frame image through a preset image preprocessing module;
and segmenting the single-frame image with the pet-region capture frame to obtain a pet-region image, and assigning a pet identifier to the pet in the pet-region image.
Optionally, the step of performing image adjustment on the image after image segmentation to obtain a preprocessed image includes:
and adjusting the pixel size of the pet area image to a preset size through a preset image preprocessing module to obtain a preprocessed image.
Optionally, the step of recognizing the preprocessed image through a plurality of preset image recognition modules includes:
and respectively identifying the pet categories in the preprocessed image through a plurality of preset image identification modules to obtain the respective identified pet categories and confidence degrees, and storing the pet categories and the confidence degrees in association with the corresponding pet identifications.
Optionally, the step of summing the recognition results to obtain the pet category recognition result includes:
summing the pet categories and the confidence degrees of the pets identified by the two image recognition modules in the same time period to obtain a summation result;
and obtaining the pet category identification result according to the summation result.
Optionally, the step of obtaining the pet category identification result according to the summation result comprises:
and selecting the pet category with the highest confidence as the pet category identification result of the pet with the same pet identification according to the summation result.
Optionally, the step of dynamically acquiring a single frame image in the elevator is preceded by:
calling a preset camera to collect video in an elevator, and converting the video in the elevator into a single-frame image;
and calling the preset camera to upload the single-frame images to a local database frame by frame for storage.
In addition, in order to achieve the above object, the present invention further provides an in-elevator pet detection system, which includes a cloud server, a cloud database, a first edge computing device, a second edge computing device, a first local database, a second local database, a first camera, a second camera, and a display terminal;
the cloud server is respectively in network connection with the cloud database, the first edge computing device and the second edge computing device;
the first edge computing device is respectively in network connection with the first local database, the first camera and the second camera;
the second edge computing device is respectively connected with the second local database and the display terminal through a network;
the in-elevator pet detection system comprises an in-elevator pet detection program, and the in-elevator pet detection program realizes the steps of the in-elevator pet detection method when being executed by a processor.
In addition, in order to achieve the above object, the present invention also provides an in-elevator pet detecting device, including: the pet detection system comprises a memory, a processor and an in-elevator pet detection program stored on the memory and capable of running on the processor, wherein the in-elevator pet detection program realizes the steps of the in-elevator pet detection method when being executed by the processor.
In addition, to achieve the above object, the present invention also provides a computer readable storage medium having an in-elevator pet detection program stored thereon, which when executed by a processor, implements the steps of the in-elevator pet detection method as described above.
The invention provides a method, system, device and computer-readable storage medium for detecting pets in an elevator. In the method, ROI (region of interest) images are segmented from the surveillance video by a convolutional neural network, discarding invalid image information; because the original full image is never processed directly, the amount of computation is greatly reduced while the relevant features are preserved. The segmented images are then resized to ease feature extraction, and finally two parallel deep learning networks recognize the pet and their confidence scores are summed, which safeguards the accuracy of the recognized pet category. Because the preprocessing is performed by edge computing, computational efficiency is also greatly improved, achieving the goal of accurately identifying the pet type. Moreover, the in-elevator pet detection system of the invention can further improve computational efficiency by combining edge computing nodes with a cloud server.
Drawings
Fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a pet detection method in an elevator according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of the framework of the pet detection system in the elevator of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: a method for detecting pets in an elevator comprises the following steps:
dynamically acquiring a single-frame image in an elevator;
performing image segmentation on the single-frame image through a preset image preprocessing module, and performing image adjustment on the image after image segmentation to obtain a preprocessed image;
identifying the preprocessed images through a plurality of preset image identification modules to obtain identification results, and summing the identification results to obtain pet category identification results;
and sending the pet category identification result to a management terminal associated with the elevator so as to enable a manager to take countermeasures.
In recent years, with the growth of high-rise construction, people have become increasingly dependent on elevators. Unsafe behavior in car-type elevators occurs frequently and can easily cause elevator faults or even accidents, including accidents caused by residents' improper management of their pets. Some supervision and restriction of pets brought into the elevator is therefore required.
At present, monitoring cameras are installed in most elevator cars, but a traditional video monitoring system requires a person to watch the surveillance video in real time, generally relying on manual supervision by property or community managers. Owing to negligence or a shortage of personnel, supervision is often inadequate and incomplete, enforcement is poor, and misjudgments occur easily.
The invention provides an in-elevator pet detection method in which ROI images are segmented from the surveillance video by a convolutional neural network, discarding invalid image information; because the original full image is never processed directly, the amount of computation is greatly reduced while the relevant features are preserved. The segmented images are then resized to ease feature extraction, and finally two parallel deep learning networks recognize the pet and their confidence scores are summed, safeguarding the accuracy of the recognized pet category. Edge computing is used for the preprocessing, which greatly improves computational efficiency and achieves the goal of accurately identifying the pet type.
As shown in fig. 1, fig. 1 is a schematic terminal structure diagram of a hardware operating environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention can be a PC, and can also be intelligent terminal equipment such as a smart phone, a tablet computer, a portable computer and the like.
As shown in fig. 1, the terminal may include: a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002, where the communication bus 1002 enables communication among these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include standard wired and wireless interfaces. The network interface 1004 may optionally include standard wired and wireless interfaces (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory), and may alternatively be a storage device separate from the processor 1001.
Optionally, the terminal may further include a camera, a radio frequency (RF) circuit, a sensor, an audio circuit, a WiFi module, and the like. The sensors may include light sensors, motion sensors, and others. Specifically, the light sensors may include an ambient light sensor, which adjusts the brightness of the display according to ambient light, and a proximity sensor, which turns off the display and/or backlight when the mobile terminal is moved to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration along each axis (generally three) and, when the terminal is stationary, the magnitude and direction of gravity; it can be used for applications that recognize the terminal's attitude (such as portrait/landscape switching, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection). The mobile terminal may of course also be configured with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not described here again.
Those skilled in the art will appreciate that the terminal structure shown in fig. 1 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is one type of computer storage medium, may include an operating system, a network communication module, a user interface module, and an in-elevator pet detection program.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting to a backend server and performing data communication with the backend server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and processor 1001 may be configured to invoke the in-elevator pet detection program stored in memory 1005 and perform the following operations:
dynamically acquiring a single-frame image in an elevator;
performing image segmentation on the single-frame image through a preset image preprocessing module, and performing image adjustment on the image after image segmentation to obtain a preprocessed image;
identifying the preprocessed images through a plurality of preset image identification modules to obtain identification results, and summing the identification results to obtain pet category identification results;
and sending the pet category identification result to a management terminal associated with the elevator so as to enable a manager to take countermeasures.
Further, the processor 1001 may invoke the in-elevator pet detection program stored in the memory 1005, and also perform the following operations:
the step of performing image segmentation on the single-frame image through a preset image preprocessing module comprises the following steps of:
generating a pet-region capture frame in the single-frame image through a preset image preprocessing module;
and segmenting the single-frame image with the pet-region capture frame to obtain a pet-region image, and assigning a pet identifier to the pet in the pet-region image.
Further, the processor 1001 may invoke the in-elevator pet detection program stored in the memory 1005, and also perform the following operations:
the step of adjusting the image after the image segmentation to obtain the preprocessed image comprises:
and adjusting the pixel size of the pet area image to a preset size through a preset image preprocessing module to obtain a preprocessed image.
Further, the processor 1001 may invoke the in-elevator pet detection program stored in the memory 1005, and also perform the following operations:
the step of recognizing the preprocessed image by a plurality of preset image recognition modules comprises:
and respectively identifying the pet categories in the preprocessed image through a plurality of preset image identification modules to obtain the respective identified pet categories and confidence degrees, and storing the pet categories and the confidence degrees in association with the corresponding pet identifications.
Further, the processor 1001 may invoke the in-elevator pet detection program stored in the memory 1005, and also perform the following operations:
the step of summing the recognition results to obtain the pet category recognition result comprises the following steps:
summing the pet categories and the confidence degrees of the pets identified by the two image recognition modules in the same time period to obtain a summation result;
and obtaining the pet category identification result according to the summation result.
Further, the processor 1001 may invoke the in-elevator pet detection program stored in the memory 1005, and also perform the following operations:
the step of obtaining the pet category identification result according to the summation result comprises the following steps:
and selecting the pet category with the highest confidence as the pet category identification result of the pet with the same pet identification according to the summation result.
Further, the processor 1001 may invoke the in-elevator pet detection program stored in the memory 1005, and also perform the following operations:
the step of dynamically acquiring a single frame image within an elevator previously comprises:
calling a preset camera to collect video in an elevator, and converting the video in the elevator into a single-frame image;
and calling the preset camera to upload the single-frame images to a local database frame by frame for storage.
Referring to fig. 2, a first embodiment of the present invention provides an in-elevator pet detection method, including:
step S10, dynamically acquiring a single frame image in the elevator;
in this embodiment, step S10 includes:
calling a preset camera to collect video in an elevator, and converting the video in the elevator into a single-frame image;
and calling the preset camera to upload the single-frame images to a local database frame by frame for storage.
It should be noted that in this embodiment the executing entity is the in-elevator pet detection system. When the cloud server in the system receives a user's request, initiated through the edge computing device connected to the display terminal, to access the elevator camera (i.e., the preset camera), it sends an access instruction to the edge computing device connected to that camera. That edge computing device then obtains, in real time, the video data collected by the elevator camera (i.e., the in-elevator video), converts it into single-frame images for subsequent image processing, and stores the frames in a local database as a backup.
The edge computing device can be any intelligent terminal device capable of data processing and of connecting to a communication network, such as a PC, smartphone, tablet or portable computer, and the display terminal can be a display device such as a display screen or projector.
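The capture step above (camera video, converted to single-frame images, stored frame by frame) can be sketched as follows. This is an illustrative sketch, not part of the patent: `collect_frames` is a hypothetical name, and the OpenCV-style `read()` protocol for the camera object is an assumption.

```python
def collect_frames(capture, store):
    """Read the in-elevator video frame by frame and hand each single-frame
    image to a storage callback (standing in for the local database).

    `capture` is assumed to expose an OpenCV-style read() returning
    (success, frame); `store` is any callable taking (index, frame).
    """
    count = 0
    while True:
        ok, frame = capture.read()   # read() returns False once the stream ends
        if not ok:
            break
        store(count, frame)          # upload frame-by-frame for backup
        count += 1
    return count
```

With OpenCV, `capture` would typically be a `cv2.VideoCapture` opened on the elevator camera's stream, and `store` would write to the local database.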
Step S20, performing image segmentation on the single-frame image through a preset image preprocessing module, and performing image adjustment on the image after image segmentation to obtain a preprocessed image;
it should be noted that the preset image preprocessing module uses a deep learning network, such as a convolutional neural network, and further, for example, a U-net algorithm, an algorithm for performing semantic segmentation using a full convolutional network.
In this embodiment, step S20 includes:
generating a pet-region capture frame in the single-frame image through a preset image preprocessing module;
segmenting the single-frame image with the pet-region capture frame to obtain a pet-region image, and assigning a pet identifier to the pet in the pet-region image;
and adjusting the pixel size of the pet area image to a preset size through a preset image preprocessing module to obtain a preprocessed image.
In a specific implementation, the edge computing device uses the deep learning network to generate a pet ROI (region of interest) — i.e., the pet-region capture frame — in the single-frame image, segments the frame with this ROI to obtain a smaller image containing the pet (i.e., the pet-region image), and assigns an ID (i.e., the pet identifier) to the pet in that image. The edge computing device then uses the deep learning network again to uniformly resize the segmented image to 224x224 pixels; other sizes such as 112x112 or 336x336 may also be used and can be adjusted to suit the algorithm and actual requirements.
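A minimal sketch of the crop-and-resize stage, assuming the capture frame is given as an `(x, y, w, h)` box over a frame stored as a 2-D list of pixels. The helper names are hypothetical, and the nearest-neighbour resize merely stands in for the network-based Resize step the patent describes:

```python
def crop_roi(image, box):
    """Cut the pet-region image out of a full frame.
    `image` is a 2-D list of pixels; `box` is (x, y, w, h) of the capture frame."""
    x, y, w, h = box
    return [row[x:x + w] for row in image[y:y + h]]

def resize_nearest(image, size=(224, 224)):
    """Resize to the preset size with nearest-neighbour sampling, so the
    recognition networks always receive inputs of a uniform pixel size."""
    out_h, out_w = size
    in_h, in_w = len(image), len(image[0])
    return [
        [image[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
        for i in range(out_h)
    ]
```

In practice the crop and resize would run on real image arrays (e.g. with `cv2.resize`), but the indexing logic is the same.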
Step S30, recognizing the preprocessed image through a plurality of preset image recognition modules to obtain recognition results, and summing the recognition results to obtain a pet category recognition result;
it should be noted that, in this embodiment, two parallel deep learning networks are taken as an example for description, the preset image recognition module uses a deep learning network, but is different from step S20, where the two parallel deep learning networks are a VGG network and a ResNet-50 network, respectively, and the purpose is to recognize and calculate the same object by using different algorithms, and perform comprehensive judgment according to results obtained by the different algorithms, which is more accurate and reliable than a result obtained based on a single algorithm.
In this embodiment, step S30 includes:
respectively identifying the pet categories in the preprocessed image through a plurality of preset image identification modules to obtain the respective identified pet categories and confidence degrees, and storing the pet categories and the confidence degrees in association with the corresponding pet identifications;
summing the pet categories and the confidence degrees of the pets identified by the two image recognition modules in the same time period to obtain a summation result;
and selecting the pet category with the highest confidence as the pet category identification result of the pet with the same pet identification according to the summation result.
In a specific implementation, the edge computing device recognizes the pet's category with the two parallel deep learning networks, obtaining each network's recognized category and confidence; it then sums, over a period of time, the confidences of the categories attributed to pets with the same ID, and selects the category with the highest summed confidence as that pet's category. The result is confirmed by multi-frame voting: for example, over 3 consecutive frames the two classification models each recognize the pet with the same ID, and if they detect different animals, the category with the higher confidence prevails. For instance, if for the same ID over 3 consecutive frames classification model A outputs "Husky, 90%", "Husky, 85%", "Husky, 92%" and classification model B outputs "Alaskan Malamute, 80%", "Alaskan Malamute, 82%", "Alaskan Malamute, 86%", the recognition result is "Husky".
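The confidence summation and highest-confidence selection above can be sketched in a few lines. `fuse_detections` is a hypothetical name, and the `(pet_id, category, confidence)` tuple layout is an assumption about how the two modules' per-frame outputs would be pooled:

```python
from collections import defaultdict

def fuse_detections(detections):
    """Sum, per pet ID, the confidences that the recognition modules reported
    for each category over a time window, then pick for each ID the category
    with the highest summed confidence.

    `detections` is a list of (pet_id, category, confidence) tuples pooled
    from both classification models across consecutive frames.
    """
    totals = defaultdict(float)
    for pet_id, category, confidence in detections:
        totals[(pet_id, category)] += confidence

    best = {}  # pet_id -> (category, summed confidence)
    for (pet_id, category), summed in totals.items():
        if pet_id not in best or summed > best[pet_id][1]:
            best[pet_id] = (category, summed)
    return {pet_id: category for pet_id, (category, _) in best.items()}
```

With the Husky/Alaskan Malamute example from the text, model A's summed confidence (0.90 + 0.85 + 0.92 = 2.67) exceeds model B's (0.80 + 0.82 + 0.86 = 2.48), so the pet is reported as a Husky.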
In addition, the steps of performing algorithm recognition and summation through a plurality of preset image recognition modules in step S30 may also be implemented on a cloud server of the system.
And step S40, sending the pet category identification result to a management terminal associated with the elevator so as to enable a manager to take countermeasures.
It should be noted that the management terminal includes an edge computing device and a display device connected thereto.
In a specific implementation, the system obtains the category of the pet in the elevator through the recognition and summation above and promptly sends it to the property staff through the property computer, so that the staff can take corresponding measures in time.
In this in-elevator pet detection method, ROI images are segmented from the surveillance video by a convolutional neural network, discarding invalid image information; because the original full image is never processed directly, the amount of computation is greatly reduced while the relevant features are preserved. The segmented images are then resized to ease feature extraction, and finally two parallel deep learning networks recognize the pet and their confidence scores are summed, safeguarding the accuracy of the recognized pet category. Edge computing is used for the preprocessing, which greatly improves computational efficiency and achieves the goal of accurately identifying the pet type.
Further, the invention also provides an in-elevator pet detection system, which comprises a cloud server, a cloud database, a first edge calculation device, a second edge calculation device, a first local database, a second local database, a first camera, a second camera and a display terminal;
the cloud server is respectively in network connection with the cloud database, the first edge computing device and the second edge computing device;
the first edge computing device is respectively in network connection with the first local database, the first camera and the second camera;
the second edge computing device is respectively connected with the second local database and the display terminal through a network;
the in-elevator pet detection system comprises an in-elevator pet detection program, and the in-elevator pet detection program realizes the steps of the in-elevator pet detection method in the embodiment when being executed by a processor.
Referring to fig. 3, the cloud server corresponds to a reference numeral 1, the cloud database corresponds to a reference numeral 2, the first edge computing device corresponds to a reference numeral 3, the second edge computing device corresponds to a reference numeral 7, the first local database corresponds to a reference numeral 4, the second local database corresponds to a reference numeral 8, the first camera corresponds to a reference numeral 5, the second camera corresponds to a reference numeral 6, and the display terminal corresponds to a reference numeral 9.
When a user remotely accesses the cloud server 1 over the network through the second edge computing device 7 connected to the display terminal 9, in order to query the state of the first camera 5, the cloud server 1 broadcasts an access instruction. The first edge computing device 3 matches the instruction, and once its external interface or network module receives it, data collection starts through the first camera 5 and the second camera 6 attached to the first edge computing device 3.
The first camera 5 and the second camera 6 transmit the data they acquire in real time to the external interface or network module of the connected first edge computing device 3, which uploads the received data into the memory of the first edge computing device 3. The computing unit of the first edge computing device 3 then preprocesses the data by image segmentation and image compression, and the result is uploaded to the cloud server 1 through the network module of the first edge computing device 3. The cloud server 1 recognizes the pet with the two deep learning networks, votes on the classification results according to the recognition confidences, and stores the result in the cloud database 2; the result is also stored in the first local database 4 through the peripheral interface of the first edge computing device 3 and in the second local database 8 through the peripheral interface of the second edge computing device 7, and is displayed to the manager on the display terminal 9.
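The fan-out at the end of this pipeline — cloud database, both local databases, display terminal — can be sketched as follows, with every sink modelled as a hypothetical callable rather than a real database or display interface:

```python
def distribute_result(result, cloud_db, local_dbs, display):
    """Store the fused recognition result in the cloud database, mirror it
    to each local database, and push it to the display terminal.
    All sinks are plain callables standing in for real interfaces."""
    cloud_db(result)        # cloud database 2
    for db in local_dbs:    # local databases 4 and 8
        db(result)
    display(result)         # display terminal 9
```

In the deployed system each callable would wrap the corresponding network module or peripheral interface described above.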
The edge computing device can be an intelligent terminal device such as a PC, a smart phone, a tablet computer and a portable computer, which can perform data processing and support communication network connection, and the display terminal can be a display device such as a display screen and a projector.
The system can further improve the calculation efficiency of pet categories through a mode of combining the edge calculation nodes and the cloud server.
Furthermore, an embodiment of the present invention further provides a computer-readable storage medium, where an in-elevator pet detection program is stored on the computer-readable storage medium, and when executed by a processor, the in-elevator pet detection program implements the following operations:
dynamically acquiring a single-frame image in an elevator;
performing image segmentation on the single-frame image through a preset image preprocessing module, and performing image adjustment on the image after image segmentation to obtain a preprocessed image;
identifying the preprocessed images through a plurality of preset image identification modules to obtain identification results, and summing the identification results to obtain pet category identification results;
and sending the pet category identification result to a management terminal associated with the elevator so as to enable a manager to take countermeasures.
Further, the in-elevator pet detection program when executed by the processor further performs the following operations:
the step of performing image segmentation on the single-frame image through a preset image preprocessing module comprises:
generating a pet area capture frame in the single-frame image through a preset image preprocessing module;
and segmenting the single-frame image through the pet area capture frame to obtain a pet area image, and assigning a pet identifier to the pet in the pet area image.
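The cropping-and-tagging step can be sketched in a few lines. The box layout `(x, y, width, height)`, the list-of-rows frame representation, and the monotonically increasing pet identifier are all assumptions for illustration; the patent does not specify how the capture frame or the pet mark is encoded.

```python
from itertools import count

_pet_ids = count(1)  # assumed: a simple monotonically increasing pet identifier

def crop_pet_region(frame, box):
    """Cut the pet area capture frame (x, y, width, height) out of a
    single frame (a row-major list of pixel rows) and attach a fresh
    pet identifier to the crop."""
    x, y, w, h = box
    region = [row[x:x + w] for row in frame[y:y + h]]
    return {"pet_id": next(_pet_ids), "region": region}
```

Keeping the identifier inside the returned record lets later stages store each module's recognition result against the same pet, as the summing step requires.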
Further, the in-elevator pet detection program when executed by the processor further performs the following operations:
the step of adjusting the image after the image segmentation to obtain the preprocessed image comprises:
and adjusting the pixel size of the pet area image to a preset size through a preset image preprocessing module to obtain a preprocessed image.
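A minimal resize to a fixed network input size can be done with nearest-neighbour sampling; real systems would typically use a library routine such as OpenCV's resize. The 224x224 preset below is an assumed input size, not one stated in the patent.

```python
def resize_nearest(image, preset=(224, 224)):
    """Resize a pet area image (a list of pixel rows) to a preset
    size with nearest-neighbour sampling, so every preprocessed
    image reaches the recognition modules at the same pixel size."""
    th, tw = preset
    h, w = len(image), len(image[0])
    return [[image[r * h // th][c * w // tw] for c in range(tw)]
            for r in range(th)]
```

Normalizing the pixel size here is what lets the downstream recognition modules operate on a fixed input shape regardless of how large the capture frame was.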
Further, the in-elevator pet detection program when executed by the processor further performs the following operations:
the step of recognizing the preprocessed image by a plurality of preset image recognition modules comprises:
and respectively identifying the pet categories in the preprocessed image through a plurality of preset image identification modules to obtain the respective identified pet categories and confidence degrees, and storing the pet categories and the confidence degrees in association with the corresponding pet identifications.
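Running every preset recognition module over each preprocessed image and keying the results by pet identifier can be sketched as below; the module callables returning a `(category, confidence)` pair are an assumed interface, since the patent does not specify one.

```python
def recognize_all(preprocessed, modules):
    """Run every preset recognition module over each preprocessed pet
    image and store (module, category, confidence) under the pet id,
    keeping results associated with the corresponding pet identifier."""
    results = {}
    for pet_id, image in preprocessed.items():
        for name, module in modules.items():
            category, confidence = module(image)
            results.setdefault(pet_id, []).append((name, category, confidence))
    return results
```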
Further, the in-elevator pet detection program when executed by the processor further performs the following operations:
the step of summing the recognition results to obtain the pet category recognition result comprises:
summing, for the same pet within the same time period, the pet categories and confidence degrees identified by the two image recognition modules to obtain a summation result;
and obtaining the pet category identification result according to the summation result.
Further, the in-elevator pet detection program when executed by the processor further performs the following operations:
the step of obtaining the pet category identification result according to the summation result comprises:
and selecting the pet category with the highest summed confidence as the pet category identification result for the pet with the same pet identifier according to the summation result.
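The summing-and-selection rule amounts to a confidence-weighted vote. A minimal sketch, assuming each recognition entry is a `(module, category, confidence)` tuple for one pet in one time period:

```python
from collections import defaultdict

def vote_category(recognitions):
    """Sum the per-category confidences reported for one pet by the
    recognition modules, then pick the category with the highest
    summed confidence as the pet category identification result."""
    totals = defaultdict(float)
    for _module, category, confidence in recognitions:
        totals[category] += confidence
    return max(totals, key=totals.get)
```

Note that a category seen by both modules can beat a single higher-confidence detection: two moderate "dog" scores outvote one strong "cat" score, which is the point of summing before selecting.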
Further, the in-elevator pet detection program when executed by the processor further performs the following operations:
before the step of dynamically acquiring a single-frame image in an elevator, the method further comprises:
calling a preset camera to collect video in an elevator, and converting the video in the elevator into a single-frame image;
and calling the preset camera to upload the single-frame images to a local database frame by frame for storage.
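The video-to-frame step can be sketched with a generator; the `read` callable below mimics the `(ok, frame)` return convention of OpenCV's `cv2.VideoCapture.read()`, and the list standing in for the local database is an assumption for illustration.

```python
def frames_from_capture(read):
    """Yield single frames from a capture source until the video
    ends; `read` returns (ok, frame), with ok=False at end of stream."""
    while True:
        ok, frame = read()
        if not ok:
            break
        yield frame

def store_frames(frames, local_db):
    """Upload single-frame images frame by frame to a local store
    (a plain list stands in for the local database here)."""
    for number, frame in enumerate(frames):
        local_db.append({"frame": number, "image": frame})
```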
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for enabling a terminal device (e.g., a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method for detecting pets in an elevator, characterized by comprising the following steps:
dynamically acquiring a single-frame image in an elevator;
performing image segmentation on the single-frame image through a preset image preprocessing module, and performing image adjustment on the image after image segmentation to obtain a preprocessed image;
identifying the preprocessed images through a plurality of preset image identification modules to obtain identification results, and summing the identification results to obtain pet category identification results;
and sending the pet category identification result to a management terminal associated with the elevator so as to enable a manager to take countermeasures.
2. The method for detecting pets in elevator according to claim 1, wherein said step of image-segmenting said single frame image by a preset image preprocessing module comprises:
generating a pet area capture frame in the single-frame image through a preset image preprocessing module;
and segmenting the single-frame image through the pet area capture frame to obtain a pet area image, and assigning a pet identifier to the pet in the pet area image.
3. The method for detecting pets in an elevator according to claim 2, wherein the step of performing image adjustment on the image after image segmentation to obtain a preprocessed image comprises:
and adjusting the pixel size of the pet area image to a preset size through a preset image preprocessing module to obtain a preprocessed image.
4. The method for detecting pets in an elevator according to claim 3, wherein said step of recognizing said preprocessed images by a plurality of preset image recognition modules comprises:
and respectively identifying the pet categories in the preprocessed image through a plurality of preset image identification modules to obtain the respective identified pet categories and confidence degrees, and storing the pet categories and the confidence degrees in association with the corresponding pet identifications.
5. The method of detecting pets in an elevator according to claim 4, wherein said step of summing said identification results to obtain a pet category identification result comprises:
summing, for the same pet within the same time period, the pet categories and confidence degrees identified by the image recognition modules to obtain a summation result;
and obtaining the pet category identification result according to the summation result.
6. The method for detecting pets in elevator according to claim 5, wherein the step of obtaining the result of identifying the pet category according to the summation result comprises:
and selecting the pet category with the highest summed confidence as the pet category identification result for the pet with the same pet identifier according to the summation result.
7. The method of claim 6, wherein before the step of dynamically acquiring a single-frame image within the elevator, the method further comprises:
calling a preset camera to collect video in an elevator, and converting the video in the elevator into a single-frame image;
and calling the preset camera to upload the single-frame images to a local database frame by frame for storage.
8. A system for detecting pets in an elevator, characterized by comprising a cloud server, a cloud database, a first edge computing device, a second edge computing device, a first local database, a second local database, a first camera, a second camera and a display terminal;
the cloud server is respectively in network connection with the cloud database, the first edge computing device and the second edge computing device;
the first edge computing device is respectively in network connection with the first local database, the first camera and the second camera;
the second edge computing device is respectively connected with the second local database and the display terminal through a network;
the in-elevator pet detection system comprises an in-elevator pet detection program, and the in-elevator pet detection program realizes the steps of the in-elevator pet detection method according to any one of claims 1 to 7 when being executed by a processor.
9. An in-elevator pet detection device, characterized in that, the in-elevator pet detection device includes: a memory, a processor, and an in-elevator pet detection program stored on the memory and executable on the processor, the in-elevator pet detection program when executed by the processor implementing the steps of the in-elevator pet detection method of any one of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has an in-elevator pet detection program stored thereon, which when executed by a processor, implements the steps of the in-elevator pet detection method according to any one of claims 1 to 7.
CN202111566168.5A 2021-12-20 2021-12-20 Method, system and device for detecting pets in elevator and computer readable storage medium Pending CN114332925A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111566168.5A CN114332925A (en) 2021-12-20 2021-12-20 Method, system and device for detecting pets in elevator and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111566168.5A CN114332925A (en) 2021-12-20 2021-12-20 Method, system and device for detecting pets in elevator and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114332925A true CN114332925A (en) 2022-04-12

Family

ID=81055237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111566168.5A Pending CN114332925A (en) 2021-12-20 2021-12-20 Method, system and device for detecting pets in elevator and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114332925A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116439155A (en) * 2023-06-08 2023-07-18 北京积加科技有限公司 Pet accompanying method and device
CN116439155B (en) * 2023-06-08 2024-01-02 北京积加科技有限公司 Pet accompanying method and device
CN116863409A (en) * 2023-09-05 2023-10-10 苏州德菱邑铖精工机械股份有限公司 Intelligent elevator safety management method and system based on cloud platform
CN116863409B (en) * 2023-09-05 2023-11-07 苏州德菱邑铖精工机械股份有限公司 Intelligent elevator safety management method and system based on cloud platform

Similar Documents

Publication Publication Date Title
US8792722B2 (en) Hand gesture detection
CN109484935B (en) Elevator car monitoring method, device and system
US8750573B2 (en) Hand gesture detection
CN114332925A (en) Method, system and device for detecting pets in elevator and computer readable storage medium
CN112100431B (en) Evaluation method, device and equipment of OCR system and readable storage medium
CN109544870B (en) Alarm judgment method for intelligent monitoring system and intelligent monitoring system
US11164028B2 (en) License plate detection system
CN108090908B (en) Image segmentation method, device, terminal and storage medium
US11277591B2 (en) Surveillance system, surveillance network construction method, and program
CN111191507A (en) Safety early warning analysis method and system for smart community
JP6437217B2 (en) Image output device, image management system, image processing method, and program
CN112329504A (en) Door opening and closing state monitoring method, device, equipment and computer readable storage medium
KR101360999B1 (en) Real time data providing method and system based on augmented reality and portable terminal using the same
CN114187561A (en) Abnormal behavior identification method and device, terminal equipment and storage medium
EP3929804A1 (en) Method and device for identifying face, computer program, and computer-readable storage medium
US10535154B2 (en) System, method, and program for image analysis
KR101236266B1 (en) Manage system of parking space information
CN108881846B (en) Information fusion method and device and computer readable storage medium
CN114360055A (en) Behavior detection method, device and storage medium based on artificial intelligence
CN107749942A (en) Suspension image pickup method, mobile terminal and computer-readable recording medium
US10931923B2 (en) Surveillance system, surveillance network construction method, and program
CN113378836A (en) Image recognition method, apparatus, device, medium, and program product
CN112733722A (en) Gesture recognition method, device and system and computer readable storage medium
TW202046169A (en) Electronic device and face recognition method
JP2016021716A (en) Tracking device and control method of the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination