CN115103110A - Household intelligent monitoring method based on edge calculation - Google Patents


Info

Publication number
CN115103110A
CN115103110A (application CN202210656869.6A)
Authority
CN
China
Prior art keywords
target object
main
camera
edge computing
main camera
Prior art date
Legal status (assumed, not a legal conclusion): Granted
Application number
CN202210656869.6A
Other languages
Chinese (zh)
Other versions
CN115103110B (en)
Inventor
张腾怀
王丹星
兰雨晴
余丹
Current Assignee (list may be inaccurate)
China Standard Intelligent Security Technology Co Ltd
Original Assignee
China Standard Intelligent Security Technology Co Ltd
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by China Standard Intelligent Security Technology Co Ltd filed Critical China Standard Intelligent Security Technology Co Ltd
Priority to CN202210656869.6A priority Critical patent/CN115103110B/en
Publication of CN115103110A publication Critical patent/CN115103110A/en
Application granted granted Critical
Publication of CN115103110B publication Critical patent/CN115103110B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Landscapes

  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention provides a home intelligent monitoring method based on edge computing. All cameras inside a home are connected to the same edge computing terminal, which determines whether a target object has entered a camera's monitoring picture and sets the camera that captures the target object as the main camera device. The main camera device is instructed to track and shoot the target object; the target object's action state and its presence in the main camera device's monitoring picture are then determined, the terminal judges whether a safety accident event is currently occurring and performs an alarm operation, and the camera serving as the main camera device is switched according to the target object's presence in the current monitoring picture. A user can therefore continuously monitor the target object through a mobile terminal without manual switching operations and obtain the target object's state information in time, improving the comprehensiveness, timeliness and reliability of monitoring inside the home.

Description

Household intelligent monitoring method based on edge calculation
Technical Field
The invention relates to the technical field of home monitoring, in particular to a home intelligent monitoring method based on edge computing.
Background
To monitor the entire interior of a home in real time and around the clock, cameras are usually installed in different areas of the house; by connecting a mobile terminal such as a smartphone to the cameras, the user can promptly view images of the home interior. However, the user can view the image from only one camera at a time, so following a specific target object inside the home requires continually switching the connected camera. This makes it inconvenient to continuously track a moving target object and prevents the target object's state information from being obtained in time, reducing the comprehensiveness, timeliness and reliability of monitoring inside the home.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a home intelligent monitoring method based on edge computing: all cameras in a home are connected to the same edge computing terminal; whether a target object enters a camera's monitoring picture is determined; the camera that captures the target object is set as the main camera device and instructed to track and shoot the target object, yielding a moving image of the target object. The moving image is then analyzed to determine the target object's action state and its presence in the main camera device's monitoring picture; whether a safety accident event is currently occurring is judged, with an alarm operation performed through the edge computing terminal; and the camera serving as the main camera device is switched according to the target object's presence in the monitoring picture, realizing continuous monitoring of the target object's activity in different areas of the home.
The invention provides a home intelligent monitoring method based on edge computing, which comprises the following steps:
step S1, all cameras installed in the home are connected to the same edge computing terminal, and the edge computing terminal is instructed to perform camera shooting state initialization operation on all the cameras; and indicating all cameras to enter a monitoring camera working mode through the edge computing terminal;
step S2, analyzing the images shot by each camera, determining whether the target object enters the monitoring picture of a camera, and setting one of the cameras that captured the target object as the main camera device; instructing the main camera device to track and shoot the target object to obtain a moving image of the target object;
step S3, analyzing the moving image of the target object, and determining an action state of the target object and an existence state of the target object on the monitoring screen of the primary camera device; judging whether a safety accident event occurs at present according to the action state of the target object, and carrying out alarm operation through the edge computing terminal;
and step S4, determining whether a camera corresponding to the main camera device needs to be switched and changed according to the existence state of the target object in the monitoring picture of the main camera device, so as to continuously monitor the activity state of the target object in different areas in the home.
Further, in the step S1, accessing all the cameras installed in the home to the same edge computing terminal, and instructing the edge computing terminal to perform the camera shooting state initialization operation on all the cameras specifically includes:
all cameras installed in a home are connected with the same edge computing terminal in a control instruction flow mode and an image data flow mode, so that the edge computing terminal sends control instructions to each camera independently and each camera uploads monitoring image data to the edge computing terminal independently;
and indicating the edge computing terminal to send a camera shooting initialization instruction to all the cameras, so that each camera recovers to a preset shooting focal length and a preset shooting field angle state.
Further, in the step S1, instructing, by the edge computing terminal, all the cameras to enter the monitoring camera shooting operating mode specifically includes:
and indicating all the cameras to enter a monitoring scanning camera working mode through the edge computing terminal, wherein the scanning camera shooting period of the monitoring scanning camera shooting working mode of each camera is determined by the indoor space size of each camera.
Further, in step S2, analyzing the image captured by each camera to determine whether the target object enters into the monitoring screen of the camera, and setting one of the cameras capturing the target object as the primary camera specifically includes:
when all cameras enter a monitoring camera working mode, acquiring images shot by each camera, performing first identification analysis processing on each image, and screening out a primary screening image set containing a target object in an image picture; performing second recognition analysis processing on all images contained in the primary screening image set, and determining the body area value of the target object existing in the image picture of each image; and setting the camera corresponding to the image with the maximum target object body area value as the main camera equipment.
Further, in step S2, instructing the main imaging device to perform tracking shooting on the target object, and obtaining the moving image of the target object specifically includes:
indicating an infrared sensor on the main camera equipment to track and position the target object, and determining the position of the target object in the home;
and indicating the main camera equipment to carry out tracking shooting according to the position of the target object, and adjusting the shooting focal length of the main camera equipment according to the distance between the target object and the main camera equipment in the tracking shooting process.
Further, in step S2, the adjusting the shooting focal length of the main image capturing apparatus according to the distance between the target object and the main image capturing apparatus specifically includes:
pre-focusing is performed in the interval between two successive detections of the distance between the target object and the main camera device, as follows:
step S201, using the following formula (1), predicting the distance between the target object and the main camera device at the next detection according to the distance between the target object and the main camera device obtained at the current detection,
[Formula (1) appears only as an image in the original document.]
in the above formula (1), l(t+T) represents the predicted distance between the target object and the main camera device at time t+T, the next detection; l(t) represents the distance between the target object and the main camera device at time t, the current detection; l(t−a·T) represents the distance between the target object and the main camera device detected at time t−a·T; T represents the interval from the main camera device shooting the target object to the distance between them being calculated; t represents the current time, where t > nT and t mod T = 0; n represents the number of historical samples fitted when detecting the distance between the target object and the main camera device; a represents a preset integer variable;
step S202, using the following formula (2), pre-focusing is performed on the main image pickup apparatus based on the distance between the target object predicted at the next detection and the main image pickup apparatus and the distance between the target object obtained at the current detection and the main image pickup apparatus,
[Formula (2) appears only as an image in the original document.]
in the above formula (2), f(t+T) represents the focal length to which the main camera device is pre-focused at time t+T; f(t) represents the focal length of the main camera device at time t; Δf(t−a·T) represents the difference between the actual focal length of the main camera device after pre-focusing at time t−a·T and its desired pre-focusing focal length at time t−a·T;
step S203, before the next detection of the distance between the target object and the main camera device, the focal length of the main camera device is adjusted to f(t+T); after the next detection is finished, the actual focal length value of the main camera device is obtained from the actually detected distance using the following formula (3), so as to calculate the difference between the actual focal length value and the desired pre-focusing focal length value,
[Formula (3) appears only as an image in the original document.]
in the above formula (3), Δf(t+T) represents the difference between the actual focal length value adjusted by the main camera device at time t+T and the desired pre-focusing focal length value at time t+T; l(t+T) represents the actually detected distance between the target object and the main camera device at time t+T;
through the above process, the focal length of the main camera device is adjusted to f(t+T) at time t+T to realize pre-focusing; the main camera device is then adjusted to an actual focal length value according to the actually detected distance l(t+T), and the difference between that actual focal length value and the desired pre-focusing focal length value at time t+T is calculated, yielding fitting reference data for subsequent pre-focusing. [The intermediate expression appears only as an image in the original document.]
Further, in step S3, the analyzing the moving image of the target object and determining the action state of the target object and the existence state of the target object on the monitoring screen of the main imaging device specifically include:
performing limb action posture recognition analysis processing on the target object moving image to determine a limb action posture of the target object;
and carrying out target object body contour recognition analysis processing on the target object moving image, and determining the existing area value of the body of the target object in the monitoring picture of the main camera equipment.
Further, in step S3, judging whether a safety accident event is currently occurring according to the action state of the target object and performing an alarm operation through the edge computing terminal specifically includes:
judging whether the target object carries out dangerous action or whether the target object falls down currently according to the limb action posture of the target object;
and when the target object is determined to be in dangerous action behavior at present or fall at present, sending an alarm message to the mobile terminal through the edge computing terminal.
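The alarm rule above can be sketched as a small dispatch function. This is a hypothetical Python illustration: the posture labels and the alert-sending callback are invented, since the patent specifies only that dangerous actions or falls trigger a message to the mobile terminal via the edge computing terminal.

```python
# Hypothetical sketch of step S3's alarm rule: if the recognized limb
# action posture indicates a dangerous action or a fall, the edge
# terminal pushes an alert to the user's mobile terminal. The posture
# labels and the send callback are illustrative, not from the patent.

DANGEROUS_POSTURES = {"fall", "climbing", "holding_sharp_object"}

def check_and_alert(posture, send_alert):
    """Return True if an alarm was dispatched for this posture."""
    if posture in DANGEROUS_POSTURES:
        send_alert(f"Safety event detected: {posture}")
        return True
    return False

sent = []
check_and_alert("fall", sent.append)     # dangerous: alert dispatched
check_and_alert("sitting", sent.append)  # normal: no alert
print(sent)  # ['Safety event detected: fall']
```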
Further, in step S4, determining whether a camera corresponding to the main imaging device needs to be changed or not according to the presence state of the target object in the monitoring screen of the main imaging device, so as to continuously monitor the activity state of the target object in different areas inside the home specifically includes:
comparing the area value of the target object's body present in the main camera device's monitoring picture with a preset area threshold; if the area value is smaller than the threshold, the camera serving as the main camera device is re-selected according to step S2, and the newly determined main camera device is instructed to track and shoot the target object, thereby continuously monitoring the target object's activity in different areas inside the home.
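The switching decision above can be sketched as follows. This is a hypothetical Python illustration: the threshold and the area values are invented numbers, and the re-selection simply reuses the "largest body area" criterion from step S2.

```python
# Hypothetical sketch of step S4: if the target's body-area value in
# the main camera device's picture drops below a preset threshold, the
# step-S2 selection is re-run over all cameras' current frames.
# Threshold and area values are illustrative, not from the patent.

AREA_THRESHOLD = 0.05  # fraction of the frame; preset per installation

def maybe_switch(main_id, areas, threshold=AREA_THRESHOLD):
    """Return the id of the camera that should be main after this frame."""
    if areas.get(main_id, 0.0) >= threshold:
        return main_id                  # target still well inside the frame
    # Target is leaving the frame: pick the camera seeing the most of it.
    return max(areas, key=areas.get)

# Target walks from the living room (cam 0) into the hallway (cam 2):
print(maybe_switch(0, {0: 0.30, 1: 0.00, 2: 0.02}))  # 0 (no switch)
print(maybe_switch(0, {0: 0.01, 1: 0.00, 2: 0.22}))  # 2 (switch)
```

Because the newly selected camera immediately becomes the main camera device, the user's mobile terminal keeps showing the target without any manual switching.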
Compared with the prior art, the edge-computing-based home intelligent monitoring method connects all cameras in a home to the same edge computing terminal, determines whether a target object enters a camera's monitoring picture, sets the camera that captures the target object as the main camera device, and instructs the main camera device to track and shoot the target object to obtain its moving image. The moving image is analyzed to determine the target object's action state and its presence in the main camera device's monitoring picture; whether a safety accident event is currently occurring is judged, with an alarm operation performed through the edge computing terminal; and the camera serving as the main camera device is switched according to the target object's presence in the monitoring picture, realizing continuous monitoring of the target object's activity in different areas of the home.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic flow chart of a home intelligent monitoring method based on edge computing according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a home intelligent monitoring method based on edge computing according to an embodiment of the present invention. The intelligent home monitoring method based on edge computing comprises the following steps:
step S1, all cameras installed in the home are connected to the same edge computing terminal, and the edge computing terminal is instructed to perform camera shooting state initialization operation on all the cameras; and indicating all cameras to enter a monitoring camera working mode through the edge computing terminal;
step S2, analyzing the images shot by each camera, determining whether the target object enters the monitoring picture of a camera, and setting one of the cameras that captured the target object as the main camera device; instructing the main camera device to track and shoot the target object to obtain a moving image of the target object;
step S3, analyzing the moving image of the target object, and determining an action state of the target object and an existence state of the target object on the monitoring screen of the main camera device; judging whether a safety accident event occurs at present according to the action state of the target object, and carrying out alarm operation through the edge computing terminal;
and step S4, determining whether a camera corresponding to the main camera needs to be switched and changed according to the existence state of the target object in the monitoring picture of the main camera, so as to continuously monitor the activity state of the target object in different areas in the home.
The beneficial effects of the above technical scheme are: the home intelligent monitoring method based on edge computing connects all cameras in a home to the same edge computing terminal, determines whether a target object enters a camera's monitoring picture, sets the camera that captures the target object as the main camera device, and instructs the main camera device to track and shoot the target object to obtain its moving image. The moving image is analyzed to determine the target object's action state and its presence in the main camera device's monitoring picture; whether a safety accident event is currently occurring is judged, with an alarm operation performed through the edge computing terminal; and the camera serving as the main camera device is switched according to the target object's presence in the monitoring picture, realizing continuous monitoring of the target object's activity in different areas of the home.
Preferably, in step S1, accessing all the cameras installed in the home to the same edge computing terminal, and instructing the edge computing terminal to perform the operation of initializing the camera shooting states of all the cameras specifically includes:
all cameras installed in a home are connected with the same edge computing terminal in a control instruction flow mode and an image data flow mode, so that the edge computing terminal sends control instructions to each camera independently and each camera uploads monitoring image data to the edge computing terminal independently;
and indicating the edge computing terminal to send a camera shooting initialization instruction to all the cameras, so that each camera recovers to a preset shooting focal length and a preset shooting field angle state.
The beneficial effects of the above technical scheme are: in practice, cameras can be installed in the living room, kitchen, bedroom and passageway areas inside the home, with all of them connected to the same edge computing terminal. The terminal can then uniformly receive the monitoring images shot by all cameras and adjust the monitoring shooting state of each camera separately, improving the flexibility of monitoring shooting across the whole home.
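The per-camera wiring described above can be sketched as follows. This is a hypothetical Python illustration, not the patent's implementation: the `EdgeTerminal` class and its methods are invented names standing in for the separate control-instruction stream (terminal to camera) and image-data stream (camera to terminal) that step S1 establishes for each camera.

```python
# Hypothetical sketch of the step-S1 wiring: each camera gets its own
# control channel (terminal -> camera) and data channel (camera ->
# terminal), so commands and image uploads are independent per camera.
# All names are illustrative, not from the patent.

class EdgeTerminal:
    def __init__(self):
        self.control = {}   # cam_id -> list of commands sent to that camera
        self.frames = {}    # cam_id -> list of frames uploaded by that camera

    def register(self, cam_id):
        self.control[cam_id] = []
        self.frames[cam_id] = []

    def send_command(self, cam_id, cmd):
        self.control[cam_id].append(cmd)   # per-camera control stream

    def upload(self, cam_id, frame):
        self.frames[cam_id].append(frame)  # per-camera data stream

term = EdgeTerminal()
for cam_id in ("living_room", "kitchen", "bedroom"):
    term.register(cam_id)
    term.send_command(cam_id, "init")      # restore preset focal length / FOV
term.upload("kitchen", "frame-0")
print(term.control["bedroom"], term.frames["kitchen"])  # ['init'] ['frame-0']
```

Keeping the two streams separate is what lets the terminal retune one camera's shooting state without interrupting the uploads from the others.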
Preferably, in step S1, instructing, by the edge computing terminal, all the cameras to enter the monitoring camera operation mode specifically includes:
and indicating all the cameras to enter a monitoring scanning camera shooting working mode through the edge computing terminal, wherein the scanning camera shooting period of the monitoring scanning camera shooting working mode of each camera is determined by the size of the indoor space where each camera is located.
The beneficial effects of the above technical scheme are: the edge computing terminal sends a monitoring shooting instruction to each camera, which then enters the monitoring scanning working mode and scans and shoots the area where it is located. The scanning period of each camera is determined by the size of the indoor space it covers: the larger the space, the longer the scanning period, i.e. the camera sweeps its space at a lower scanning speed, improving the comprehensiveness of indoor shooting.
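The room-size dependency above can be sketched as a simple mapping. The patent only states that larger rooms get longer scanning periods; the linear form and its constants below are assumptions for illustration.

```python
# Hypothetical sketch: the scanning period of a camera's monitoring
# scanning mode grows with the floor area of the room it covers, so a
# larger room is swept more slowly and therefore more completely.
# The linear mapping and the constants are illustrative assumptions;
# the patent states only the monotonic dependency.

def scan_period_seconds(room_area_m2, base=10.0, per_m2=0.5):
    """Larger room -> longer sweep period (slower pan)."""
    return base + per_m2 * room_area_m2

print(scan_period_seconds(12))  # small bedroom -> 16.0
print(scan_period_seconds(40))  # large living room -> 30.0
```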
Preferably, in step S2, analyzing the image captured by each camera to determine whether the target object enters into the monitoring screen of the camera, and setting one of the cameras capturing the target object as the primary camera specifically includes:
when all cameras enter a monitoring camera working mode, acquiring images shot by each camera, performing first identification analysis processing on each image, and screening out a primary screening image set containing a target object in an image picture; performing second recognition analysis processing on all images contained in the primary screening image set, and determining the body area value of the target object existing in the image picture of each image; and setting the camera corresponding to the image with the maximum target object body area value as the main camera equipment.
The beneficial effects of the above technical scheme are: after all cameras enter the monitoring working mode, the two recognition analysis passes over the images shot by the cameras screen out the image in which the target object's body area value is largest, indicating that the target object is active in the space corresponding to that image. The corresponding camera is then set as the main camera device, so the user can monitor the target object in real time through it without repeatedly switching to other cameras' pictures to find the target object.
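The two-pass selection can be sketched as below. This is a hypothetical Python illustration: the detector and the body-area measurement are stubbed out, since the patent does not specify the recognition algorithms, and all names are invented.

```python
# Hypothetical sketch of the two-pass selection in step S2: pass one
# (primary screening) keeps only frames in which the target object is
# detected; pass two measures the target's body area in each kept frame
# and picks the camera with the largest value as the main camera device.
# The detector and area function are stubs; the values are illustrative.

def select_main_camera(frames, detect, body_area):
    # Pass 1: primary screening image set - frames containing the target.
    candidates = {cam: img for cam, img in frames.items() if detect(img)}
    if not candidates:
        return None  # target not visible on any camera
    # Pass 2: camera whose frame shows the largest body-area value wins.
    return max(candidates, key=lambda cam: body_area(candidates[cam]))

frames = {"hall": {"area": 0.04}, "kitchen": {"area": 0.31}, "yard": {"area": 0.0}}
detect = lambda img: img["area"] > 0        # stub person detector
body_area = lambda img: img["area"]         # stub body-area measurement
print(select_main_camera(frames, detect, body_area))  # kitchen
```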
Preferably, in step S2, instructing the primary imaging device to perform tracking shooting on the target object, and obtaining a moving image of the target object specifically includes:
indicating an infrared sensor on the main camera equipment to track and position the target object, and determining the position of the target object in the home;
and indicating the main camera equipment to carry out tracking shooting according to the position of the target object, and adjusting the shooting focal length of the main camera equipment according to the distance between the target object and the main camera equipment in the tracking shooting process.
The beneficial effects of the above technical scheme are: the edge computing terminal sends a control instruction to the main camera device, directing its infrared sensor to track and position the target object and determine its position inside the home. The main camera device can then use that position as a reference for zoom-type tracking shooting: when the target object is far from the main camera device it enters a telephoto shooting state, and when the target object is near it enters a wide-angle shooting state, ensuring that the target object always remains within the camera's field of view.
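The far/near zoom rule can be sketched as a small decision function. The distance cutoffs below are illustrative assumptions; the patent states only that far targets trigger a telephoto state and near targets a wide-angle state.

```python
# Hypothetical sketch of the zoom policy above: telephoto when the
# target is far from the main camera device, wide-angle when near.
# The cutoff distances are assumptions; the patent gives only the rule.

def zoom_state(distance_m, near=2.0, far=5.0):
    if distance_m >= far:
        return "telephoto"   # far target: narrow FOV, long focal length
    if distance_m <= near:
        return "wide"        # near target: wide FOV, short focal length
    return "normal"

print(zoom_state(7.5))  # telephoto
print(zoom_state(1.2))  # wide
```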
Preferably, in step S2, the adjusting the shooting focal length of the main image pickup apparatus according to the distance between the target object and the main image pickup apparatus specifically includes:
pre-focusing is performed in the interval between two successive detections of the distance between the target object and the main camera device, as follows:
step S201, using the following formula (1), predicting the distance between the target object and the main camera device at the next detection according to the distance between the target object and the main camera device obtained at the current detection,
[Formula (1) appears only as an image in the original document.]
in the above formula (1), l(t+T) represents the predicted distance between the target object and the main camera device at time t+T, the next detection; l(t) represents the distance between the target object and the main camera device at time t, the current detection; l(t−a·T) represents the distance between the target object and the main camera device detected at time t−a·T; T represents the interval from the main camera device shooting the target object to the distance between them being calculated; t represents the current time, where t > nT and t mod T = 0; n represents the number of historical samples fitted when detecting the distance between the target object and the main camera device; a represents a preset integer variable;
step S202, using the following formula (2), pre-focusing is performed on the main image pickup apparatus based on the distance between the target object predicted at the next detection and the main image pickup apparatus and the distance between the target object obtained at the current detection and the main image pickup apparatus,
[Formula (2) appears only as an image in the original document.]
in the above formula (2), f(t+T) represents the focal length to which the main camera device is pre-focused at time t+T; f(t) represents the focal length of the main camera device at time t; Δf(t−a·T) represents the difference between the actual focal length of the main camera device after pre-focusing at time t−a·T and its desired pre-focusing focal length at time t−a·T;
step S203, before the next detection of the distance between the target object and the main camera device, the focal length of the main camera device is adjusted to f(t+T); after the next detection is finished, the actual focal length value of the main camera device is obtained from the actually detected distance using the following formula (3), so as to calculate the difference between the actual focal length value and the desired pre-focusing focal length value,
[Formula (3) appears only as an image in the original document.]
in the above formula (3), Δf(t+T) represents the difference between the actual focal length value adjusted by the main camera device at time t+T and the desired pre-focusing focal length value at time t+T; l(t+T) represents the actually detected distance between the target object and the main camera device at time t+T;
Through the above process, the focal length of the main camera device is adjusted to f(t+T) at time t+T to realize pre-focusing; the main camera device is then adjusted to the actual focal length value determined from the actually detected distance L(t+T) between the target object and the main camera device (the corresponding expression is published as an image, Figure BDA0003688329620000112), and the difference between this actual focal length value at time t+T and the desired pre-focus focal length value at time t+T is calculated, providing fitting reference data for subsequent pre-focusing.
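Formulas (2) and (3) are likewise published only as images. The description states that the pre-focus value is derived from the predicted distance and corrected using historical focusing differences Δf, and that after each detection the new error is logged as fitting reference data. A hedged sketch of that loop, under the assumption of an invented `focus_for_distance` mapping and a simple averaging correction (neither appears in the patent text):

```python
def focus_for_distance(l, f_base=4.0, l_ref=1.0):
    """Hypothetical distance-to-focal-length mapping (for illustration;
    the patent's formulas (2)/(3) are published only as images)."""
    return f_base * l / (l + l_ref)

def prefocus(predicted_l, delta_history, n):
    """Pre-focus setpoint f(t+T): mapped focal length for the predicted
    distance, corrected by the average of recent errors Δf."""
    recent = delta_history[-n:]
    correction = sum(recent) / len(recent) if recent else 0.0
    return focus_for_distance(predicted_l) + correction

def record_error(actual_l, prefocus_f, delta_history):
    """After the next detection, log Δf(t+T) = actual focal length
    (from the measured distance L(t+T)) minus the pre-focus value."""
    actual_f = focus_for_distance(actual_l)
    delta_history.append(actual_f - prefocus_f)
    return actual_f
```

Each cycle thus pre-positions the lens near the expected focal length and accumulates the residual error for weighting the next pre-focus, matching the correction idea described in the text.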
The beneficial effects of the above technical scheme are as follows. Formula (1) predicts the distance between the target object and the main camera device at the next ranging instant from the historically recorded distance values, thereby anticipating how the distance changes during the ranging delay and allowing pre-focusing to eliminate the delay's influence as far as possible. Formula (2) then pre-focuses the main camera device according to the predicted distance at the next ranging instant and the distance measured at the current ranging instant, so that the device is adjusted close to the actual focal length in advance; this avoids damaging the camera device by adjusting the focal length too quickly and prolongs its service life. Finally, formula (3) derives the actual focal length value from the actually measured distance and computes its difference from the pre-focus focal length, so that subsequent pre-focusing can apply a weighted fitting correction based on historical focusing differences; the pre-focus focal length thus approaches the actual focal length more closely, facilitating automatic adjustment and intelligent control of the device.
Preferably, in step S3, the analyzing the moving image of the target object and determining the motion state of the target object and the presence state of the target object on the monitoring screen of the main imaging device specifically include:
performing limb action posture recognition analysis processing on the target object moving image to determine a limb action posture of the target object;
and carrying out body-contour recognition and analysis on the target object moving image, and determining the existing area value of the target object's body in the monitoring picture of the main camera device.
The beneficial effects of the above technical scheme are as follows: through the above method, the limb action posture of the target object during continuous activity, and the area it occupies in the monitoring picture of the main camera device, can be judged quantitatively, providing a reliable basis for subsequent alarming and for switching the main camera device.
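As a sketch of how the "existing area value" could be quantified, assuming the contour-recognition step yields a binary body mask for each frame (the patent does not specify the segmentation algorithm, and the mask representation here is an assumption):

```python
def body_area_fraction(mask):
    """Fraction of the monitoring frame occupied by the detected body.

    mask: 2-D list of 0/1 values from a (hypothetical) body-contour
    segmentation step; 1 marks pixels inside the body contour.
    """
    total = sum(len(row) for row in mask)
    body = sum(sum(row) for row in mask)
    return body / total if total else 0.0
```

The resulting fraction can be compared directly against the preset area threshold used in step S4.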
Preferably, in step S3, judging whether a safety accident event currently occurs according to the action state of the target object, and performing an alarm operation through the edge computing terminal, specifically includes:
judging whether the target object carries out dangerous action or whether the target object falls down currently according to the limb action posture of the target object;
and when it is determined that the target object is currently performing a dangerous action or has currently fallen, sending an alarm message to the mobile terminal through the edge computing terminal.
The beneficial effects of the above technical scheme are as follows: the limb action posture of the target object is compared with preset limb action postures to judge whether the target object is performing a dangerous action or has fallen; the edge computing terminal is then instructed to send an alarm message to the mobile terminal, so that by checking the mobile terminal the user can discover in time whether a safety accident event has occurred inside the home.
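A minimal sketch of the alarm decision, assuming the posture-recognition step emits a discrete label and the edge computing terminal exposes a message-sending callback (the label set, function name, and callback are all hypothetical, not specified by the patent):

```python
# Illustrative labels a posture classifier might emit; not from the patent.
DANGEROUS_POSTURES = {"fall", "climb", "strike"}

def check_and_alarm(posture_label, send_alarm):
    """If the recognized limb posture indicates a dangerous action or a
    fall, dispatch an alarm through the edge computing terminal.

    posture_label: output of a (hypothetical) posture classifier.
    send_alarm:    callable that pushes a message to the mobile terminal.
    Returns True when an alarm was sent.
    """
    if posture_label in DANGEROUS_POSTURES:
        send_alarm(f"Safety event detected: {posture_label}")
        return True
    return False
```

In practice `send_alarm` would wrap whatever push channel the edge terminal uses to reach the mobile terminal.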
Preferably, in step S4, determining whether a camera corresponding to the main imaging device needs to be changed or not according to the presence state of the target object in the monitoring screen of the main imaging device, so as to continuously monitor the activity state of the target object in different areas inside the home specifically includes:
comparing the existing area value of the target object's body in the monitoring picture of the main camera device with a preset area threshold; if the existing area value is smaller than the preset area threshold, switching the camera serving as the main camera device again according to step S2, and then instructing the newly determined main camera device to track and shoot the target object, so as to continuously monitor the activity states of the target object in different areas inside the home.
The beneficial effects of the above technical scheme are as follows: if the existing area value of the target object's body in the monitoring picture of the main camera device is smaller than the preset area threshold, the target object has moved out of the area covered by the main camera device's monitoring shot. Switching the main camera device again according to step S2 allows other cameras in the home to be instructed in time to track the target object, ensuring continuous monitoring of the target object's activity state in different areas inside the home.
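The switching rule above can be sketched as follows, assuming each camera reports the target's current body-area value and that re-selection follows the step S2 rule of choosing the camera seeing the largest body area (function and parameter names are illustrative):

```python
def maybe_switch_main_camera(area_by_camera, current_id, area_threshold):
    """Re-select the main camera when the target leaves its view.

    area_by_camera: dict camera_id -> body-area value of the target in
                    that camera's current frame.
    current_id:     id of the camera currently acting as main camera.
    Returns the id of the camera that should act as main camera.
    """
    if area_by_camera.get(current_id, 0.0) >= area_threshold:
        return current_id  # target still adequately visible; no switch
    # Step S2 rule: pick the camera observing the largest body area.
    return max(area_by_camera, key=area_by_camera.get)
```

The edge computing terminal would run this check each time new area values arrive, then instruct the selected camera to begin tracking shooting.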
As can be seen from the above embodiment, in this edge-computing-based home intelligent monitoring method, all cameras in a home are connected to the same edge computing terminal; whether a target object enters a camera's monitoring picture is determined, the camera that captures the target object is set as the main camera device, and the main camera device is instructed to track and shoot the target object to obtain a target object moving image. The moving image is then analyzed to determine the target object's action state and its presence state in the main camera device's monitoring picture; whether a safety accident event is currently occurring is judged and an alarm operation is performed through the edge computing terminal; and the camera serving as the main camera device is switched according to the target object's presence state in the monitoring picture, achieving continuous monitoring of the target object's activity state in different areas inside the home.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. The intelligent household monitoring method based on edge calculation is characterized by comprising the following steps:
step S1, all cameras installed in the home are connected to the same edge computing terminal, and the edge computing terminal is instructed to perform camera shooting state initialization operation on all the cameras; and indicating all cameras to enter a monitoring camera working mode through the edge computing terminal;
step S2, analyzing the images shot by each camera, determining whether the target object enters into the monitoring picture of the camera, and setting one of the cameras shot to obtain the target object as the main camera equipment; instructing the main camera equipment to perform tracking shooting on a target object to obtain a target object moving image;
step S3, analyzing the moving image of the target object, and determining an action state of the target object and an existence state of the target object on the monitoring screen of the main camera device; judging whether a safety accident event occurs at present according to the action state of the target object, and carrying out alarm operation through the edge computing terminal;
and step S4, determining whether a camera corresponding to the main camera needs to be switched and changed according to the existence state of the target object in the monitoring picture of the main camera, so as to continuously monitor the activity state of the target object in different areas in the home.
2. The intelligent home monitoring method based on edge computing as claimed in claim 1, wherein: in step S1, accessing all the cameras installed in the home to the same edge computing terminal, and instructing the edge computing terminal to perform the camera shooting state initialization operation on all the cameras specifically includes:
all cameras installed in a home are connected with the same edge computing terminal in a control instruction flow mode and an image data flow mode, so that the edge computing terminal sends control instructions to all the cameras independently and each camera uploads monitoring image data to the edge computing terminal independently;
and instructing the edge computing terminal to send a camera shooting initialization instruction to all cameras so that each camera recovers to a preset shooting focal length and a preset shooting field angle state.
3. The intelligent home monitoring method based on edge computing as claimed in claim 2, wherein: in step S1, instructing, by the edge computing terminal, all the cameras to enter a monitoring camera operation mode specifically includes:
and indicating all the cameras to enter a monitoring scanning camera shooting working mode through the edge computing terminal, wherein the scanning camera shooting period of the monitoring scanning camera shooting working mode of each camera is determined by the size of the indoor space where each camera is located.
4. The intelligent home monitoring method based on edge computing as claimed in claim 1, wherein: in step S2, analyzing the image captured by each camera to determine whether the target object enters into the monitoring screen of the camera, and setting one of the cameras that captures the target object as the primary camera specifically includes:
when all the cameras enter a monitoring camera working mode, acquiring images shot by each camera, performing first identification analysis processing on each image, and screening out a primary screening image set containing a target object in an image picture; performing second recognition analysis processing on all images contained in the primary screening image set, and determining the body area value of the target object existing in the image picture of each image; and setting the camera corresponding to the image with the maximum target object body area value as the main camera equipment.
5. The intelligent home monitoring method based on edge computing as claimed in claim 4, wherein: in step S2, instructing the main imaging device to perform tracking shooting on the target object, and obtaining a moving image of the target object specifically includes:
indicating an infrared sensor on the main camera equipment to track and position the target object and determining the position of the target object in the home;
and indicating the main camera equipment to carry out tracking shooting according to the position of the target object, and adjusting the shooting focal length of the main camera equipment according to the distance between the target object and the main camera equipment in the tracking shooting process.
6. The intelligent home monitoring method based on edge computing as claimed in claim 5, wherein: in step S2, the adjusting the shooting focal length of the main image capturing apparatus according to the distance between the target object and the main image capturing apparatus specifically includes:
performing pre-focusing processing within a time of detecting a distance between a target object and the main image pickup apparatus twice, the process being:
step S201, using the following formula (1), predicting the distance between the target object and the main camera device at the next detection according to the distance between the target object and the main camera device obtained at the current detection,
[Formula (1) is published as an image (Figure FDA0003688329610000031) and is not reproduced here.]
in the above formula (1), l(t+T) represents the predicted distance between the target object and the main camera device at time t+T, corresponding to the next detection; l(t) represents the distance between the target object and the main camera device at time t, corresponding to the current detection; l(t−a×T) represents the distance between the target object and the main camera device detected at time t−a×T; T represents the interval from the main camera device capturing the target object until the distance between the target object and the main camera device has been computed; t represents the current time, where t > nT and t mod T = 0 (t leaves no remainder when divided by T); n represents the number of historical distance samples used for fitting; a represents a preset integer index;
step S202, using the following formula (2), pre-focus the main camera device according to the distance between the target object and the main camera device predicted for the next detection and the distance between the target object and the main camera device obtained by the current detection,
[Formula (2) is published as an image (Figure FDA0003688329610000032) and is not reproduced here.]
in the above formula (2), f(t+T) indicates that the focal length of the main camera device is pre-focused to f(t+T) at time t+T; f(t) denotes the focal length of the main camera device at time t; Δf(t−a×T) represents the difference between the actual focal length of the main camera device after pre-focusing at time t−a×T and the desired pre-focus focal length of the main camera device at time t−a×T;
step S203, before the next detection of the distance between the target object and the main camera device, adjust the focal length of the main camera device to f(t+T); after the next detection of the distance is finished, obtain the actual focal length value of the main camera device from the actually detected distance between the target object and the main camera device using the following formula (3), and compute the difference between this actual focal length value and the desired pre-focus focal length value,
[Formula (3) is published as an image (Figure FDA0003688329610000041) and is not reproduced here.]
in the above formula (3), Δf(t+T) represents the difference between the actual focal length value of the main camera device at time t+T and the desired pre-focus focal length value at time t+T; L(t+T) represents the distance between the target object actually detected at time t+T and the main camera device;
Through the above process, the focal length of the main camera device is adjusted to f(t+T) at time t+T to realize pre-focusing; the main camera device is then adjusted to the actual focal length value determined from the actually detected distance L(t+T) between the target object and the main camera device (the corresponding expression is published as an image, Figure FDA0003688329610000042), and the difference between this actual focal length value at time t+T and the desired pre-focus focal length value at time t+T is calculated, so as to obtain fitting reference data for pre-focusing.
7. The intelligent home monitoring method based on edge computing as claimed in claim 1, wherein: in step S3, the analyzing the moving image of the target object and determining the action state of the target object and the existence state of the target object on the monitoring screen of the main imaging device specifically include:
performing limb action posture recognition analysis processing on the target object moving image to determine a limb action posture of the target object;
and carrying out target object body contour recognition analysis processing on the target object moving image, and determining the existing area value of the body of the target object in the monitoring picture of the main camera equipment.
8. The intelligent home monitoring method based on edge computing as claimed in claim 7, wherein: in step S3, determining whether a safety accident event occurs at present according to the action state of the target object, so that performing an alarm operation through the edge computing terminal specifically includes:
judging whether the target object carries out dangerous action or whether the target object falls down currently according to the limb action posture of the target object;
and when the target object is determined to be currently doing dangerous action behaviors or the current falling condition, sending an alarm message to the mobile terminal through the edge computing terminal.
9. The intelligent home monitoring method based on edge computing as claimed in claim 8, wherein: in step S4, determining whether it is necessary to switch and change the camera corresponding to the main imaging device according to the presence state of the target object in the monitoring screen of the main imaging device, so as to continuously monitor the activity state of the target object in different areas inside the home specifically includes:
comparing the existing area value of the body of the target object in the monitoring picture of the main camera equipment with a preset area threshold value, and if the existing area value is smaller than the preset area threshold value, switching and changing the camera corresponding to the main camera equipment again according to the step S2; and then, the redetermined main camera equipment is instructed to carry out tracking shooting on the target object, so that the activity states of the target object in different areas in the home are continuously monitored.
CN202210656869.6A 2022-06-10 2022-06-10 Household intelligent monitoring method based on edge calculation Active CN115103110B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210656869.6A CN115103110B (en) 2022-06-10 2022-06-10 Household intelligent monitoring method based on edge calculation


Publications (2)

Publication Number Publication Date
CN115103110A true CN115103110A (en) 2022-09-23
CN115103110B CN115103110B (en) 2023-07-04

Family

ID=83291210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210656869.6A Active CN115103110B (en) 2022-06-10 2022-06-10 Household intelligent monitoring method based on edge calculation

Country Status (1)

Country Link
CN (1) CN115103110B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107277459A (en) * 2017-07-29 2017-10-20 安徽博威康信息技术有限公司 A kind of camera views switching method recognized based on characteristics of human body with target following
CN108259703A (en) * 2017-12-31 2018-07-06 深圳市秦墨科技有限公司 A kind of holder with clapping control method, device and holder
CN108495028A (en) * 2018-03-14 2018-09-04 维沃移动通信有限公司 A kind of camera shooting focus adjustment method, device and mobile terminal
CN111385466A (en) * 2018-12-30 2020-07-07 浙江宇视科技有限公司 Automatic focusing method, device, equipment and storage medium
CN111857188A (en) * 2020-07-21 2020-10-30 南京航空航天大学 Aerial remote target follow-shooting system and method
CN112822444A (en) * 2021-01-05 2021-05-18 浪潮软件科技有限公司 Intelligent home security monitoring system and method based on home edge computing
CN113438457A (en) * 2021-08-26 2021-09-24 广州洛克韦陀安防科技有限公司 Home monitoring method and home monitoring system for improving warning accuracy
CN113705298A (en) * 2021-03-12 2021-11-26 腾讯科技(深圳)有限公司 Image acquisition method and device, computer equipment and storage medium
CN113705417A (en) * 2021-08-23 2021-11-26 深圳市商汤科技有限公司 Image processing method and device, electronic equipment and computer readable storage medium
WO2021261656A1 (en) * 2020-06-25 2021-12-30 주식회사 자비스넷 Apparatus and system for providing security monitoring service based on edge computing, and operation method therefor
WO2022039323A1 (en) * 2020-08-20 2022-02-24 (주)오투원스 Device for high-speed zooming and focusing of camera continuously providing high-quality images by tracking and predicting moving object at high speed, and method for high-speed zooming and focusing of camera using same


Also Published As

Publication number Publication date
CN115103110B (en) 2023-07-04

Similar Documents

Publication Publication Date Title
CN109104561B (en) System and method for tracking moving objects in a scene
CN109040709B (en) Video monitoring method and device, monitoring server and video monitoring system
KR102126498B1 (en) Apparatus, system and method for detecting dangerous situation based on image recognition
JP4241742B2 (en) Automatic tracking device and automatic tracking method
US8531525B2 (en) Surveillance system and method for operating same
CN101166239B (en) Image processing system and method for improving repeatability
EP2549738B1 (en) Method and camera for determining an image adjustment parameter
US8818055B2 (en) Image processing apparatus, and method, and image capturing apparatus with determination of priority of a detected subject and updating the priority
US9041800B2 (en) Confined motion detection for pan-tilt cameras employing motion detection and autonomous motion tracking
US20190295243A1 (en) Method and system for automated video image focus change detection and classification
CN109376601B (en) Object tracking method based on high-speed ball, monitoring server and video monitoring system
CN102348102B (en) Roof safety monitoring system and method thereof
CN110633612B (en) Monitoring method and system for inspection robot
KR20120124785A (en) Object tracking system for tracing path of object and method thereof
US10719717B2 (en) Scan face of video feed
KR20190016900A (en) Information processing apparatus, information processing method, and storage medium
CN112954315A (en) Image focusing measurement method and system for security camera
CN101923762A (en) Video monitor system and method
CN109905641B (en) Target monitoring method, device, equipment and system
KR20150130901A (en) Camera apparatus and method of object tracking using the same
CN113630543A (en) Falling object and person smashing event monitoring method and device, electronic equipment and monitoring system
KR20160048428A (en) Method and Apparatus for Playing Video by Using Pan-Tilt-Zoom Camera
CN112489338B (en) Alarm method, system, device, equipment and storage medium
CN115103110B (en) Household intelligent monitoring method based on edge calculation
KR100871833B1 (en) Camera apparatus for auto tracking

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant