CN116624065B - Automatic folding regulation and control method for intelligent doors and windows - Google Patents


Info

Publication number
CN116624065B
Authority
CN
China
Prior art keywords
image
camera
gray
acquiring
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310889591.1A
Other languages
Chinese (zh)
Other versions
CN116624065A (en)
Inventor
孙晋明
王令军
任重量
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Zhiying Door & Window Technology Co ltd
Original Assignee
Shandong Zhiying Door & Window Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Zhiying Door & Window Technology Co ltd filed Critical Shandong Zhiying Door & Window Technology Co ltd
Priority to CN202310889591.1A priority Critical patent/CN116624065B/en
Publication of CN116624065A publication Critical patent/CN116624065A/en
Application granted granted Critical
Publication of CN116624065B publication Critical patent/CN116624065B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • E FIXED CONSTRUCTIONS
    • E05 LOCKS; KEYS; WINDOW OR DOOR FITTINGS; SAFES
    • E05F DEVICES FOR MOVING WINGS INTO OPEN OR CLOSED POSITION; CHECKS FOR WINGS; WING FITTINGS NOT OTHERWISE PROVIDED FOR, CONCERNED WITH THE FUNCTIONING OF THE WING
    • E05F15/00 Power-operated mechanisms for wings
    • E05F15/70 Power-operated mechanisms for wings with automatic actuation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/34 Smoothing or thinning of the pattern; Morphological operations; Skeletonisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/72 Data preparation, e.g. statistical preprocessing of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/02 Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]

Abstract

The invention relates to the technical field of door and window regulation, and particularly discloses an automatic folding regulation and control method for intelligent doors and windows. The method comprises: acquiring shot images from cameras in real time based on a layer template; performing feature recognition on the shot images and extracting image features; acquiring environmental parameters of a target area in real time, and training a neural network model on the environmental parameters, the image features and the station states input by a management party; and, upon receiving an application instruction input by the management party, outputting a control instruction based on the trained neural network model and sending it to the intelligent doors and windows. According to the invention, cameras are arranged near the existing intelligent doors and windows based on building data; images are acquired by the cameras and image features are extracted, which are combined with environmental parameters acquired by sensors to obtain comprehensive features, and a neural network model is trained on the comprehensive features and the door and window states. An automatic control system can then be built around the neural network model, greatly improving the intelligence level of the intelligent doors and windows.

Description

Automatic folding regulation and control method for intelligent doors and windows
Technical Field
The invention relates to the technical field of door and window adjustment, in particular to an automatic folding adjustment and control method for an intelligent door and window.
Background
In a building with a large floor area, the number of doors and windows is extremely large, the daily volume of opening and closing tasks is high, and the working pressure on staff is considerable. Faced with this situation, existing large-area buildings adopt intelligent doors and windows that can be controlled remotely, which eases the staff's workload. However, the intelligence of existing intelligent doors and windows is reflected only in the switching-control link: the specific opening and closing actions still require a person, and staff must regularly adjust the doors and windows according to weather conditions. Moreover, each door or window corresponds to a button; when the number of doors and windows is large, the number of buttons is correspondingly large, and the complexity of the staff's work remains very high.
How to further improve the intelligence level of existing intelligent doors and windows, reducing the staff's workload while improving real-time responsiveness, is the technical problem addressed by the technical scheme of the invention.
Disclosure of Invention
The invention aims to provide an automatic folding regulation and control method for intelligent doors and windows, so as to solve the problems set forth in the Background above.
In order to achieve the above purpose, the present invention provides the following technical solutions:
an automatic folding regulation and control method for intelligent doors and windows, comprising the following steps:
reading building data of a target area, acquiring the installation position of an intelligent door and window and the relative position of a camera based on the building data, and acquiring an acquisition range according to the installation position and the relative position;
establishing a layer template according to the building data and the acquisition range, and acquiring a shooting image of a camera in real time based on the layer template;
carrying out feature recognition on the shot image, and extracting image features;
acquiring environmental parameters of a target area in real time, and training a neural network model according to the environmental parameters, the image characteristics and a station state input by a manager;
when receiving an application instruction input by a manager, outputting a control instruction based on the trained neural network model and sending the control instruction to the intelligent door and window;
the process of acquiring the acquisition range according to the installation position and the relative position comprises the following steps:
reading the acquisition wide angle and the acquisition distance of the camera, and converting the acquisition wide angle and the acquisition distance into an acquisition space according to the principle of optical propagation; wherein the acquisition space is a cone;
and calculating the intersection of the acquisition space and the horizontal plane to obtain the acquisition range.
As a further scheme of the invention: the step of reading the building data of the target area, acquiring the installation position of the intelligent door and window and the relative position of the camera based on the building data, and acquiring the acquisition range according to the installation position and the relative position comprises the following steps:
receiving an area tag input by a management party and a query authority granted by the management party, and acquiring building data of a target area according to the area tag and the query authority;
building a display model based on building data, and receiving the installation position of the intelligent door and window and the relative position of the camera input by a management party based on the display model; when the building data contains the installation position of the intelligent door and window and the relative position of the camera, the data input by the management party is confirmation information;
determining the absolute position of the camera according to the installation position and the relative position, reading the working parameters of the camera, and calculating the acquisition range according to the absolute position and the working parameters; wherein the camera is a bullet (fixed) camera.
As a further scheme of the invention: the step of determining the absolute position of the camera according to the installation position and the relative position, reading the working parameters of the camera, and calculating the acquisition range according to the absolute position and the working parameters comprises the following steps:
when the camera is a dome camera, reading working parameters of the camera, which contain time stamps;
and calculating the acquisition range containing the time stamp according to the absolute position and the working parameter containing the time stamp.
As a further scheme of the invention: the step of establishing a layer template according to the building data and the acquisition range and acquiring the shooting image of the camera in real time based on the layer template comprises the following steps:
determining an inner layer boundary according to the building data, and counting the acquisition range based on the inner layer boundary to obtain a layer template; wherein the inner layer boundary, the acquisition range and the layer template are all three-dimensional data;
creating data cache queues with the same number as the cameras, and acquiring shooting images of the cameras based on the data cache queues; the shot image contains time information;
and reading the shot images in the data cache queue according to the preset time frequency, and filling the shot images into the layer template.
As a further scheme of the invention: the step of establishing a layer template according to the building data and the acquisition range and acquiring the shooting image of the camera based on the layer template in real time further comprises the following steps:
receiving a detection boundary input by a user, and determining a detection ring according to the detection boundary and the inner layer boundary;
calculating intersection and union between the acquisition ranges based on the detection ring;
when intersection exists in any two acquisition ranges, a self-checking instruction is sent to the two corresponding cameras;
and calculating a difference set between the detection ring and the union, and determining auxiliary equipment and mounting points thereof according to the difference set.
As a further scheme of the invention: the step of carrying out feature recognition on the shot image and extracting the image features comprises the following steps:
converting the shot image into a gray image according to a preset gray formula;
traversing the gray level image according to the preset subarea size, and carrying out smoothing treatment on the gray level image;
counting gray spans in the gray images after the smoothing treatment, and building a span square matrix according to the gray spans; the row and column number of the span square matrix is the same as the gray scale span;
and extracting image features based on the span square matrix.
As a further scheme of the invention: the step of traversing the gray image according to the preset subarea size and performing smoothing processing on the gray image comprises the following steps:
traversing the gray image according to a preset subarea size;
calculating a gray average value and a gray median value in the subarea, and calculating a difference value of the gray average value and the gray median value;
when the difference value is smaller than a preset threshold value, replacing the value of each point in the subarea based on the gray average value;
and when the difference value is larger than a preset threshold value, replacing the value of each point in the subarea based on the gray median.
As a further scheme of the invention: the step of extracting image features based on the span square matrix comprises the following steps:
acquiring operation parameters of an operator, and determining a single operation range according to the operation parameters;
extracting a block to be detected based on a single operation range, and inputting element values in the block to be detected into a preset characteristic calculation formula to obtain characteristic values;
counting the characteristic values of all the blocks to be detected to obtain a characteristic matrix which is used as image characteristics;
wherein, the characteristic calculation formula is:
C = ( Σᵢ Σⱼ i·j·P(i,j) − μᵢ·μⱼ ) / ( σᵢ·σⱼ )
wherein C is the calculated characteristic value; P(i,j) is the element value in the ith row and jth column of the block to be detected; i·j is the product of i and j; μᵢ and σᵢ are the mean and the standard deviation of the row-index distribution of the block to be detected; and μⱼ and σⱼ are the mean and the standard deviation of the column-index distribution of the block to be detected.
As a further scheme of the invention: the step of acquiring the environmental parameters of the target area in real time and training the neural network model according to the environmental parameters, the image characteristics and the station state input by the manager comprises the following steps:
acquiring environmental parameters of a target area in real time based on a preset sensor;
inquiring image characteristics at the same moment, and connecting the image characteristics with the environment parameters to obtain comprehensive characteristics;
receiving the station states input by the management party for each intelligent door and window, and creating a training set and a test set of comprehensive feature and station state pairs;
and training a neural network model based on the training set and the test set.
As a further scheme of the invention: when receiving an application instruction input by a manager, the step of outputting a control instruction based on the trained neural network model and sending the control instruction to the intelligent door and window comprises the following steps:
when an application instruction input by a management party is received, reading environmental parameters and a shot image at the current moment;
extracting image features of a shot image, inputting environment parameters and the image features into a trained neural network model, and outputting a station state;
reading the detection result of the auxiliary equipment, and verifying the station state according to the detection result;
and when the verification is passed, determining a control instruction according to the station state and sending the control instruction to the intelligent door and window.
Compared with the prior art, the invention has the beneficial effects that: according to the invention, a camera is arranged near an original intelligent door and window based on building data, images are acquired by the camera, image features are extracted, and then, comprehensive features can be obtained by combining environmental parameters acquired by a sensor, then, door and window states under different comprehensive features are determined by a management party, and a neural network model can be trained based on the comprehensive features and the door and window states; finally, an automatic control system can be built by means of the neural network model, and the intelligent level of the intelligent door and window is greatly improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will briefly introduce the drawings that are needed in the embodiments or the description of the prior art, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a flow chart of an automatic folding control method for intelligent doors and windows.
Fig. 2 is a first sub-flowchart of the automatic folding control method for the intelligent door and window.
Fig. 3 is a second sub-flowchart of the automatic folding control method for the intelligent door and window.
Fig. 4 is a third sub-flowchart of the automatic folding control method for the intelligent door and window.
Fig. 5 is a fourth sub-flowchart of the automatic folding control method for the intelligent door and window.
Detailed Description
In order to make the technical problems, technical schemes and beneficial effects to be solved more clear, the invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
Fig. 1 is a flow chart of an automatic folding control method for an intelligent door and window, in an embodiment of the invention, the method includes:
step S100: reading building data of a target area, acquiring the installation position of an intelligent door and window and the relative position of a camera based on the building data, and acquiring an acquisition range according to the installation position and the relative position;
the target area is an area for installing intelligent doors and windows, comprises workshops, workshops and the like, the building data of the target area are existing data, and the reading process is not difficult; acquiring the installation position of the intelligent door and window and the relative position of the camera based on the read building data, wherein the camera is generally installed near the intelligent door and window, so that the position of the camera is acquired by taking the intelligent door and window as a reference and is called the relative position; according to the installation position of the intelligent door and window and the relative position of the camera, the acquisition range of the camera can be determined.
The process of acquiring the acquisition range according to the installation position and the relative position comprises the following steps:
reading the acquisition wide angle and the acquisition distance of the camera, and converting the acquisition wide angle and the acquisition distance into an acquisition space according to the principle of optical propagation; wherein the acquisition space is a cone;
and calculating the intersection of the acquisition space and the horizontal plane to obtain the acquisition range.
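As a sketch of this conversion (an illustrative Python model, not the patent's implementation: the mounting height, tilt angle and wide angle are hypothetical values, and only the vertical section of the cone is considered, giving the near and far extent of the ground footprint rather than its full outline):

```python
import math

def ground_footprint(h, tilt_deg, half_angle_deg, max_range):
    """Approximate ground-plane extent of a downward-tilted camera cone.

    h              -- mounting height above the horizontal plane (m)
    tilt_deg       -- depression of the optical axis below horizontal (deg)
    half_angle_deg -- half of the acquisition wide angle (deg)
    max_range      -- acquisition distance of the camera (m)
    """
    tilt = math.radians(tilt_deg)
    half = math.radians(half_angle_deg)
    near = h / math.tan(tilt + half)        # lower cone edge meets the ground
    if tilt > half:
        far = h / math.tan(tilt - half)     # upper cone edge meets the ground
    else:
        far = float("inf")                  # upper edge never reaches the ground
    # clip by the acquisition distance, projected onto the horizontal plane
    far = min(far, math.sqrt(max(max_range ** 2 - h ** 2, 0.0)))
    return near, far

near, far = ground_footprint(h=3.0, tilt_deg=30.0, half_angle_deg=20.0, max_range=25.0)
```

A full implementation would intersect the whole cone with the plane (a conic section) and clip it against the building geometry.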
Step S200: establishing a layer template according to the building data and the acquisition range, and acquiring a shooting image of a camera in real time based on the layer template;
according to the statistical collection range of the building data, a layer template can be established, and the layer template is used for counting the shooting data of all cameras.
Step S300: carrying out feature recognition on the shot image, and extracting image features;
and identifying the counted shot images, and extracting features in the shot images for reflecting the current image state.
Step S400: acquiring environmental parameters of a target area in real time, and training a neural network model according to the environmental parameters, the image characteristics and a station state input by a manager;
the image state has high correlation with the environment parameters, especially the illumination parameters, the comprehensive characteristics are determined according to the environment parameters and the extracted image characteristics, the station state input by the manager is received as a label, and then a mapping relation can be trained by means of the existing neural network model.
Step S500: when receiving an application instruction input by a manager, outputting a control instruction based on the trained neural network model and sending the control instruction to the intelligent door and window;
step S500 is a specific application process, in actual application, a final station state can be obtained according to environmental parameters and image features acquired by the intelligent equipment by means of the trained mapping relation, and then a control instruction is generated.
The station state is the opening size of each intelligent door and window, and the control instruction is the control instruction required by reaching the opening size.
Fig. 2 is a first sub-flowchart of the automatic folding regulation method for intelligent doors and windows, wherein the steps of reading building data of a target area, acquiring the installation position of the intelligent doors and windows and the relative position of a camera based on the building data, and acquiring the acquisition range according to the installation position and the relative position include:
step S101: receiving an area tag input by a management party and a query authority granted by the management party, and acquiring building data of a target area according to the area tag and the query authority;
the management side inputs the area tag and the query authority, and the execution body of the method can query the building data of the target area corresponding to the area tag according to the query authority.
Step S102: building a display model based on building data, and receiving the installation position of the intelligent door and window and the relative position of the camera input by a management party based on the display model; when the building data contains the installation position of the intelligent door and window and the relative position of the camera, the data input by the management party is confirmation information;
converting building data into a display model according to a preset scale, wherein the display model is a three-dimensional model, and the three-dimensional model receives the installation position and the relative position input by a management party; if the building data contains the installation position, the intelligent doors and windows will be marked in the display model, and at this time, the manager can input confirmation information or adjustment information.
Step S103: determining the absolute position of the camera according to the installation position and the relative position, reading the working parameters of the camera, and calculating the acquisition range according to the absolute position and the working parameters; wherein the camera is a bullet (fixed) camera.
The absolute position of the camera can be determined from the installation position and the relative position, and the acquisition range can then be calculated from the absolute position and the working parameters. A fixed camera (bullet camera) is generally adopted, since it can shoot images with higher precision.
As a preferred embodiment of the technical scheme of the present invention, the step of determining the absolute position of the camera according to the installation position and the relative position, reading the working parameters of the camera, and calculating the acquisition range according to the absolute position and the working parameters comprises:
when the camera is a dome camera, reading working parameters of the camera, which contain time stamps;
calculating an acquisition range containing a time stamp according to the absolute position and the working parameter containing the time stamp;
in one example of the technical solution of the present invention, the fixed camera may be replaced by a steering (dome) camera; in this case, even for the same camera, the acquisition ranges at different moments differ, so a time tag needs to be introduced into the acquisition range.
In the above, the calculation of the acquisition range is the same as that provided in step S100, except that a time stamp is introduced: the acquisition range of a bullet (fixed) camera does not change, whereas the acquisition range of a dome camera changes over time.
Fig. 3 is a second sub-flowchart of the automatic folding regulation method for intelligent doors and windows, wherein the step of establishing a layer template according to the building data and the collection range and acquiring a shooting image of a camera in real time based on the layer template comprises the following steps:
step S201: determining an inner layer boundary according to the building data, and counting the acquisition range based on the inner layer boundary to obtain a layer template; wherein the inner layer boundary, the acquisition range and the layer template are all three-dimensional data;
the boundary of the building data is called an inner boundary, and the camera generally shoots an image outside the building, and based on the image, a larger range can be obtained by counting the acquisition range with the inner boundary.
Step S202: creating data cache queues with the same number as the cameras, and acquiring shooting images of the cameras based on the data cache queues; the shot image contains time information;
in the data acquisition process of the cameras, frames need to be read through the data cache queues; reading from the data cache queues ensures the temporal consistency of all shot images.
Step S203: reading a shot image in a data cache queue according to a preset time frequency, and filling the shot image into a layer template;
and reading the corresponding shooting images at the same moment in the data buffer queue based on the preset time frequency, and filling the shooting images into the generated layer template.
As a preferred embodiment of the technical solution of the present invention, the step of establishing a layer template according to the building data and the collection range, and acquiring the photographed image of the camera in real time based on the layer template further includes:
receiving a detection boundary input by a user, and determining a detection ring according to the detection boundary and the inner layer boundary;
calculating intersection and union between the acquisition ranges based on the detection ring;
when intersection exists in any two acquisition ranges, a self-checking instruction is sent to the two corresponding cameras;
calculating a difference set between the detection ring and the union, and determining auxiliary equipment and installation points thereof according to the difference set;
for a camera, the acquisition range is large and long-distance information can be acquired; when the acquisition ranges are aggregated with the inner layer boundary, the size of the obtained layer template is therefore extremely large, its extent often being simply a default boundary. In one example of the technical scheme of the invention, the management party uploads a detection boundary, and the recognition process is limited to the region between the inner layer boundary and the detection boundary; reflected in the shot image, this means that part of the image content is cropped out.
Calculating the intersections between the different acquisition ranges: if an intersection exists, the corresponding two cameras shoot images of the same area, and their shooting results can then be compared with each other, which serves to check the shooting process.
Meanwhile, the union of the different acquisition ranges is calculated and compared with the whole detection ring, so that the blind areas can be clearly identified; the management party can then install additional auxiliary equipment to monitor the blind areas.
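The intersection, union and difference operations above can be sketched on a rasterized ground plane (an illustrative numpy example; the rectangular boundaries and circular acquisition ranges are assumed geometry, not taken from the patent):

```python
import numpy as np

# Sample the ground plane on a grid; every region becomes a boolean mask, so
# intersection, union and difference reduce to elementwise logic.
xs, ys = np.meshgrid(np.linspace(-10, 25, 350), np.linspace(-10, 25, 350))

def disk(cx, cy, r):
    return (xs - cx) ** 2 + (ys - cy) ** 2 <= r ** 2

inner = (xs >= 0) & (xs <= 10) & (ys >= 0) & (ys <= 10)    # building footprint
outer = (xs >= -5) & (xs <= 15) & (ys >= -5) & (ys <= 15)  # detection boundary
ring = outer & ~inner                                      # detection ring

ranges = {"cam1": disk(0, 5, 4), "cam2": disk(2, 5, 4), "cam3": disk(10, 12, 3)}

ids = list(ranges)
# Intersection test: overlapping pairs would receive a self-check instruction.
overlapping = [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]
               if np.any(ranges[a] & ranges[b])]

# Blind zone: detection ring minus the union of all acquisition ranges;
# auxiliary equipment would be installed to cover it.
union = np.logical_or.reduce(list(ranges.values()))
blind = ring & ~union
```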
Fig. 4 is a third sub-flowchart of the automatic folding regulation method for intelligent doors and windows, wherein the step of performing feature recognition on the photographed image and extracting the image features includes:
step S301: converting the shot image into a gray image according to a preset gray formula;
the gray scale formula is an existing gray scale formula, wherein corresponding parameters related to RGB values can be finely adjusted, and the conversion process is uniform.
Step S302: traversing the gray level image according to the preset subarea size, and carrying out smoothing treatment on the gray level image;
in order to facilitate the subsequent processing, the gray-scale image is first subjected to smoothing processing, and the purpose of the smoothing processing is to reduce the data amount of the image and make the gray-scale value difference between adjacent pixels smaller.
Step S303: counting gray spans in the gray images after the smoothing treatment, and building a span square matrix according to the gray spans; the row and column number of the span square matrix is the same as the gray scale span;
counting the gray span after the smoothing treatment: the span comprises the maximum and the minimum gray value, and the number of rows and columns is determined from their difference, so that each gray value corresponds to one row and one column. The gray image is then traversed in a preset direction, and the gray-value changes between adjacent pixels are recorded in turn; for example, for a change from 245 to 247, the values at positions (245, 247) and (247, 245) in the span square matrix are each increased by one. When the traversal is complete, the span square matrix reflects the transition behaviour of every pixel in the smoothed gray image.
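The construction of the span square matrix can be sketched as follows (a Python illustration of the counting rule just described; row-wise horizontal adjacency is an assumed traversal direction):

```python
import numpy as np

def span_matrix(gray):
    """Build the span square matrix of a smoothed gray image: count the
    transitions between horizontally adjacent pixel values, symmetrically."""
    lo, hi = int(gray.min()), int(gray.max())
    n = hi - lo + 1                      # rows/columns equal the gray span
    m = np.zeros((n, n), dtype=int)
    for a, b in zip(gray[:, :-1].ravel(), gray[:, 1:].ravel()):
        m[a - lo, b - lo] += 1           # e.g. 245 -> 247 bumps (245, 247)
        m[b - lo, a - lo] += 1           # ...and the symmetric (247, 245)
    return m

gray = np.array([[245, 247, 247],
                 [246, 246, 245]])
m = span_matrix(gray)
```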
Step S304: extracting image features based on the span square matrix;
The span square matrix is then analyzed to obtain the image features.
As a preferred embodiment of the present invention, the step of traversing the gray scale image according to a preset sub-region size and performing smoothing on the gray scale image includes:
traversing the gray image according to a preset subarea size;
calculating a gray average value and a gray median value in the subarea, and calculating a difference value of the gray average value and the gray median value;
when the difference value is smaller than a preset threshold value, replacing the value of each point in the subarea based on the gray average value;
and when the difference value is larger than a preset threshold value, replacing the value of each point in the subarea based on the gray median.
The principle of the smoothing is straightforward: each value in a region is replaced, either by the regional mean or by the regional median. The selection rule is as follows: if the mean and the median differ little, the mean is used; if they differ greatly, the median is used. Of course, always using the mean (or always using the median) is also a viable solution; the smoothness achieved that way is slightly lower, but the smoothing speed is much higher.
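The selection rule above can be sketched on a single sub-region; the threshold of 5 gray levels is an illustrative assumption:

```python
from statistics import mean, median

def smooth_block(block, threshold=5):
    # Use the regional mean when mean and median are close, else the median.
    m, md = mean(block), median(block)
    fill = m if abs(m - md) < threshold else md
    return [round(fill)] * len(block)

print(smooth_block([10, 12, 11, 13, 14]))   # mean and median agree -> mean used
print(smooth_block([10, 12, 11, 13, 200]))  # outlier skews the mean -> median used
```

Both calls yield [12, 12, 12, 12, 12]: the second sub-region contains an outlier (200) that pulls the mean to 49.2, so the median is chosen instead.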
Further, the step of extracting image features based on the span square matrix includes:
acquiring operation parameters of an operator, and determining a single operation range according to the operation parameters;
extracting a block to be detected based on a single operation range, and inputting element values in the block to be detected into a preset characteristic calculation formula to obtain characteristic values;
counting the characteristic values of all the blocks to be detected to obtain a characteristic matrix which is used as image characteristics;
During the operation, the whole span square matrix could be processed at once, but a single calculation done that way takes a long time. The span square matrix is therefore first partitioned into blocks according to the operation parameters; each data block is then calculated to obtain a characteristic value, and the characteristic values are assembled according to the partitioning result into a feature matrix, which serves as the image feature.
Wherein, the feature calculation formula is:

C = [ Σ_i Σ_j ( i · j · P(i, j) ) − μ_i · μ_j ] / ( σ_i · σ_j )

wherein C is the calculated characteristic value; P(i, j) is the element value in the i-th row and j-th column of the block to be detected; i · j is the product of i and j; μ_i is the mean and σ_i the standard deviation taken over the row index i of the block to be detected; μ_j is the mean and σ_j the standard deviation taken over the column index j of the block to be detected.
The output of the feature calculation formula is a characteristic value that measures how similar the image gray levels are along the row and column directions; its magnitude therefore reflects the local gray-level correlation, with larger values indicating stronger correlation. This is a very important feature in the recognition of environment images, and is particularly suited to characterizing how each region of the image changes.
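A minimal sketch of such a correlation-style feature over one block to be detected is given below; it assumes the feature value is the classical gray-level co-occurrence correlation and that the block is first normalized to a probability distribution (both are assumptions, as the patent does not spell these details out):

```python
def correlation_feature(block):
    # Normalize the block to a probability distribution p(i, j) (assumed).
    total = sum(sum(row) for row in block)
    n = len(block)
    p = [[v / total for v in row] for row in block]
    px = [sum(p[i]) for i in range(n)]                       # row marginals
    py = [sum(p[i][j] for i in range(n)) for j in range(n)]  # column marginals
    mu_i = sum(i * px[i] for i in range(n))
    mu_j = sum(j * py[j] for j in range(n))
    sd_i = sum((i - mu_i) ** 2 * px[i] for i in range(n)) ** 0.5
    sd_j = sum((j - mu_j) ** 2 * py[j] for j in range(n)) ** 0.5
    s = sum(i * j * p[i][j] for i in range(n) for j in range(n))
    return (s - mu_i * mu_j) / (sd_i * sd_j)

print(correlation_feature([[2, 0], [0, 2]]))  # diagonal block: 1.0 (strong correlation)
print(correlation_feature([[0, 2], [2, 0]]))  # anti-diagonal block: -1.0
```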
Fig. 5 is a fourth sub-flowchart of the automatic folding regulation and control method for intelligent doors and windows; the step of acquiring the environmental parameters of the target area in real time and training the neural network model according to the environmental parameters, the image features and the station states input by the management party includes:
step S401: acquiring environmental parameters of a target area in real time based on a preset sensor;
step S402: inquiring image characteristics at the same moment, and connecting the image characteristics with the environment parameters to obtain comprehensive characteristics;
step S403: receiving the station states that the management party inputs for each intelligent door and window, and creating a training set and a test set of comprehensive-feature/station-state pairs;
step S404: and training a neural network model based on the training set and the test set.
The above describes the training process of the neural network model in detail; its key point is the feature extraction process. The station states are input by the management party, the environmental parameters are acquired by the sensors, and the image features are extracted according to the extraction rules. The environmental parameters and image features are connected to obtain the comprehensive features, which serve as the model input, while the station states serve as the output. The specific training process is an ordinary supervised-learning procedure and can be implemented with the prior art.
As for how the comprehensive features are generated: each one is a set combining the image features and the environmental parameters, held in a self-defined data structure; a structure body (struct) in the C language, containing both the image features and the environmental parameters, is a useful reference.
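In Python, the same self-defined data structure can be sketched with a dataclass instead of a C struct; the field names and sample values below are assumptions:

```python
from dataclasses import dataclass

@dataclass
class CompositeFeature:
    image_features: list  # flattened feature matrix extracted from the captured image
    env_params: dict      # sensor readings for the target area

    def as_vector(self):
        # "Connect" the image features with the environmental parameters
        # into a single model input vector.
        return self.image_features + list(self.env_params.values())

sample = CompositeFeature([0.82, 0.45], {"temperature": 23.5, "wind_speed": 1.2})
vec = sample.as_vector()
print(vec)  # [0.82, 0.45, 23.5, 1.2]
```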
As a preferred embodiment of the technical scheme of the present invention, when receiving an application instruction input by a manager, the step of outputting a control instruction based on a trained neural network model and sending the control instruction to an intelligent door and window includes:
when an application instruction input by a management party is received, reading environmental parameters and a shot image at the current moment;
extracting image features of a shot image, inputting environment parameters and the image features into a trained neural network model, and outputting a station state;
reading the detection result of the auxiliary equipment, and verifying the station state according to the detection result;
and when the verification is passed, determining a control instruction according to the station state and sending the control instruction to the intelligent door and window.
The above is the specific model application process: when an application instruction input by the management party is received, the captured image obtained by the camera is read and its image features are extracted; these are combined with the environmental parameters obtained by the sensors and input into the trained neural network model, which outputs the station state. During this process, the auxiliary equipment determined in the preceding step supports verification work such as checking whether an intelligent door or window is abnormal and whether a camera is blocked.
It should be noted that, from the above description of the embodiments, those skilled in the art will clearly understand that each embodiment may be implemented by means of software plus necessary general hardware platform. Based on this understanding, the foregoing technical solution may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method described in the respective embodiments or some parts of the embodiments. In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (7)

1. An automatic folding regulation and control method for intelligent doors and windows is characterized by comprising the following steps:
reading building data of a target area, acquiring the installation position of an intelligent door and window and the relative position of a camera based on the building data, and acquiring an acquisition range according to the installation position and the relative position;
establishing a layer template according to the building data and the acquisition range, and acquiring a shooting image of a camera in real time based on the layer template;
carrying out feature recognition on the shot image, and extracting image features;
acquiring environmental parameters of a target area in real time, and training a neural network model according to the environmental parameters, the image characteristics and a station state input by a manager;
when receiving an application instruction input by a manager, outputting a control instruction based on the trained neural network model and sending the control instruction to the intelligent door and window;
the process of acquiring the acquisition range according to the installation position and the relative position comprises the following steps:
reading the acquisition wide angle and the acquisition distance of the camera, and converting the acquisition wide angle and the acquisition distance into an acquisition space according to an optical propagation principle; the collecting space is a cone;
calculating an intersection of the acquisition space and the horizontal plane to obtain an acquisition range;
the step of establishing a layer template according to the building data and the acquisition range and acquiring the shooting image of the camera in real time based on the layer template comprises the following steps:
determining an inner layer boundary according to the building data, and counting the acquisition range based on the inner layer boundary to obtain a layer template; wherein the inner layer boundary, the acquisition range and the layer template are all three-dimensional data;
creating data cache queues with the same number as the cameras, and acquiring shooting images of the cameras based on the data cache queues; the shot image contains time information;
reading a shot image in a data cache queue according to a preset time frequency, and filling the shot image into a layer template;
the step of establishing a layer template according to the building data and the acquisition range and acquiring the shooting image of the camera based on the layer template in real time further comprises the following steps:
receiving a detection boundary input by a user, and determining a detection ring according to the detection boundary and the inner layer boundary;
calculating intersection and union between the acquisition ranges based on the detection ring;
when intersection exists in any two acquisition ranges, a self-checking instruction is sent to the two corresponding cameras;
calculating a difference set between the detection ring and the union, and determining auxiliary equipment and installation points thereof according to the difference set;
the step of acquiring the environmental parameters of the target area in real time and training the neural network model according to the environmental parameters, the image characteristics and the station state input by the manager comprises the following steps:
acquiring environmental parameters of a target area in real time based on a preset sensor;
inquiring image characteristics at the same moment, and connecting the image characteristics with the environment parameters to obtain comprehensive characteristics;
receiving station states of each intelligent door and window targeted input by a management party, and creating a training set and a testing set of comprehensive characteristics-station states;
and training a neural network model based on the training set and the test set.
2. The automatic folding adjustment and control method for intelligent doors and windows according to claim 1, wherein the step of reading building data of a target area, acquiring an installation position of the intelligent doors and windows and a relative position of a camera based on the building data, and acquiring an acquisition range according to the installation position and the relative position comprises:
receiving an area tag input by a management party and a query authority granted by the management party, and acquiring building data of a target area according to the area tag and the query authority;
building a display model based on building data, and receiving the installation position of the intelligent door and window and the relative position of the camera input by a management party based on the display model; when the building data contains the installation position of the intelligent door and window and the relative position of the camera, the data input by the management party is confirmation information;
determining the absolute position of the camera according to the installation position and the relative position, reading the working parameters of the camera, and calculating the acquisition range according to the absolute position and the working parameters; the camera is a bullet camera.
3. The automatic folding and adjusting method of intelligent doors and windows according to claim 2, wherein the step of determining the absolute position of the camera according to the installation position and the relative position, reading the working parameters of the camera, and calculating the acquisition range according to the absolute position and the working parameters comprises the steps of:
when the camera is a dome camera, reading working parameters of the camera, which contain time stamps;
and calculating the acquisition range containing the time stamp according to the absolute position and the working parameter containing the time stamp.
4. The automatic folding adjustment and control method for intelligent doors and windows according to claim 1, wherein the step of performing feature recognition on the photographed image and extracting image features comprises the steps of:
converting the shot image into a gray image according to a preset gray formula;
traversing the gray level image according to the preset subarea size, and carrying out smoothing treatment on the gray level image;
counting gray spans in the gray images after the smoothing treatment, and building a span square matrix according to the gray spans; the row and column number of the span square matrix is the same as the gray scale span;
and extracting image features based on the span square matrix.
5. The automatic folding adjustment and control method for intelligent doors and windows according to claim 4, wherein the step of traversing the gray scale image according to a preset sub-region size and smoothing the gray scale image comprises the steps of:
traversing the gray image according to a preset subarea size;
calculating a gray average value and a gray median value in the subarea, and calculating a difference value of the gray average value and the gray median value;
when the difference value is smaller than a preset threshold value, replacing the value of each point in the subarea based on the gray average value;
and when the difference value is larger than a preset threshold value, replacing the value of each point in the subarea based on the gray median.
6. The automatic folding adjustment and control method for intelligent doors and windows according to claim 4, wherein the step of extracting image features based on the span square matrix comprises the steps of:
acquiring operation parameters of an operator, and determining a single operation range according to the operation parameters;
extracting a block to be detected based on a single operation range, and inputting element values in the block to be detected into a preset characteristic calculation formula to obtain characteristic values;
counting the characteristic values of all the blocks to be detected to obtain a characteristic matrix which is used as image characteristics;
wherein, the feature calculation formula is:

C = [ Σ_i Σ_j ( i · j · P(i, j) ) − μ_i · μ_j ] / ( σ_i · σ_j )

wherein C is the calculated characteristic value; P(i, j) is the element value in the i-th row and j-th column of the block to be detected; i · j is the product of i and j; μ_i is the mean and σ_i the standard deviation taken over the row index i of the block to be detected; μ_j is the mean and σ_j the standard deviation taken over the column index j of the block to be detected.
7. The automatic folding and controlling method for intelligent doors and windows according to claim 1, wherein when receiving the application command input by the manager, the step of outputting the control command based on the trained neural network model and transmitting the control command to the intelligent doors and windows comprises:
when an application instruction input by a management party is received, reading environmental parameters and a shot image at the current moment;
extracting image features of a shot image, inputting environment parameters and the image features into a trained neural network model, and outputting a station state;
reading the detection result of the auxiliary equipment, and verifying the station state according to the detection result;
and when the verification is passed, determining a control instruction according to the station state and sending the control instruction to the intelligent door and window.
CN202310889591.1A 2023-07-20 2023-07-20 Automatic folding regulation and control method for intelligent doors and windows Active CN116624065B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310889591.1A CN116624065B (en) 2023-07-20 2023-07-20 Automatic folding regulation and control method for intelligent doors and windows


Publications (2)

Publication Number Publication Date
CN116624065A CN116624065A (en) 2023-08-22
CN116624065B true CN116624065B (en) 2023-10-13

Family

ID=87638516

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310889591.1A Active CN116624065B (en) 2023-07-20 2023-07-20 Automatic folding regulation and control method for intelligent doors and windows

Country Status (1)

Country Link
CN (1) CN116624065B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116876950B (en) * 2023-09-05 2023-12-05 山东智赢门窗科技有限公司 Intelligent door and window control system and method, computer equipment and storage medium
CN117237383B (en) * 2023-11-15 2024-02-02 山东智赢门窗科技有限公司 Intelligent door and window control method and system based on indoor environment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109657648A (en) * 2019-01-10 2019-04-19 天津大学 A kind of System and method for of real-time monitoring office building window situation
CN110837802A (en) * 2019-11-06 2020-02-25 齐鲁工业大学 Facial image feature extraction method based on gray level co-occurrence matrix
CN111985518A (en) * 2020-02-18 2020-11-24 广东三维家信息科技有限公司 Door and window detection method and model training method and device thereof
KR20210067498A (en) * 2019-11-29 2021-06-08 한동대학교 산학협력단 Method and system for automatically detecting objects in image based on deep learning


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on high-resolution image feature extraction and VG reconstruction technology; Tong Qing; Zhang Jingyi; Chen Cheng; Zhang Lipeng; Gu Chunhua; He Gaoqi; Computer Applications and Software (07); pp. 116-118, 184 *

Also Published As

Publication number Publication date
CN116624065A (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN116624065B (en) Automatic folding regulation and control method for intelligent doors and windows
CN105956618B (en) Converter steelmaking blowing state identification system and method based on image dynamic and static characteristics
CN110826514A (en) Construction site violation intelligent identification method based on deep learning
CN112087443B (en) Sensing data anomaly detection method under physical attack of industrial sensing network information
US20220327676A1 (en) Method and system for detecting change to structure by using drone
CN113112151B (en) Intelligent wind control evaluation method and system based on multidimensional sensing and enterprise data quantification
CN109657580B (en) Urban rail transit gate traffic control method
US20230057878A1 (en) Industrial internet of things, control methods and storage medium based on machine visual detection
CN116434266B (en) Automatic extraction and analysis method for data information of medical examination list
CN116092014A (en) Intelligent inspection control method and system based on Internet of things equipment
CN116664846B (en) Method and system for realizing 3D printing bridge deck construction quality monitoring based on semantic segmentation
CN115958609B (en) Instruction data safety early warning method based on intelligent robot automatic control system
CN117011280A (en) 3D printed concrete wall quality monitoring method and system based on point cloud segmentation
CN115880620B (en) Personnel counting method applied to cart early warning system
US20210374480A1 (en) Arithmetic device, arithmetic method, program, and discrimination system
CN115861364A (en) AI identification-based field personnel management and control method and system
US20230125890A1 (en) Image analysis system, image analysis method, and image analysis program
CN115690514A (en) Image recognition method and related equipment
Wassantachat et al. Traffic density estimation with on-line SVM classifier
CN111402444B (en) Integrated machine room operation and maintenance management system
CN117079197B (en) Intelligent building site management method and system
CN113591705A (en) Inspection robot instrument recognition system and method and storage medium
KR102520218B1 (en) System and Method for improving hardware usage in control server using artificial intelligence image processing
CN113435656B (en) Project progress visual management method and system
CN117496426A (en) Precast beam procedure identification method and device based on mutual learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant