CN115512306A - Method for early warning of violence events in elevator based on image processing - Google Patents

Method for early warning of violence events in elevator based on image processing

Info

Publication number
CN115512306A
Authority
CN
China
Prior art keywords
gain
detection
sample images
images
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211424716.5A
Other languages
Chinese (zh)
Other versions
CN115512306B (en)
Inventor
黄剑
周旭东
张记复
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Ruitong Technology Co ltd
Original Assignee
Chengdu Ruitong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Ruitong Technology Co ltd filed Critical Chengdu Ruitong Technology Co ltd
Priority to CN202211424716.5A priority Critical patent/CN115512306B/en
Publication of CN115512306A publication Critical patent/CN115512306A/en
Application granted granted Critical
Publication of CN115512306B publication Critical patent/CN115512306B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 - Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 - Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02B - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B50/00 - Energy efficient technologies in elevators, escalators and moving walkways, e.g. energy saving or recuperation technologies

Abstract

The invention provides a method for early warning of a violent incident in an elevator based on image processing, which comprises the following steps: a camera acquires real-time images inside the elevator; the real-time images are divided into image sets according to a set period; a plurality of sample images are selected from each image set according to a set rule and input into a detection matrix; each detection unit performs gray-level detection on the input sample images, and when a gray-value interval is smaller than a set value, the sample image is marked and input into a gain matrix; the gain units arranged in the gain matrix apply stepwise gain to the sample image according to a set unit amount, and edge enhancement processing is performed after the gain is completed; after the edge enhancement processing, the sample image is input into an analysis matrix, and each analysis unit in the analysis matrix loads a reference image library to analyze whether the sample image contains a risk factor; if a risk factor is contained, early-warning information is sent to a monitoring center.

Description

Method for early warning of violence incident in elevator based on image processing
Technical Field
The invention relates to the technical field of image processing, in particular to a method for early warning a violence event in an elevator based on image processing.
Background
Conventional elevator monitoring transmits the real-time images collected in the car to a surveillance center, where dedicated staff watch them in real time. Because human monitoring is highly arbitrary and attention cannot be sustained, many violent incidents are simply overlooked. In the prior art, for example, publication number "CN105000443A" discloses a method for early warning of violent incidents in an elevator based on image processing, in which an image processing device is connected to a high-definition camera device and a mobile hard disk to receive multi-frame real-time elevator images, the image processing device comprising an edge enhancement sub-device, a median filtering sub-device, a graying processing sub-device, a target identification sub-device and a data extraction sub-device. These sub-devices jointly perform scene data extraction on each frame of real-time elevator image: the edge enhancement sub-device is connected to the high-definition camera device and performs edge enhancement on each frame to obtain an enhanced image; the median filtering sub-device is connected to the edge enhancement sub-device and applies median filtering with a 3 x 3 pixel filtering window to each enhanced frame to obtain a filtered image; the graying processing sub-device is connected to the median filtering sub-device and grays each filtered frame to obtain a grayed car image; the target identification sub-device is connected to the graying processing sub-device and to the mobile hard disk, and forms a plurality of target gray sub-images from the pixels of each grayed car image whose gray values lie between the upper and lower human-body gray thresholds; the data extraction sub-device is connected to the target identification sub-device and calculates, from the target gray sub-images, the area of each target, the relative distance between every two targets and the centroid position of each target. An embedded processor, connected to the data extraction sub-device and the mobile hard disk, calculates the average real-time area change rate of each target, the average real-time relative-distance change rate of every two targets and the real-time motion trajectory of each target from the per-frame areas, relative distances and centroid positions, and determines that a perpetrator is approaching a victim and issues a violence early-warning signal when the average real-time area change rate of all targets is smaller than the area change-rate threshold, the average real-time relative-distance change rate of every pair of targets lies between zero and the distance change-rate threshold, and the real-time motion trajectories of all targets match some regular motion-trajectory template of the standard template set.
The above technology applies edge enhancement to each frame of the image. Edge enhancement is a technique that emphasises edges where the brightness values of adjacent pixels differ strongly; an edge-enhanced image displays the boundaries between different object types, or the traces of linear features, more clearly, which helps to identify object types and delineate their extent. However, this enhancement technique cannot reliably distinguish fine objects and tends to ignore them. For example, a cylinder, a chopstick and a screwdriver held in the hand all look essentially the same after edge enhancement, and even finer objects such as needles or small blades are missed by edge enhancement altogether.
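For reference, the kind of edge enhancement relied on above can be sketched as follows: a 3 x 3 Laplacian response is added back to the grayscale image so that edges between regions of differing brightness are emphasised. This is an illustrative routine written for this description; the function name, kernel and strength parameter are not taken from the cited patent.

```python
import numpy as np

def edge_enhance(gray: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Illustrative edge enhancement: add the Laplacian response of a
    grayscale image back to the image, emphasising pixels whose brightness
    differs strongly from that of their neighbours."""
    kernel = np.array([[0, -1, 0],
                       [-1, 4, -1],
                       [0, -1, 0]], dtype=np.float64)
    img = gray.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")   # keep the output the same size
    response = np.zeros_like(img)
    for dy in range(3):                    # correlate with the 3 x 3 kernel
        for dx in range(3):
            response += kernel[dy, dx] * padded[dy:dy + img.shape[0],
                                                dx:dx + img.shape[1]]
    return np.clip(img + strength * response, 0, 255).astype(np.uint8)
```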
Furthermore, the above technology relies on identifying motion trajectories. Since the elevator car described in the background is a confined space, a perpetrator can often commit violence without walking at all, for example by standing next to a child and stabbing the child; such an act is completed in an instant, so the trajectory monitoring described above is ineffective.
Disclosure of Invention
In view of the above, the main object of the present invention is to provide a method for early warning of violence in an elevator based on image processing.
The technical scheme adopted by the invention is as follows:
A method for early warning of violent incidents in an elevator based on image processing comprises the following steps: a camera acquires real-time images inside the elevator; the real-time images are divided into image sets according to a set period; a plurality of images are selected from each image set according to a set rule as sample images and input into a detection matrix; each detection unit in the detection matrix performs gray-level detection on the input sample images, and when a gray-value interval is smaller than a set value, the sample image is marked and input into a gain matrix; the gain units arranged in the gain matrix apply stepwise gain to the sample image according to a set unit amount; edge enhancement processing is performed after the gain is completed, and after the edge enhancement processing the sample image is input into an analysis matrix; each analysis unit in the analysis matrix loads a reference image library to analyze whether the sample image contains a risk factor; if a risk factor is contained, early-warning information is sent to a monitoring center; meanwhile, an artificial intelligence system synchronizes the sample images containing risk factors over a plurality of consecutive periods and trains on them with a neural network model to obtain the persistence and violence level of the risk factor.
Further, the set period is 0.5-5 seconds; the real-time images acquired by the camera inside the elevator within one set period form one image set.
Further, the set rule is executed by an embedded sampling program whose sampling method is to uniformly extract, in time order, a plurality of images from each image set as sample images.
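As an illustration of the set period and sampling rule, a minimal sketch is given below. The function names, the frames-per-period grouping and the index formula are assumptions; the text only requires that a plurality of images be extracted uniformly in time order from each image set.

```python
from typing import List, Sequence

def split_into_image_sets(frames: Sequence, frames_per_period: int) -> List[list]:
    """Group a time-ordered frame stream into image sets, one per set period."""
    return [list(frames[i:i + frames_per_period])
            for i in range(0, len(frames), frames_per_period)]

def sample_uniformly(image_set: Sequence, num_samples: int) -> list:
    """Uniformly extract `num_samples` frames from one image set, in time order."""
    if num_samples >= len(image_set):
        return list(image_set)
    step = len(image_set) / num_samples
    return [image_set[int(i * step)] for i in range(num_samples)]

# Example: 1-second periods of 10 frames each, sampled down to 4 sample images.
frames = [f"frame_{i}" for i in range(30)]
for image_set in split_into_image_sets(frames, frames_per_period=10):
    samples = sample_uniformly(image_set, num_samples=4)
```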
Further, the detection matrix is composed of M × N detection units (M rows by N columns), where M and N are integers greater than or equal to 2. Based on the execution of detection tasks, the sample images are input one by one, in time order, to the corresponding detection unit. A detection template and a sliding template are arranged in each detection unit; a plurality of detection regions are arranged on the detection template, a plurality of gray values are set for each detection region, and the detection regions are configured into a plurality of standard templates based on each gray value. When a sample image is input to the detection template, the sliding template samples the image over the detection regions in a set order and compares each sampled region with the standard templates corresponding to that detection region, so as to obtain the gray-value interval of the sampled region relative to the standard templates.
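The gray-level detection step can be read, for illustration, as partitioning the sample image into detection regions, visiting them in a set order, and comparing each region's mean gray level with that region's standard gray values; the returned interval is then the smallest deviation. The grid layout, the use of the mean, and the deviation measure below are assumptions rather than the patented detection and sliding templates.

```python
import numpy as np
from typing import List, Tuple

def gray_value_intervals(sample: np.ndarray,
                         region_grid: Tuple[int, int],
                         standard_grays: List[float]) -> List[float]:
    """Visit the detection regions of `sample` in row-major (set) order and,
    for each region, return its gray-value interval: the smallest absolute
    difference between the region's mean gray level and any standard gray
    value configured for the regions."""
    rows, cols = region_grid
    h, w = sample.shape
    rh, rw = h // rows, w // cols              # size of one detection region
    intervals = []
    for r in range(rows):
        for c in range(cols):
            region = sample[r * rh:(r + 1) * rh, c * rw:(c + 1) * rw]
            mean_gray = float(region.mean())
            intervals.append(min(abs(mean_gray - g) for g in standard_grays))
    return intervals
```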
Further, the gain matrix is composed of M × N gain units (M rows by N columns), where M and N are integers greater than or equal to 2, and the gain units are arranged in one-to-one correspondence with the detection units. A control module is arranged between the gain matrix and the detection matrix and has a control part, an identification part, a gain determination part and a gain driving part. The gray-value interval of the sample image acquired by a detection unit is input to the identification part, which identifies the feature code of that detection unit and maps it to the corresponding gain unit. The gain determination part, coupled to the identification part, determines the gain coefficient of the sample image based on its gray-value interval; under the control of the control part, the gain driving part is driven to apply stepwise gain to the sample image, according to the gain coefficient, in set unit amounts. The control part is connected to the gain driving part, the identification part and the gain determination part respectively.
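A minimal sketch of the gain determination and the stepwise gain follows. The mapping from gray-value interval to gain coefficient and the size of each step are illustrative assumptions; the text only states that the coefficient is determined from the interval and applied in set unit amounts.

```python
import numpy as np

def determine_gain_coefficient(gray_interval: float,
                               set_value: float,
                               max_gain: float = 3.0) -> float:
    """Choose a larger gain the further the gray-value interval falls below
    the set value (an illustrative mapping; `max_gain` is a placeholder)."""
    if gray_interval >= set_value:
        return 1.0
    return 1.0 + (max_gain - 1.0) * (1.0 - gray_interval / set_value)

def stepwise_gain(sample: np.ndarray,
                  gain_coefficient: float,
                  unit_amount: float = 0.2) -> np.ndarray:
    """Raise the image gain toward `gain_coefficient` in steps of
    `unit_amount`, so brightness is increased gradually rather than in one
    jump."""
    img = sample.astype(np.float64)
    applied = 1.0
    while applied + 1e-9 < gain_coefficient:
        step = min(unit_amount, gain_coefficient - applied)
        img *= (applied + step) / applied      # one stepwise increment
        applied += step
    return np.clip(img, 0, 255).astype(np.uint8)
```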
Further, the analysis matrix is composed of M × N analysis units (M rows by N columns), where M and N are integers greater than or equal to 2, and the analysis units are arranged in one-to-one correspondence with the gain units. Each analysis unit loads a reference image library and compares the sample image, one by one, with the marker images set in the reference image library to analyze whether the sample image contains a risk factor.
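The comparison performed by an analysis unit is not specified beyond matching the sample against the marker images of the reference image library one by one; as a stand-in, the sketch below uses normalized template matching from OpenCV. The threshold, the function name and the requirement that markers be no larger than the sample are assumptions.

```python
import cv2
import numpy as np
from typing import List

def contains_risk_factor(sample: np.ndarray,
                         marker_images: List[np.ndarray],
                         threshold: float = 0.8) -> bool:
    """Compare an enhanced sample image with each marker image of the
    reference image library; report a risk factor when any marker matches
    above the threshold.  Marker images must be no larger than the sample."""
    for marker in marker_images:
        result = cv2.matchTemplate(sample, marker, cv2.TM_CCOEFF_NORMED)
        if float(result.max()) >= threshold:
            return True
    return False
```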
Further, the neural network model is formed by iterative training on the marked images of the reference image library. The neural network model is provided with a marking unit and a synchronization unit: the marking unit marks the persistence and violence level of the risk factor obtained by training on the sample images, and after marking is finished the synchronization unit synchronizes the sample images to the reference image library.
Further, the marked images are images in which human experts have marked dangerous tools, dangerous actions and dangerous forms in a large number of images containing violent events.
Further, the neural network model trains on the sample images containing risk factors over a plurality of consecutive periods to obtain the persistence characteristics and violence-level characteristics of the sample images containing risk factors.
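For illustration only, a sequence model of the kind the artificial intelligence system could train on consecutive-period samples is sketched below in PyTorch. The architecture, feature dimension and the two output heads (persistence and violence level) are assumptions; the text does not describe the network.

```python
import torch
import torch.nn as nn

class RiskSequenceModel(nn.Module):
    """Toy model: consumes per-frame feature vectors extracted from the
    risk-factor sample images of several consecutive periods and predicts
    (a) whether the danger persists and (b) a violence-level class."""

    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64,
                 num_levels: int = 3):
        super().__init__()
        self.encoder = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.persistence_head = nn.Linear(hidden_dim, 1)      # persists or not
        self.level_head = nn.Linear(hidden_dim, num_levels)   # violence level

    def forward(self, frame_features: torch.Tensor):
        # frame_features: (batch, frames, feature_dim)
        _, last_hidden = self.encoder(frame_features)
        h = last_hidden[-1]                                   # (batch, hidden_dim)
        return torch.sigmoid(self.persistence_head(h)), self.level_head(h)
```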
Further, the camera captures real-time images inside the elevator at set times.
During gray-level detection, a detection template and a sliding template are arranged in each detection unit, a plurality of detection regions are arranged on the detection template, a plurality of gray values are set for each detection region, and the detection regions are configured into a plurality of standard templates based on each gray value. When a sample image is input to the detection template, the sliding template samples the image over the detection regions in a set order and compares each sampled region with the standard templates of that detection region to obtain its gray-value interval relative to the standard templates. The sample image can therefore be divided into many parts, for example 30-300 detection regions. When the gray value detected in a detection region is lower than the set gray value, the corresponding gain unit in the gain matrix at the back end applies gain according to the set gain coefficient, and edge enhancement processing is performed after the gain is completed. Because the sample image is divided into many detection regions, even very fine markers, such as blades, needles and similar objects, can be identified.
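Combining the sketches above, the per-sample flow of gray-level detection, gain and edge enhancement might look as follows; the region grid, standard gray values and set value are placeholders chosen within the 30-300-region range mentioned here, and the single-coefficient gain over the whole image is a simplification of the per-unit behaviour described in the text.

```python
import numpy as np

# Reuses gray_value_intervals, determine_gain_coefficient, stepwise_gain and
# edge_enhance from the illustrative sketches earlier in this description.

def process_sample(sample: np.ndarray,
                   region_grid=(10, 10),                 # 100 detection regions
                   standard_grays=(40.0, 120.0, 200.0),  # placeholder templates
                   set_value: float = 25.0) -> np.ndarray:
    """Measure the gray-value intervals of all detection regions, gain the
    image when a region's interval falls below the set value, then edge-
    enhance the result so thin objects (blades, needles) survive into the
    analysis stage."""
    intervals = gray_value_intervals(sample, region_grid, list(standard_grays))
    smallest = min(intervals)
    if smallest < set_value:   # per the text: interval below set value -> gain
        coefficient = determine_gain_coefficient(smallest, set_value)
        sample = stepwise_gain(sample, coefficient)
    return edge_enhance(sample)
```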
The application relies on identifying dangerous tools, dangerous actions and dangerous forms rather than trajectories: for example, it identifies dangerous tools such as sticks, knives, screwdrivers and blades, dangerous actions such as making a fist, and dangerous forms such as a fist striking. The application focuses on detecting dangerous objects and dangerous behaviour, and on that basis further detects the persistence of the danger and the violence level through a neural network model.
Drawings
The invention is illustrated and described in the following drawings by way of example only and without limiting the scope of the invention:
FIG. 1 is a schematic diagram of the framework of the present invention;
FIG. 2 is a flow chart of the method of the present invention.
Detailed Description
In order to make the objects, technical solutions, design methods, and advantages of the present invention more apparent, the present invention will be further described in detail by specific embodiments with reference to the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Explanation: "risk factor" is a generic term covering dangerous tools, dangerous actions, dangerous forms, and the like.
Example 1
Referring to fig. 1 and 2, the invention provides a system for early warning of a violence incident in an elevator based on image processing, comprising:
a camera for taking real-time images in the elevator at a set time, for example, taking 10 images in one second;
the image set, used for grouping the real-time images collected by the camera in time order according to a set period; for example, with a period of 1 second, an image set comprises 10 real-time images;
the sample image selection module, used for selecting a plurality of images from each image set as sample images according to a set rule, the set rule being executed by an embedded sampling program that uniformly extracts, in time order, a plurality of images from each image set as sample images;
the detection matrix, composed of M × N detection units (M rows by N columns), where M and N are integers greater than or equal to 2; based on the execution of detection tasks, the sample images are input one by one, in time order, to the corresponding detection unit; a detection template and a sliding template are arranged in each detection unit, a plurality of detection regions are arranged on the detection template, a plurality of gray values are set for each detection region, and the detection regions are configured into a plurality of standard templates based on each gray value; when a sample image is input to the detection template, the sliding template samples the image over the detection regions in a set order and compares each sampled region with the standard templates corresponding to that detection region to obtain the gray-value interval of the sampled region relative to the standard templates;
the gain matrix, composed of M × N gain units (M rows by N columns), where M and N are integers greater than or equal to 2, the gain units being arranged in one-to-one correspondence with the detection units; a control module is arranged between the gain matrix and the detection matrix and has a control part, an identification part, a gain determination part and a gain driving part; the gray-value interval of the sample image acquired by a detection unit is input to the identification part, which identifies the feature code of that detection unit and maps it to the corresponding gain unit; the gain determination part, coupled to the identification part, determines the gain coefficient of the sample image based on its gray-value interval, and under the control of the control part the gain driving part is driven to apply stepwise gain to the sample image, according to the gain coefficient, in set unit amounts; the control part is connected to the gain driving part, the identification part and the gain determination part respectively;
the analysis matrix, composed of M × N analysis units (M rows by N columns), where M and N are integers greater than or equal to 2, the analysis units being arranged in one-to-one correspondence with the gain units; each analysis unit loads a reference image library and compares the sample image, one by one, with the marker images set in the reference image library to analyze whether the sample image contains a risk factor;
the artificial intelligence system, comprising a neural network model formed by iterative training on the marked images of the reference image library; the neural network model is provided with a marking unit and a synchronization unit, the marking unit marking the persistence and violence level of the risk factor obtained by training on the sample images, and the synchronization unit synchronizing the sample images to the reference image library after marking is completed; further, the marked images are images in which human experts have marked dangerous tools, dangerous actions and dangerous forms in a large number of images containing violent incidents; further, the neural network model trains on the sample images containing risk factors over a plurality of consecutive periods to obtain their persistence characteristics and violence-level characteristics.
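Pulling the Example 1 modules together, a per-period driver is sketched below. It reuses the illustrative functions introduced in the disclosure section and leaves the early-warning channel as a callable, so it describes the data flow only and is not the patented system.

```python
# Reuses split_into_image_sets, sample_uniformly, process_sample and
# contains_risk_factor from the illustrative sketches above.

def warn_on_period(frames, frames_per_period=10, num_samples=4,
                   marker_images=(), send_warning=print):
    """Split the frame stream into image sets, sample each set, run the
    region/gain/enhancement pipeline, and raise an early warning whenever an
    enhanced sample matches a marker image of the reference image library."""
    risky_samples = []
    for image_set in split_into_image_sets(frames, frames_per_period):
        for sample in sample_uniformly(image_set, num_samples):
            enhanced = process_sample(sample)
            if contains_risk_factor(enhanced, list(marker_images)):
                risky_samples.append(enhanced)
                send_warning("risk factor detected in the elevator car")
    # The collected samples are what the artificial intelligence system would
    # synchronize and train on for persistence and violence level.
    return risky_samples
```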
During gray-level detection, a detection template and a sliding template are arranged in each detection unit, a plurality of detection regions are arranged on the detection template, a plurality of gray values are set for each detection region, and the detection regions are configured into a plurality of standard templates based on each gray value. When a sample image is input to the detection template, the sliding template samples the image over the detection regions in a set order and compares each sampled region with the standard templates of that detection region to obtain its gray-value interval relative to the standard templates. With the method of the application, the sample image can therefore be divided into many parts, for example 30-300 detection regions. When the gray value detected in a detection region is lower than the set gray value, the corresponding gain unit in the gain matrix at the back end applies gain according to the set gain coefficient, and edge enhancement processing is performed after the gain is completed. Because the sample image is divided into many detection regions, even very fine markers, such as blades, needles and similar objects, can be identified.
The application relies on identifying dangerous tools, dangerous actions and dangerous forms rather than motion trajectories: for example, it identifies dangerous tools such as sticks, knives, screwdrivers and blades, dangerous actions such as making a fist, and dangerous forms such as a fist striking. The application focuses on detecting dangerous objects and dangerous behaviour, and on that basis detects the persistence of the danger and the violence level through a neural network model.
Example 2
Referring to fig. 1, the present invention provides a method for early warning of a violent incident in an elevator based on image processing, comprising the following steps: a camera acquires real-time images inside the elevator; the real-time images are divided into image sets according to a set period; a plurality of images are selected from each image set according to a set rule as sample images and input into a detection matrix; each detection unit in the detection matrix performs gray-level detection on the input sample images, and when a gray-value interval is smaller than a set value, the sample image is marked and input into a gain matrix; the gain units arranged in the gain matrix apply stepwise gain to the sample image according to a set unit amount; edge enhancement processing is performed after the gain is completed, and after the edge enhancement processing the sample image is input into an analysis matrix; each analysis unit in the analysis matrix loads a reference image library to analyze whether the sample image contains a risk factor; if a risk factor is contained, early-warning information is sent to a monitoring center; meanwhile, an artificial intelligence system synchronizes the sample images containing risk factors over a plurality of consecutive periods and trains on them with a neural network model to obtain the persistence and violence level of the risk factor.
In the above, the set period is 0.5 to 5 seconds; the real-time images acquired by the camera inside the elevator within one set period form one image set.
In the above, the set rule is executed by an embedded sampling program whose sampling method is to uniformly extract, in time order, a plurality of images from each image set as sample images.
In the above, the detection matrix is composed of M × N detection units (M rows by N columns), where M and N are integers greater than or equal to 2. Based on the execution of detection tasks, the sample images are input one by one, in time order, to the corresponding detection unit. A detection template and a sliding template are arranged in each detection unit; a plurality of detection regions are arranged on the detection template, a plurality of gray values are set for each detection region, and the detection regions are configured into a plurality of standard templates based on each gray value. When a sample image is input to the detection template, the sliding template samples the image over the detection regions in a set order and compares each sampled region with the standard templates corresponding to that detection region to obtain the gray-value interval of the sampled region relative to the standard templates.
In the above, the gain matrix is composed of M × N gain units (M rows by N columns), where M and N are integers greater than or equal to 2, and the gain units are arranged in one-to-one correspondence with the detection units. A control module is arranged between the gain matrix and the detection matrix and has a control part, an identification part, a gain determination part and a gain driving part. The gray-value interval of the sample image acquired by a detection unit is input to the identification part, which identifies the feature code of that detection unit and maps it to the corresponding gain unit. The gain determination part, coupled to the identification part, determines the gain coefficient of the sample image based on its gray-value interval; under the control of the control part, the gain driving part is driven to apply stepwise gain to the sample image, according to the gain coefficient, in set unit amounts. The control part is connected to the gain driving part, the identification part and the gain determination part respectively.
In the above, the analysis matrix is composed of M × N analysis units (M rows by N columns), where M and N are integers greater than or equal to 2, and the analysis units are arranged in one-to-one correspondence with the gain units. Each analysis unit loads a reference image library and compares the sample image, one by one, with the marker images set in the reference image library to analyze whether the sample image contains a risk factor.
In the above, the neural network model is formed by iterative training on the marked images of the reference image library. The neural network model is provided with a marking unit and a synchronization unit: the marking unit marks the persistence and violence level of the risk factor obtained by training on the sample images, and the synchronization unit synchronizes the sample images to the reference image library after marking is completed.
In the above, the marked images are images in which human experts have marked dangerous tools, dangerous actions and dangerous forms in a large number of images containing violent incidents.
In the above, the neural network model trains on the sample images containing risk factors over a plurality of consecutive periods to obtain the persistence characteristics and violence-level characteristics of the sample images containing risk factors.
In the above, the camera captures real-time images inside the elevator at set times.
During gray-level detection, a detection template and a sliding template are arranged in each detection unit, a plurality of detection regions are arranged on the detection template, a plurality of gray values are set for each detection region, and the detection regions are configured into a plurality of standard templates based on each gray value. When a sample image is input to the detection template, the sliding template samples the image over the detection regions in a set order and compares each sampled region with the standard templates of that detection region to obtain its gray-value interval relative to the standard templates. The sample image can therefore be divided into many parts, for example 30-300 detection regions. When the gray value detected in a detection region is lower than the set gray value, the corresponding gain unit in the gain matrix at the back end applies gain according to the set gain coefficient, and edge enhancement processing is performed after the gain is completed. Because the sample image is divided into many detection regions, even very fine markers, such as blades, needles and similar objects, can be identified.
The application relies on identifying dangerous tools, dangerous actions and dangerous forms rather than trajectories: for example, it identifies dangerous tools such as sticks, knives, screwdrivers and blades, dangerous actions such as making a fist, and dangerous forms such as a fist striking. The application focuses on detecting dangerous objects and dangerous behaviour, and on that basis detects the persistence of the danger and the violence level through a neural network model.
Having described embodiments of the present invention, the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein has been chosen to best explain the principles of the embodiments, their practical application or their improvement over technology available in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A method for early warning of violent incidents in an elevator based on image processing, characterized by comprising the following steps: a camera acquires real-time images inside the elevator; the real-time images are divided into image sets according to a set period; a plurality of images are selected from each image set according to a set rule as sample images and input into a detection matrix; each detection unit in the detection matrix performs gray-level detection on the input sample images, and when a gray-value interval is smaller than a set value, the sample image is marked and input into a gain matrix; the gain units arranged in the gain matrix apply stepwise gain to the sample image according to a set unit amount; edge enhancement processing is performed after the gain is completed, and after the edge enhancement processing the sample image is input into an analysis matrix; each analysis unit in the analysis matrix loads a reference image library to analyze whether the sample image contains a risk factor; if a risk factor is contained, early-warning information is sent to a monitoring center; and meanwhile, an artificial intelligence system synchronizes the sample images containing risk factors over a plurality of consecutive periods and trains on them with a neural network model to obtain the persistence and violence level of the risk factor.
2. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 1, wherein the set period is 0.5-5 seconds; the real-time images acquired by the camera inside the elevator within one set period form one image set.
3. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 1, wherein the set rule is executed by an embedded sampling program whose sampling method is to uniformly extract, in time order, a plurality of images from each image set as sample images.
4. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 1, wherein the detection matrix is composed of M × N detection units, M and N being integers greater than or equal to 2; based on the execution of detection tasks, the sample images are input one by one, in time order, to the corresponding detection unit; a detection template and a sliding template are arranged in each detection unit, a plurality of detection regions are arranged on the detection template, a plurality of gray values are set for each detection region, and the detection regions are configured into a plurality of standard templates based on each gray value; and when a sample image is input to the detection template, the sliding template samples the image over the detection regions in a set order and compares each sampled region with the standard templates corresponding to that detection region to obtain the gray-value interval of the sampled region relative to the standard templates.
5. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 1, wherein the gain matrix is composed of M × N gain units, M and N being integers greater than or equal to 2, the gain units being arranged in one-to-one correspondence with the detection units; a control module is arranged between the gain matrix and the detection matrix and has a control part, an identification part, a gain determination part and a gain driving part; the gray-value interval of the sample image acquired by a detection unit is input to the identification part, which identifies the feature code of that detection unit and maps it to the corresponding gain unit; the gain determination part is coupled to the identification part and determines the gain coefficient of the sample image based on its gray-value interval; under the control of the control part, the gain driving part is driven to apply stepwise gain to the sample image in set unit amounts, and edge enhancement processing is performed after the gain is completed; and the control part is connected to the gain driving part, the identification part and the gain determination part respectively.
6. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 1, wherein the analysis matrix is composed of M × N analysis units, M and N being integers greater than or equal to 2, the analysis units being arranged in one-to-one correspondence with the gain units; and each analysis unit loads a reference image library and compares the sample image, one by one, with the marker images set in the reference image library to analyze whether the sample image contains a risk factor.
7. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 1, wherein the neural network model is formed by iterative training on the marked images of the reference image library; and the neural network model is provided with a marking unit and a synchronization unit, the marking unit marking the persistence and violence level of the risk factor obtained by training on the sample images, and the synchronization unit synchronizing the sample images to the reference image library after marking is completed.
8. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 6, wherein the marked images are images in which human experts have marked dangerous tools, dangerous actions and dangerous forms in a large number of images containing violent incidents.
9. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 1 or 7, wherein the neural network model trains on the sample images containing risk factors over a plurality of consecutive periods to obtain the persistence characteristics and violence-level characteristics of the sample images containing risk factors.
10. The method for early warning of violent incidents in an elevator based on image processing as claimed in claim 1, wherein the camera captures real-time images inside the elevator at set times.
CN202211424716.5A 2022-11-15 2022-11-15 Method for early warning violent event in elevator based on image processing Active CN115512306B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211424716.5A CN115512306B (en) 2022-11-15 2022-11-15 Method for early warning violent event in elevator based on image processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211424716.5A CN115512306B (en) 2022-11-15 2022-11-15 Method for early warning violent event in elevator based on image processing

Publications (2)

Publication Number Publication Date
CN115512306A true CN115512306A (en) 2022-12-23
CN115512306B CN115512306B (en) 2023-04-25

Family

ID=84514311

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211424716.5A Active CN115512306B (en) 2022-11-15 2022-11-15 Method for early warning violent event in elevator based on image processing

Country Status (1)

Country Link
CN (1) CN115512306B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115973865A (en) * 2023-01-30 2023-04-18 成都睿瞳科技有限责任公司 Elevator stop-start control method based on image detection

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1400917A1 (en) * 1998-09-24 2004-03-24 Qinetiq Limited improvements relating to pattern recognition and other inventions
US20070262985A1 (en) * 2006-05-08 2007-11-15 Tatsumi Watanabe Image processing device, image processing method, program, storage medium and integrated circuit
CN104691559A (en) * 2015-03-24 2015-06-10 江苏科技大学 Auxiliary system for monitoring safety of region between platform safety door and vehicle door and realization method of auxiliary system
CN105000443A (en) * 2015-08-04 2015-10-28 董岩 Method for early warning of violent incidents in elevator based on image processing
CN105016163A (en) * 2015-08-02 2015-11-04 何国梁 Elevator interior corner violence alarm platform based on wireless communication
CN105060052A (en) * 2015-08-04 2015-11-18 董岩 In-elevator violence incident early-warning system based on image processing
CN109523479A (en) * 2018-11-10 2019-03-26 东莞理工学院 A kind of bridge pier surface gaps visible detection method
US20190370609A1 (en) * 2016-12-16 2019-12-05 Clarion Co., Ltd. Image processing apparatus and external environment recognition apparatus
CN112396637A (en) * 2021-01-19 2021-02-23 南京野果信息技术有限公司 Dynamic behavior identification method and system based on 3D neural network

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1400917A1 (en) * 1998-09-24 2004-03-24 Qinetiq Limited improvements relating to pattern recognition and other inventions
US20070262985A1 (en) * 2006-05-08 2007-11-15 Tatsumi Watanabe Image processing device, image processing method, program, storage medium and integrated circuit
CN104691559A (en) * 2015-03-24 2015-06-10 江苏科技大学 Auxiliary system for monitoring safety of region between platform safety door and vehicle door and realization method of auxiliary system
CN105016163A (en) * 2015-08-02 2015-11-04 何国梁 Elevator interior corner violence alarm platform based on wireless communication
CN105000443A (en) * 2015-08-04 2015-10-28 董岩 Method for early warning of violent incidents in elevator based on image processing
CN105060052A (en) * 2015-08-04 2015-11-18 董岩 In-elevator violence incident early-warning system based on image processing
US20190370609A1 (en) * 2016-12-16 2019-12-05 Clarion Co., Ltd. Image processing apparatus and external environment recognition apparatus
CN109523479A (en) * 2018-11-10 2019-03-26 东莞理工学院 A kind of bridge pier surface gaps visible detection method
CN112396637A (en) * 2021-01-19 2021-02-23 南京野果信息技术有限公司 Dynamic behavior identification method and system based on 3D neural network

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115973865A (en) * 2023-01-30 2023-04-18 成都睿瞳科技有限责任公司 Elevator stop-start control method based on image detection
CN115973865B (en) * 2023-01-30 2023-10-20 成都睿瞳科技有限责任公司 Elevator stop-start control method based on image detection

Also Published As

Publication number Publication date
CN115512306B (en) 2023-04-25

Similar Documents

Publication Publication Date Title
Ghosh et al. Real-time object recognition and orientation estimation using an event-based camera and CNN
CN103810717B (en) A kind of human body behavioral value method and device
CN101253535B (en) Image retrieving apparatus and image search method
CN110991406B (en) RSVP electroencephalogram characteristic-based small target detection method and system
US20170109879A1 (en) Computer-implemented methods, computer-readable media, and systems for tracking a plurality of spermatozoa
EP1560161B1 (en) Method and system for searching for events in video surveillance
CN110363131B (en) Abnormal behavior detection method, system and medium based on human skeleton
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human body behavior recognition method
CN110070029B (en) Gait recognition method and device
CN109934127B (en) Pedestrian identification and tracking method based on video image and wireless signal
CN107292252A (en) A kind of personal identification method of autonomous learning
CN109359536A (en) Passenger behavior monitoring method based on machine vision
CN104392468A (en) Improved visual background extraction based movement target detection method
CN101719216A (en) Movement human abnormal behavior identification method based on template matching
CN109215010B (en) Image quality judgment method and robot face recognition system
CN110032932B (en) Human body posture identification method based on video processing and decision tree set threshold
CN111505632A (en) Ultra-wideband radar action attitude identification method based on power spectrum and Doppler characteristics
CN115512306A (en) Method for early warning of violence events in elevator based on image processing
CN115661698A (en) Escalator passenger abnormal behavior detection method, system, electronic device and storage medium
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN111860117A (en) Human behavior recognition method based on deep learning
KR20190050551A (en) Apparatus and method for recognizing body motion based on depth map information
CN108920699B (en) Target identification feedback system and method based on N2pc
CN106548195A (en) A kind of object detection method based on modified model HOG ULBP feature operators
CN108491796A (en) A kind of time domain period point target detecting method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant