CN110930657A - Danger warning method and related equipment - Google Patents

Danger warning method and related equipment

Info

Publication number
CN110930657A
Authority
CN
China
Prior art keywords
images
heavy machinery
video information
sampling
level
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911217042.XA
Other languages
Chinese (zh)
Inventor
田岱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wanyi Technology Co Ltd
Original Assignee
Wanyi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wanyi Technology Co Ltd filed Critical Wanyi Technology Co Ltd
Priority to CN201911217042.XA priority Critical patent/CN110930657A/en
Publication of CN110930657A publication Critical patent/CN110930657A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 - Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 - Status alarms
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Alarm Systems (AREA)

Abstract

The application discloses a danger warning method applied to electronic equipment, which comprises the following steps: acquiring first video information of a first area, wherein the first area is a monitoring area of a camera; sampling the first video information to obtain N first images, wherein N is a positive integer; performing image recognition on the N first images to obtain a first recognition result; determining a danger level of the heavy machinery when the first recognition result indicates that heavy machinery exists in the monitored area; and controlling an audio output device to output warning information corresponding to the danger level, wherein the warning information prompts people to stay away from the monitored area. By adopting the embodiments of the application, safety accidents caused by heavy machinery can be reduced.

Description

Danger warning method and related equipment
Technical Field
The present application relates to the field of electronic technologies, and in particular, to a danger warning method and related equipment.
Background
At present, the rapid development of the real estate industry has led to a steady increase in the number of construction sites. During construction on a building site, most engineering machines are large, so a driver operating such a machine has visual blind spots, which easily causes safety accidents.
Disclosure of Invention
The embodiment of the application provides a danger warning method and related equipment, which are used for reducing the occurrence probability of safety accidents.
In a first aspect, an embodiment of the present application provides a danger warning method applied to an electronic device, where the method includes:
acquiring first video information of a first area, wherein the first area is a monitoring area of a camera;
sampling the first video information to obtain N first images, wherein N is a positive integer;
performing image recognition on the N first images to obtain a first recognition result;
determining a danger level of the heavy machinery when the first recognition result indicates that heavy machinery exists in the monitored area;
and controlling an audio output device to output warning information corresponding to the danger level, wherein the warning information prompts people to stay away from the monitored area.
In a second aspect, an embodiment of the present application provides an apparatus, including:
an acquiring unit, configured to acquire first video information of a first area, wherein the first area is a monitoring area of a camera;
a sampling unit, configured to sample the first video information to obtain N first images, wherein N is a positive integer;
a recognition unit, configured to perform image recognition on the N first images to obtain a first recognition result;
a determining unit, configured to determine a danger level of the heavy machinery when the first recognition result indicates that heavy machinery exists in the monitored area;
and an output unit, configured to control an audio output device to output warning information corresponding to the danger level, wherein the warning information prompts people to stay away from the monitored area.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in the method according to the first aspect of the embodiment of the present application.
In a fourth aspect, the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the method according to the first aspect of the present application.
In a fifth aspect, the present application provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps described in the method according to the first aspect of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiment of the application, the electronic device first acquires the first video information of the first area through the camera, then samples the first video information to obtain N first images, then performs image recognition on the N first images, determines the danger level of the heavy machinery when the recognition result indicates that the heavy machinery exists in the monitored area, and finally outputs corresponding warning information according to the danger level.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1A is a schematic structural diagram of a hazard warning system according to an embodiment of the present disclosure;
fig. 1B is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a method for warning danger according to an embodiment of the present disclosure;
fig. 3 is a schematic flow chart of another hazard warning method according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of another electronic device provided in the embodiment of the present application;
fig. 5 is a schematic structural diagram of a hazard warning device according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The following are detailed below.
The terms "first," "second," "third," and "fourth," etc. in the description and claims of this application and in the accompanying drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
Hereinafter, some terms in the present application are explained to facilitate understanding by those skilled in the art.
Referring to fig. 1A, fig. 1A is a schematic diagram of a danger warning system according to an embodiment of the present disclosure, which includes an electronic device, a user device and heavy machinery, where the electronic device is used for monitoring the heavy machinery, and the user device is associated with the electronic device. The electronic device, user device, and heavy machinery shown in fig. 1A are examples only and do not limit the embodiments of the present application.
The heavy machinery includes, for example, an excavator, a tractor, a road roller, shovel-and-transport machinery, and other heavy machinery.
Electronic devices may include various handheld devices, vehicle mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem with wireless communication capabilities, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal equipment (terminal device), and so forth.
As shown in fig. 1B, fig. 1B is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor, a Memory, a signal processor, a transceiver, a display screen, a speaker, an audio output module, a communication interface, a Random Access Memory (RAM), a camera, a sensor, and the like. The storage, the signal processor, the display screen, the loudspeaker, the audio output module, the RAM, the camera, the sensor and the communication interface are connected with the processor, and the transceiver is connected with the signal processor.
The display screen may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, an Active-Matrix Organic Light-Emitting Diode (AMOLED) display, or the like.
The camera may be a common camera, an infrared camera, or an intelligent camera, and is not limited herein. The camera may be a front camera or a rear camera, and is not limited herein.
Wherein the sensor comprises at least one of: light-sensitive sensors, gyroscopes, infrared proximity sensors, fingerprint sensors, pressure sensors, etc. Among them, the light sensor, also called an ambient light sensor, is used to detect the ambient light brightness. The light sensor may include a light sensitive element and an analog to digital converter. The photosensitive element is used for converting collected optical signals into electric signals, and the analog-to-digital converter is used for converting the electric signals into digital signals. Optionally, the light sensor may further include a signal amplifier, and the signal amplifier may amplify the electrical signal converted by the photosensitive element and output the amplified electrical signal to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photocell.
The processor is the control center of the electronic device. It uses various interfaces and lines to connect all parts of the electronic device, and it performs the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring the electronic device as a whole.
The processor may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory is used for storing software programs and/or modules, and the processor executes various functional applications and data processing of the electronic device by running the software programs and/or modules stored in the memory. The memory mainly comprises a program storage area and a data storage area, wherein the program storage area may store an operating system, a software program required by at least one function, and the like; the data storage area may store data created according to use of the electronic device, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The audio output device may be, for example, a speaker or other devices capable of outputting audio.
The following describes embodiments of the present application in detail.
As shown in fig. 2, a danger warning method provided in an embodiment of the present application is applied to the electronic device, and specifically includes the following steps:
step 201: the method comprises the steps of obtaining first video information of a first area, wherein the first area is a monitoring area of a camera.
The duration of the first video information may be fixed or random, and may be, for example, 5min, 7min, 8min, and the like.
The first video information may be video information acquired in a first time period or video information acquired in a second time period, the termination time of the second time period is the current time, and the termination time of the first time period is earlier than the current time.
The camera monitors the first area within a preset time period, and the preset time period may be an operating time period (for example, 8:30 to 12:00 and 14:00 to 19:00), a non-operating time period (for example, 22:00 to 7:00 and 12:00 to 14:00), or a set fixed time period (for example, 7:30 to 20:30 or 8:20 to 21:00), which is not limited herein.
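By way of illustration only (the disclosure names no implementation), a check that the current time falls within such a preset time period could look like the following Python sketch; the period boundaries and the function name are assumed example values, not values fixed by the disclosure.

    from datetime import datetime, time

    # Assumed example operating periods; not values from the disclosure.
    MONITORING_PERIODS = [(time(8, 30), time(12, 0)), (time(14, 0), time(19, 0))]

    def in_monitoring_period(now=None):
        """Return True if the current wall-clock time falls inside any preset period."""
        current = (now or datetime.now()).time()
        return any(start <= current <= end for start, end in MONITORING_PERIODS)

    # The camera would only be polled for first video information while this returns True.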
Step 202: and sampling the first video information to obtain N first images, wherein N is a positive integer.
The sampling of the first video information may be performed at a sampling interval or at random.
Step 203: and carrying out image recognition on the N first images to obtain a first recognition result.
Wherein, before performing step 203, the method further comprises:
the electronic equipment determines the resolution of N first images; if a first image with the resolution smaller than or equal to a fourth threshold exists in the N first images, performing image processing on the first image smaller than or equal to the fourth threshold, wherein the resolution of the processed first image smaller than or equal to the fourth threshold is larger than the fourth threshold.
And the electronic equipment can perform image processing on the first image which is less than or equal to the fourth threshold value by using an image enhancement and restoration method. The fourth threshold may be, for example, 300 pixels/ft, 350 pixels/ft, 400 pixels/ft, or other values.
Therefore, before image recognition, the image information is processed to improve the resolution of the image information, and further improve the accuracy of recognition.
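As a minimal sketch of this preprocessing step, assuming OpenCV is available and treating the fourth threshold as a minimum pixel dimension for simplicity, a low-resolution first image could be upscaled as follows; plain bicubic interpolation stands in for the image enhancement and restoration method, which the disclosure does not specify.

    import cv2  # assumption: OpenCV is available; the disclosure names no library

    FOURTH_THRESHOLD = 300  # assumed minimum size in pixels for the shorter image side

    def ensure_min_resolution(first_image, min_side=FOURTH_THRESHOLD):
        """Upscale a first image whose shorter side is at or below the threshold."""
        h, w = first_image.shape[:2]
        short_side = min(h, w)
        if short_side <= min_side:
            scale = (min_side + 1) / short_side  # make the processed size exceed the threshold
            first_image = cv2.resize(first_image, None, fx=scale, fy=scale,
                                     interpolation=cv2.INTER_CUBIC)
        return first_image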
Step 204: and determining the danger level of the heavy machinery under the condition that the first identification result is that the heavy machinery exists in the monitored area.
The heavy machine may be a heavy machine applied to a construction site, or a heavy machine applied to agriculture.
The danger level may be divided according to the danger degree of the heavy machinery, according to its volume, or according to both its danger degree and its volume.
Step 205: and controlling audio output equipment to output warning information corresponding to the danger level, wherein the warning information is used for prompting to be far away from the monitoring area.
The warning information corresponding to different danger levels may be the same or different.
There may be one or more pieces of warning information corresponding to each danger level.
The volume at which the audio output device outputs the warning information may be the same or different for different danger levels.
Optionally, according to the danger level, the audio output device outputs warning information corresponding to the danger level by using the corresponding volume.
The danger level here is the currently determined danger level. Specifically, the volume of the audio output device is determined based on the danger level and a first mapping relationship. The first mapping relationship is shown in Table 1; as can be seen from Table 1, the volume of the audio output device is determined according to the danger level.
TABLE 1
Danger level          Volume of audio output device
First danger level    75 decibels
Second danger level   60 decibels
Third danger level    45 decibels
……                    ……
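A possible sketch of the Table 1 lookup is given below; the decibel values mirror the table, while the audio device methods are hypothetical placeholders rather than a real audio API.

    # Danger level -> output volume in decibels, mirroring Table 1.
    VOLUME_BY_DANGER_LEVEL = {1: 75, 2: 60, 3: 45}

    def output_warning(danger_level, audio_device):
        """Output the warning at the volume mapped to the given danger level."""
        volume = VOLUME_BY_DANGER_LEVEL.get(danger_level, 45)  # default volume is an assumption
        audio_device.set_volume(volume)                         # hypothetical device method
        audio_device.play("Please stay away from this area")    # hypothetical device method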
It can be seen that, in the embodiment of the application, the electronic device first acquires the first video information of the first area through the camera, then samples the first video information to obtain N first images, then performs image recognition on the N first images, determines the danger level of the heavy machinery when the recognition result indicates that the heavy machinery exists in the monitored area, and finally outputs corresponding warning information according to the danger level.
In an implementation manner of the present application, the sampling the first video information to obtain N first images includes:
determining a sampling interval, and sampling the first video information based on the sampling interval to obtain the N first images.
Wherein the sampling interval includes a time length interval, a number of video frames interval, and a random sampling interval.
When the first video information is sampled according to the time length interval, it may be sampled at uniform time length intervals, at equal-difference time length intervals, or at equal-ratio time length intervals, and the time of the first sample may be fixed or randomly selected.
When the first video information is sampled according to the video frame number interval, it may be sampled at uniform video frame number intervals, at equal-difference video frame number intervals, or at equal-ratio video frame number intervals, and the first sampled video frame may be fixed or randomly selected.
For example, if the duration of the first video information is 4 min and the first video information is sampled at uniform time length intervals of 30 s, 8 first images are obtained.
For example, if the duration of the first video information is 4 min, the first video information is sampled at equal-difference time length intervals, the sampling tolerance is 40 s, and the time of the first sample is 2 min, then sampling at 2 min, 2.4 min, 2.8 min, 3.2 min, 3.6 min and 4 min yields 6 first images.
For example, if the duration of the first video information is 15 min, the first video information is sampled at equal-ratio time length intervals, the sampling equal-ratio value is 40 s, and the time of the first sample is 2 min, then sampling at 2 min and 10 min yields 2 first images.
Alternatively, if the duration of the first video information is greater than or equal to the first fixed value, the sampling may be performed in an equal ratio time length interval manner, and if the duration of the first video information is less than the first fixed value, the sampling may be performed in a uniform interval manner or an equal difference time length interval manner.
For example, the first video information includes 30 frames of images, and the first video information is sampled at uniform intervals of the number of video frames, where the interval of the number of video frames is 2, so that 15 first images can be obtained.
For example, if the first video information includes 20 frames of images, the first video information is sampled at equal-difference video frame number intervals, the sampling tolerance is 4 frames, and the first sampled video frame is the 2nd frame, then the 2nd, 6th, 10th, 14th and 18th frames are sampled, and 5 first images are obtained.
For example, if the first video information includes 40 frames of images, the first video information is sampled at equal-ratio video frame number intervals, the sampling equal-ratio value is 4, and the first sampled video frame is the 2nd frame, then the 2nd, 8th and 32nd frames are sampled, and 3 first images are obtained.
Alternatively, if the number of frames of the first video information is greater than a second fixed value, the sampling may be performed at equal-ratio video frame number intervals, and if the number of frames of the first video information is less than the second fixed value, the sampling may be performed at uniform intervals or at equal-difference video frame number intervals.
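A possible way to compute which frames to sample under the three interval types is sketched below; the helper names are illustrative, and the comments reproduce the video-frame-number examples above.

    def uniform_indices(total_frames, step):
        """Uniform video frame number interval: every `step`-th frame starting at frame 0."""
        return list(range(0, total_frames, step))           # e.g. 30 frames, step 2 -> 15 first images

    def equal_difference_indices(total_frames, first, tolerance):
        """Equal-difference interval: sampled frame numbers form an arithmetic sequence."""
        return list(range(first, total_frames, tolerance))  # e.g. 20 frames, first 2, tolerance 4 -> 2, 6, 10, 14, 18

    def equal_ratio_indices(total_frames, first, ratio):
        """Equal-ratio interval: each sampled frame number is `ratio` times the previous one."""
        indices, idx = [], first
        while idx < total_frames:
            indices.append(idx)
            idx *= ratio
        return indices                                      # e.g. 40 frames, first 2, ratio 4 -> 2, 8, 32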
It can be seen that, in the embodiment of the present application, the sampling interval is determined first, and then the first video information is sampled to obtain N first images, so that the complexity of the operation is reduced.
In an implementation manner of the present application, the performing image recognition on the N first images to obtain a first recognition result includes:
respectively carrying out mechanical tool identification on the N first images to obtain M mechanical tool images, wherein M is less than or equal to N;
matching the M mechanical tool images with K mechanical tool templates respectively to obtain a first identification result, wherein different mechanical tool templates correspond to different mechanical tool types, and K is a positive integer;
if the M mechanical tool images are not matched with the K mechanical tool templates, the first identification result indicates that no heavy machinery exists in the monitored area; and if at least one of the M mechanical tool images is matched with at least one of the K mechanical tool templates, the first identification result indicates that the heavy machinery exists in the monitored area.
It can be seen that, in the embodiment of the application, mechanical tool identification is first performed on the N first images to obtain M mechanical tool images, and the M mechanical tool images are then matched against the K mechanical tool templates to determine whether heavy machinery exists in the monitored area, which helps improve the speed of detecting heavy machinery.
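As a hedged sketch of this matching step (the disclosure does not name a matching algorithm), OpenCV template matching with an assumed score threshold of 0.8 could be used as follows.

    import cv2
    import numpy as np

    def heavy_machinery_present(tool_images, templates, score_threshold=0.8):
        """Return True if any of the M mechanical tool images matches any of the K templates."""
        for img in tool_images:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            for tmpl in templates:
                tmpl_gray = cv2.cvtColor(tmpl, cv2.COLOR_BGR2GRAY)
                if gray.shape[0] < tmpl_gray.shape[0] or gray.shape[1] < tmpl_gray.shape[1]:
                    continue  # the template must not be larger than the image being searched
                scores = cv2.matchTemplate(gray, tmpl_gray, cv2.TM_CCOEFF_NORMED)
                if np.max(scores) >= score_threshold:
                    return True  # at least one image matched at least one template
        return False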
In an implementation of the present application, the determining the danger level of the heavy machinery includes:
determining a first risk degree of the heavy machinery and a first volume of the heavy machinery;
determining the danger level based on the first risk degree and the first volume.
The first risk degree may be related to the type of heavy machinery.
The type of heavy machinery here is the currently determined type. Specifically, the first risk degree corresponding to the heavy machinery is determined based on a second mapping relationship between the type of heavy machinery and the first risk degree. The second mapping relationship is shown in Table 2; as can be seen from Table 2, the first risk degrees corresponding to different heavy machinery may be the same or different.
TABLE 2
Heavy machinery    First risk degree
Lifting machine    5
Bulldozer          4
Tractor            3
……                 ……
It can be seen that, in the embodiment of the present application, the first risk degree is determined according to the type of the heavy machinery, which can effectively reduce the probability of safety accidents caused by heavy machinery.
In an implementation of the present application, the determining the danger level based on the first risk degree and/or the first volume includes:
if the first risk degree is greater than or equal to a first threshold, the heavy machinery is at a first danger level;
if the first risk degree is greater than or equal to a second threshold and the first volume is greater than or equal to a third threshold, the heavy machinery is at a second danger level, where the second threshold is smaller than the first threshold and the first danger level is higher than the second danger level;
if the first risk degree is smaller than the second threshold and the first volume is smaller than the third threshold, the heavy machinery is at a third danger level, where the second danger level is higher than the third danger level.
The first threshold may be 2, 4, 5, or 8, among other values.
Optionally, determining the danger level based on the first risk degree and the first volume comprises:
determining, based on a first formula, the first risk degree and the first volume, an evaluation value used for evaluating the danger level of the heavy machinery;
determining the danger level of the heavy machinery based on the determined evaluation value.
The first formula is T = o × A + p × B, where T is the evaluation value, o and p are weights whose sum equals 1, A is the first risk degree, and B is the first volume; o may be greater than or smaller than p, which is not limited herein.
Wherein determining the hazard level of the heavy machinery based on the determined evaluation value includes:
if the evaluation value is greater than or equal to a fifth threshold, the heavy machinery is at the first danger level; if the evaluation value is greater than or equal to a sixth threshold, the heavy machinery is at the second danger level; and if the evaluation value is greater than or equal to a seventh threshold, the heavy machinery is at the third danger level, wherein the fifth threshold is greater than the sixth threshold, and the sixth threshold is greater than the seventh threshold.
For example, if the first risk degree of heavy machine A is 5, the first volume is 32 square meters, the weight o is 0.6, p is 0.4, and the fifth threshold is 12, then the evaluation value is 0.6 × 5 + 0.4 × 32 = 15.8, and heavy machine A is at the first danger level.
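A sketch of this evaluation-value approach T = o × A + p × B is given below; the weights and the fifth threshold reproduce the worked example above, while the sixth and seventh thresholds are assumed values.

    # Weights from the worked example (o + p = 1); the sixth and seventh thresholds are assumptions.
    O_WEIGHT, P_WEIGHT = 0.6, 0.4
    FIFTH_THRESHOLD, SIXTH_THRESHOLD, SEVENTH_THRESHOLD = 12, 8, 4

    def danger_level_from_evaluation(first_risk_degree, first_volume):
        """Return 1, 2 or 3 for the first, second or third danger level, or None if below all thresholds."""
        t = O_WEIGHT * first_risk_degree + P_WEIGHT * first_volume
        if t >= FIFTH_THRESHOLD:
            return 1
        if t >= SIXTH_THRESHOLD:
            return 2
        if t >= SEVENTH_THRESHOLD:
            return 3
        return None

    # Worked example: risk degree 5, volume 32 -> 0.6 * 5 + 0.4 * 32 = 15.8 -> first danger level.
    assert danger_level_from_evaluation(5, 32) == 1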
It can be seen that, in the embodiment of the present application, the danger level is determined from the first risk degree and the first volume, which can effectively reduce the probability of safety accidents caused by heavy machinery.
In an implementation manner of the present application, after the controlling the audio output device to output the warning information corresponding to the danger level, the method further includes:
controlling the camera to perform tracking shooting on the heavy machinery to obtain second video information;
sampling the second video information to obtain S second images;
performing face recognition or portrait recognition on the S second images to obtain a second recognition result;
and when the second recognition result indicates that a person exists in the monitored area, turning up the volume of the audio output device and controlling the audio output device to output the warning information.
The tracking shooting of the heavy machinery may adopt a centroid tracking algorithm, a correlation tracking algorithm, an edge tracking algorithm, and the like, which is not limited in this application.
The sampling of the second video information may be performed at a sampling interval or at random.
Wherein the sampling interval includes a time length interval, a number of video frames interval, and a random sampling interval.
When the second video information is sampled according to the time length interval, it may be sampled at uniform time length intervals, at equal-difference time length intervals, or at equal-ratio time length intervals, and the time of the first sample may be fixed or randomly selected.
When the second video information is sampled according to the video frame number interval, it may be sampled at uniform video frame number intervals, at equal-difference video frame number intervals, or at equal-ratio video frame number intervals, and the first sampled video frame may be fixed or randomly selected.
The face recognition or portrait recognition of the S second images includes one or more of the following: face detection, face key point positioning, face correction, and face feature extraction.
It can be seen that, in the embodiment of the application, the heavy machinery is tracked and shot, the obtained second video information is sampled to obtain S second images, and face recognition is performed on the S second images, which improves the performance of the electronic device.
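A minimal sketch of the person check on the S second images is given below, using the Haar face cascade bundled with OpenCV as a stand-in detector; the disclosure does not specify a particular face or portrait recognition model, and the volume adjustment at the end is a hypothetical device method.

    import cv2

    # Haar cascade shipped with opencv-python; a stand-in for the unspecified recognition model.
    _FACE_CASCADE = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def person_present(second_images):
        """Return True if a face is detected in any of the S sampled second images."""
        for img in second_images:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = _FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(faces) > 0:
                return True
        return False

    # if person_present(second_images): audio_device.turn_up_volume()  # hypothetical device method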
In an implementation manner of the present application, after the controlling the audio output device to output the warning information corresponding to the danger level, the method further includes:
controlling the camera to perform tracking shooting on the heavy machinery to obtain third video information;
and sending the third video information to the user equipment associated with the electronic equipment.
The third video information may be video information shot at the current time, or may be video information shot at the previous time, which is not limited herein.
The electronic device may be associated with the user equipment through a preset keyword, or through built-in information of the user equipment (e.g., device identification information or the MAC address used for access).
Optionally, the distance between the person and the heavy machinery is calculated; if the distance is smaller than a first distance, the electronic device starts a call function to establish a call connection with the associated user equipment.
It can be seen that, in the embodiment of the application, the obtained third video information is sent to the associated user equipment by tracking and shooting the heavy machinery, so that the safety factor of the location of the heavy machinery is improved.
Referring to fig. 3, fig. 3 is a schematic flow chart of another hazard warning method according to an embodiment of the present application, and the method is applied to the electronic device, and specifically includes the following steps:
step 301: the method comprises the steps of obtaining first video information of a first area, wherein the first area is a monitoring area of a camera.
Step 302: determining a sampling interval, and sampling the first video information based on the sampling interval to obtain the N first images.
Step 303: and respectively carrying out mechanical tool identification on the N first images to obtain M mechanical tool images, wherein M is less than or equal to N.
Step 304: and respectively matching the M mechanical tool images with K mechanical tool templates to obtain a first identification result, wherein different mechanical tool templates correspond to different mechanical tool types, and K is a positive integer.
Step 305: whether the first identification result is that heavy machinery exists in the monitored area or not.
If yes, go to step 306;
if not, no operation is executed.
If the M mechanical tool images are not matched with the K mechanical tool templates, the first identification result indicates that the heavy machinery does not exist in the monitored area; and if at least one of the M mechanical tool images is matched with at least one of the K mechanical tool templates, the first identification result indicates that the heavy machinery exists in the monitored area.
Step 306: a first risk level of the heavy machinery and a first volume of the heavy machinery are determined.
Step 307: it is determined whether the first risk level is greater than or equal to a first threshold.
If yes, go to step 308;
if not, go to step 309.
Step 308: the heavy machinery is at a first hazard level. After step 308, step 313 is performed.
Step 309: and judging whether the first danger degree is greater than or equal to a second threshold value or not, and whether the first volume is greater than or equal to a third threshold value or not, wherein the second threshold value is smaller than the first threshold value.
If yes, go to step 310;
if not, go to step 311.
Step 310: the heavy machinery is at a second risk level, and the first risk level is greater than the second risk level. After step 310 is performed, step 313 is performed.
Step 311: and judging whether the first danger degree is smaller than the second threshold value or not, and whether the first volume is smaller than the third threshold value or not.
If yes, go to step 312;
if not, no operation is executed.
Step 312: the heavy machinery is at a third risk level, the second risk level being greater than the third risk level. After step 312, step 313 is performed.
Step 313: and controlling audio output equipment to output warning information corresponding to the danger level, wherein the warning information is used for prompting to be far away from the monitoring area. After the step 313 is executed, the steps 314-.
Step 314: and controlling the camera to track and shoot the heavy machinery to obtain second video information.
Step 315: and sampling the second video information to obtain S second images.
Step 316: and carrying out face recognition or portrait recognition on the S second images to obtain a second recognition result.
Step 317: and when the second identification result is that people exist in the monitored area, turning up the volume of the audio output equipment, and controlling the audio output equipment to output the warning information.
Step 318: and controlling the camera to track and shoot the heavy machinery to obtain third video information.
Step 319: and sending the third video information to the user equipment associated with the electronic equipment.
In accordance with the embodiments shown in fig. 2 and fig. 3, please refer to fig. 4, and fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in the figure, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for performing the following steps:
acquiring first video information of a first area, wherein the first area is a monitoring area of a camera;
sampling the first video information to obtain N first images, wherein N is a positive integer;
performing image recognition on the N first images to obtain a first recognition result;
determining a danger level of the heavy machinery when the first recognition result indicates that heavy machinery exists in the monitored area;
and controlling an audio output device to output warning information corresponding to the danger level, wherein the warning information prompts people to stay away from the monitored area.
In an implementation of the present application, in sampling the first video information to obtain N first images, the program includes instructions specifically configured to:
determining a sampling interval, and sampling the first video information based on the sampling interval to obtain the N first images.
In an implementation manner of the present application, in terms of performing image recognition on the N first images to obtain a first recognition result, the program includes instructions specifically configured to:
respectively carrying out mechanical tool identification on the N first images to obtain M mechanical tool images, wherein M is less than or equal to N;
matching the M mechanical tool images with K mechanical tool templates respectively to obtain a first identification result, wherein different mechanical tool templates correspond to different mechanical tool types, and K is a positive integer;
if the M mechanical tool images are not matched with the K mechanical tool templates, the first identification result indicates that no heavy machinery exists in the monitored area; and if at least one of the M mechanical tool images is matched with at least one of the K mechanical tool templates, the first identification result indicates that the heavy machinery exists in the monitored area.
In an implementation of the application, in terms of determining the danger level of the heavy machinery, the program includes instructions specifically configured to perform the following steps:
determining a first risk degree of the heavy machinery and a first volume of the heavy machinery;
determining the danger level based on the first risk degree and the first volume.
In an implementation of the application, in terms of determining the danger level based on the first risk degree and/or the first volume, the program includes instructions specifically configured to perform the following steps:
if the first risk degree is greater than or equal to a first threshold, the heavy machinery is at a first danger level;
if the first risk degree is greater than or equal to a second threshold and the first volume is greater than or equal to a third threshold, the heavy machinery is at a second danger level, where the second threshold is smaller than the first threshold and the first danger level is higher than the second danger level;
if the first risk degree is smaller than the second threshold and the first volume is smaller than the third threshold, the heavy machinery is at a third danger level, where the second danger level is higher than the third danger level.
In an implementation manner of the present application, after controlling the audio output device to output the warning information corresponding to the danger level, the program includes instructions specifically configured to:
controlling the camera to perform tracking shooting on the heavy machinery to obtain second video information;
sampling the second video information to obtain S second images;
carrying out face recognition or portrait recognition on the S second images to obtain a second recognition result;
and when the second identification result is that people exist in the monitored area, turning up the volume of the audio output equipment, and controlling the audio output equipment to output the warning information.
In an implementation manner of the present application, after controlling the audio output device to output the warning information corresponding to the danger level, the program includes instructions specifically configured to:
controlling the camera to perform tracking shooting on the heavy machinery to obtain third video information;
and sending the third video information to the user equipment associated with the electronic equipment.
Referring to fig. 5, fig. 5 is a schematic view of a danger warning device applied to an electronic device according to an embodiment of the present disclosure, the device including:
an obtaining unit 501, configured to obtain first video information of a first area, where the first area is a monitoring area of a camera;
a sampling unit 502, configured to sample the first video information to obtain N first images, where N is a positive integer;
a recognition unit 503, configured to perform image recognition on the N first images to obtain a first recognition result;
a determining unit 504, configured to determine a danger level of the heavy machinery when the first recognition result indicates that heavy machinery exists in the monitored area;
and an output unit 505, configured to control an audio output device to output warning information corresponding to the danger level, where the warning information prompts people to stay away from the monitored area.
In an implementation manner of the present application, in sampling the first video information to obtain N first images, the sampling unit 502 is further specifically configured to execute the following instructions:
determining a sampling interval, and sampling the first video information based on the sampling interval to obtain the N first images.
In an implementation manner of the present application, in terms of performing image recognition on the N first images to obtain a first recognition result, the recognition unit 503 is further specifically configured to execute the following steps:
respectively carrying out mechanical tool identification on the N first images to obtain M mechanical tool images, wherein M is less than or equal to N;
matching the M mechanical tool images with K mechanical tool templates respectively to obtain a first identification result, wherein different mechanical tool templates correspond to different mechanical tool types, and K is a positive integer;
if the M mechanical tool images are not matched with the K mechanical tool templates, the first identification result indicates that no heavy machinery exists in the monitored area; and if at least one of the M mechanical tool images is matched with at least one of the K mechanical tool templates, the first identification result indicates that the heavy machinery exists in the monitored area.
In an implementation of the application, in determining the danger level of the heavy machinery, the determining unit 504 is further specifically configured to perform the following steps:
determining a first risk degree of the heavy machinery and a first volume of the heavy machinery;
determining the danger level based on the first risk degree and the first volume.
In an implementation of the application, in determining the danger level based on the first risk degree and/or the first volume, the determining unit 504 is further specifically configured to perform the following steps:
if the first risk degree is greater than or equal to a first threshold, the heavy machinery is at a first danger level;
if the first risk degree is greater than or equal to a second threshold and the first volume is greater than or equal to a third threshold, the heavy machinery is at a second danger level, where the second threshold is smaller than the first threshold and the first danger level is higher than the second danger level;
if the first risk degree is smaller than the second threshold and the first volume is smaller than the third threshold, the heavy machinery is at a third danger level, where the second danger level is higher than the third danger level.
In an implementation manner of the present application, after controlling the audio output device to output the warning information corresponding to the danger level, the obtaining unit 501 is further specifically configured to execute the following steps:
and controlling the camera to track and shoot the heavy machinery to obtain second video information.
In an implementation manner of the present application, after controlling the audio output device to output the warning information corresponding to the danger level, the sampling unit 502 is further specifically configured to execute the following steps:
and sampling the second video information to obtain S second images.
In an implementation manner of the present application, after controlling the audio output device to output the warning information corresponding to the danger level, the identifying unit 503 is further specifically configured to execute the following steps:
and carrying out face recognition or portrait recognition on the S second images to obtain a second recognition result.
In an implementation manner of the present application, after controlling the audio output device to output the warning information corresponding to the danger level, the output unit 505 is further specifically configured to execute the following steps:
and when the second identification result is that people exist in the monitored area, turning up the volume of the audio output equipment, and controlling the audio output equipment to output the warning information.
In an implementation manner of the present application, after controlling the audio output device to output the warning information corresponding to the danger level, the apparatus further includes a tracking unit 506, specifically configured to execute the following steps:
and controlling the camera to track and shoot the heavy machinery to obtain third video information.
In an implementation manner of the present application, after controlling the audio output device to output the warning information corresponding to the danger level, the apparatus further includes a sending unit 507, specifically configured to execute instructions for:
and sending the third video information to the user equipment associated with the electronic equipment.
The acquiring unit 501, the sampling unit 502, the identifying unit 503, the determining unit 504, the outputting unit 505, the tracking unit 506, and the sending unit 507 of the electronic device may be implemented by a processor.
The present application also provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program makes a computer perform some or all of the steps described in the electronic device in the above method embodiments.
Embodiments of the present application also provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps described in the electronic device in the above method. The computer program product may be a software installation package.
The steps of a method or algorithm described in the embodiments of the present application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules that may be stored in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), Erasable Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, a hard disk, a removable disk, a Compact Disc Read-Only Memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in an access network device, a target network device, or a core network device. Of course, the processor and the storage medium may also reside as discrete components in an access network device, a target network device, or a core network device.
Those skilled in the art will appreciate that in one or more of the examples described above, the functionality described in the embodiments of the present application may be implemented, in whole or in part, by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the application to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored on a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wire (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., a floppy Disk, a hard Disk, a magnetic tape), an optical medium (e.g., a Digital Video Disk (DVD)), or a semiconductor medium (e.g., a Solid State Disk (SSD)), among others.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the embodiments of the present application in further detail, and it should be understood that the above-mentioned embodiments are only specific embodiments of the present application, and are not intended to limit the scope of the embodiments of the present application, and any modifications, equivalent substitutions, improvements and the like made on the basis of the technical solutions of the embodiments of the present application should be included in the scope of the embodiments of the present application.

Claims (10)

1. A danger warning method is applied to electronic equipment, and the method comprises the following steps:
acquiring first video information of a first area, wherein the first area is a monitoring area of a camera;
sampling the first video information to obtain N first images, wherein N is a positive integer;
performing image recognition on the N first images to obtain a first recognition result;
determining a danger level of the heavy machinery when the first recognition result indicates that heavy machinery exists in the monitored area;
and controlling an audio output device to output warning information corresponding to the danger level, wherein the warning information prompts people to stay away from the monitored area.
2. The method of claim 1, wherein said sampling said first video information to obtain N first images comprises:
determining a sampling interval, and sampling the first video information based on the sampling interval to obtain the N first images.
3. The method according to claim 1 or 2, wherein the performing image recognition on the N first images to obtain a first recognition result comprises:
respectively carrying out mechanical tool identification on the N first images to obtain M mechanical tool images, wherein M is less than or equal to N;
matching the M mechanical tool images with K mechanical tool templates respectively to obtain a first identification result, wherein different mechanical tool templates correspond to different mechanical tool types, and K is a positive integer;
if the M mechanical tool images are not matched with the K mechanical tool templates, the first identification result indicates that no heavy machinery exists in the monitored area; and if at least one of the M mechanical tool images is matched with at least one of the K mechanical tool templates, the first identification result indicates that the heavy machinery exists in the monitored area.
4. The method of any of claims 1-3, wherein the determining the danger level of the heavy machinery comprises:
determining a first risk degree of the heavy machinery and a first volume of the heavy machinery;
determining the danger level based on the first risk degree and the first volume.
5. The method of claim 4, wherein the determining the danger level based on the first risk degree and/or the first volume comprises:
if the first risk degree is greater than or equal to a first threshold, the heavy machinery is at a first danger level;
if the first risk degree is greater than or equal to a second threshold and the first volume is greater than or equal to a third threshold, the heavy machinery is at a second danger level, the second threshold is smaller than the first threshold, and the first danger level is higher than the second danger level;
if the first risk degree is smaller than the second threshold and the first volume is smaller than the third threshold, the heavy machinery is at a third danger level, and the second danger level is higher than the third danger level.
6. The method according to any one of claims 1-5, wherein after the controlling an audio output device to output the warning information corresponding to the danger level, the method further comprises:
controlling the camera to perform tracking shooting on the heavy machinery to obtain second video information;
sampling the second video information to obtain S second images;
carrying out face recognition or portrait recognition on the S second images to obtain a second recognition result;
and when the second identification result is that people exist in the monitored area, turning up the volume of the audio output equipment, and controlling the audio output equipment to output the warning information.
7. The method according to any one of claims 1-5, wherein after the controlling an audio output device to output the warning information corresponding to the danger level, the method further comprises:
controlling the camera to perform tracking shooting on the heavy machinery to obtain third video information;
and sending the third video information to the user equipment associated with the electronic equipment.
8. A hazard warning device for use with electronic equipment, the device comprising:
an acquiring unit, configured to acquire first video information of a first area, wherein the first area is a monitoring area of a camera;
a sampling unit, configured to sample the first video information to obtain N first images, wherein N is a positive integer;
a recognition unit, configured to perform image recognition on the N first images to obtain a first recognition result;
a determining unit, configured to determine a danger level of the heavy machinery when the first recognition result indicates that heavy machinery exists in the monitored area;
and an output unit, configured to control an audio output device to output warning information corresponding to the danger level, wherein the warning information prompts people to stay away from the monitored area.
9. An electronic device, comprising a processor, memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program, wherein the computer program is executed by a processor to perform the method according to any one of claims 1-7.
CN201911217042.XA 2019-11-29 2019-11-29 Danger warning method and related equipment Pending CN110930657A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911217042.XA CN110930657A (en) 2019-11-29 2019-11-29 Danger warning method and related equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911217042.XA CN110930657A (en) 2019-11-29 2019-11-29 Danger warning method and related equipment

Publications (1)

Publication Number Publication Date
CN110930657A true CN110930657A (en) 2020-03-27

Family

ID=69848322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911217042.XA Pending CN110930657A (en) 2019-11-29 2019-11-29 Danger warning method and related equipment

Country Status (1)

Country Link
CN (1) CN110930657A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090009599A1 (en) * 2007-07-03 2009-01-08 Samsung Techwin Co., Ltd. Intelligent surveillance system and method of controlling the same
CN103544498A (en) * 2013-09-25 2014-01-29 华中科技大学 Video content detection method and video content detection system based on self-adaption sampling
US20180100925A1 (en) * 2011-04-15 2018-04-12 Continental Advanced Lidar Solutions Us, Llc Ladar sensor for landing, docking and approach
CN110111016A (en) * 2019-05-14 2019-08-09 深圳供电局有限公司 Method and device for monitoring dangerous state of operating personnel and computer equipment
CN110121053A (en) * 2018-02-07 2019-08-13 中国石油化工股份有限公司 A kind of video monitoring method of situ of drilling well risk stratification early warning
CN110379125A (en) * 2019-07-24 2019-10-25 广东电网有限责任公司 Cross the border recognition methods, system and relevant apparatus for a kind of danger zone



Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200327

RJ01 Rejection of invention patent application after publication