CN116012939A - Method and device for determining illegal behaviors, storage medium and electronic device - Google Patents


Publication number
CN116012939A
Authority
CN
China
Prior art keywords
track
determining
image
area
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211656957.2A
Other languages
Chinese (zh)
Inventor
吴思铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211656957.2A priority Critical patent/CN116012939A/en
Publication of CN116012939A publication Critical patent/CN116012939A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The application discloses a method for determining illegal behaviors. The method includes: acquiring image data collected by an image acquisition device for a target area within a preset time period, wherein the target area includes an area below a target device, and the image acquisition device is mounted at the bottom of the target device so as to capture images of the target area; processing multiple frames of images included in the image data to obtain an alarm area corresponding to the multiple frames of images and a movement track of a first object in the target area; and determining whether the first object has an illegal behavior according to the positional relationship between the final track position of the movement track and the alarm area, wherein the illegal behavior represents an abnormal behavior associated with the target area.

Description

Method and device for determining illegal behaviors, storage medium and electronic device
Technical Field
The application relates to the technical field of security monitoring, and in particular to a method and a device for determining illegal behaviors, a storage medium, and an electronic device.
Background
At present, in common scenarios such as manufacturing and transportation, hoisting machinery such as bridge cranes and mobile cranes is widely used, which effectively reduces heavy manual labor and improves working efficiency. However, while bringing these benefits, hoisting machinery also introduces certain safety hazards: for example, when an accident such as unhooking or derailment occurs, an operator who is standing below the boom of the equipment in violation of safety rules may face a life-threatening danger.
To address this safety hazard, the traditional supervision approach mainly relies on assigning a supervisor to monitor the site and to correct violations when they are found. However, because the factory environment is complex, the supervisor's field of view inevitably contains blind areas, so violations may not be discovered or corrected in time, resulting in low supervision safety.
In the prior art, there are also monitoring and alarm schemes in which a camera is mounted at the side of the hoisting machinery, and whether an operator has committed a violation is judged from the collected video data. However, because the viewing angle of such a camera is fixed, only video of a fixed position can be acquired. For scenarios in which hoisting machinery such as a bridge crane moves along a track, operators cannot be monitored as the machinery moves, so their violations cannot be accurately identified and alarm information cannot be sent in time.
Therefore, in the related art, when hoisting machinery moves along a track, the fixed shooting area of the camera makes it impossible to monitor operators as the machinery moves, so the illegal behaviors of operators cannot be accurately identified.
For this technical problem, no effective solution has yet been proposed.
Disclosure of Invention
The embodiments of the present application provide a method and a device for determining illegal behaviors, a storage medium, and an electronic device, so as to at least solve the technical problem in the related art that, when hoisting machinery moves along a track, the fixed shooting area prevents mobile monitoring of operators and their illegal behaviors cannot be accurately identified.
According to one aspect of the embodiments of the present application, a method for determining illegal behaviors is provided, including: acquiring image data collected by an image acquisition device for a target area within a preset time period, wherein the target area includes an area below a target device, and the image acquisition device is mounted at the bottom of the target device so as to capture images of the target area; processing multiple frames of images included in the image data to obtain an alarm area corresponding to the multiple frames of images and a movement track of a first object in the target area; and determining whether the first object has an illegal behavior according to the positional relationship between the final track position of the movement track and the alarm area, wherein the illegal behavior represents an abnormal behavior associated with the target area.
In an exemplary embodiment, processing the multiple frames of images included in the image data to obtain the alarm area corresponding to the multiple frames of images includes: for each frame of the multiple frames of images, dividing the frame based on its geometric center to obtain divided images, each of which has a plurality of division regions; sending a calibration prompt message containing the divided images to a second object, and receiving feedback information sent by the second object in response to the calibration prompt message, wherein the feedback information includes the result of the second object calibrating the region type of each division region of each divided frame, and the region types include an alert region and a non-alert region; and determining the division regions whose region type is indicated as an alert region in the feedback information as the alarm area corresponding to the multiple frames of images.
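The division and calibration steps above can be sketched as follows. The patent does not fix a particular division scheme, so this sketch assumes a simple four-quadrant split about the geometric center; the region names and the `'alert'`/`'non-alert'` labels are likewise illustrative assumptions standing in for the second object's feedback.

```python
# Hypothetical sketch: divide a frame into four quadrants about its
# geometric center, then keep only the regions that the calibration
# feedback marked as alert regions.

def divide_frame(width, height):
    """Return four division regions of a width x height frame as
    (x0, y0, x1, y1) boxes split at the geometric center."""
    cx, cy = width // 2, height // 2
    return {
        "top_left": (0, 0, cx, cy),
        "top_right": (cx, 0, width, cy),
        "bottom_left": (0, cy, cx, height),
        "bottom_right": (cx, cy, width, height),
    }

def alarm_regions(regions, feedback):
    """feedback maps region name -> 'alert' or 'non-alert'
    (the calibration result returned by the second object)."""
    return {name: box for name, box in regions.items()
            if feedback.get(name) == "alert"}
```

For a 1920x1080 frame, `divide_frame` splits at (960, 540), and `alarm_regions` then filters the quadrants down to those calibrated as alert regions.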
In an exemplary embodiment, processing the multiple frames of images included in the image data to obtain the movement track of the first object in the target area includes: acquiring, from the multiple frames of images, a current frame image and other frame images preceding the current frame image, and determining a mapping relationship between the current frame image and the other frame images, wherein the mapping relationship represents the mathematical relationship required to convert the coordinate position of the first object in the other frame images into a coordinate position in the coordinate system of the current frame image; acquiring a first coordinate position of the first object in the other frame images, and determining, according to the mapping relationship, a second coordinate position corresponding to the first coordinate position in the coordinate system of the current frame image; acquiring a third coordinate position of the first object in the coordinate system of the current frame image; and determining the movement track according to the plurality of second coordinate positions and the third coordinate position.
In an exemplary embodiment, determining the action trajectory from the plurality of second coordinate positions and the third coordinate position includes: determining the sum of the following parameters: the third coordinate position, the second coordinate position and the position difference of the third coordinate position; determining a fourth coordinate position on the coordinate system of the current frame image according to the sum value, wherein the fourth coordinate position is the position of a fifth object on the coordinate system of the current frame image; determining a first object feature when the fifth object is positioned at the fourth coordinate position, a second object feature corresponding to the first object when the first object is positioned at the second coordinate position, and a third object feature corresponding to the first object when the first object is positioned at the third coordinate position; and determining that the fifth object is consistent with the first object under the condition that the feature overlap ratio between the first object feature and the second object feature is larger than a preset value and the feature overlap ratio between the first object feature and the third object feature is larger than the preset value, and determining the action track of the first object in the current frame image according to the second coordinate position, the third coordinate position and the fourth coordinate position.
In an exemplary embodiment, the position difference between the plurality of second coordinate positions and the third coordinate position is determined as follows: determining, from the plurality of second coordinate positions, the second coordinate position corresponding to the first frame of the other frame images and the second coordinate position corresponding to the last frame of the other frame images, and determining the position offset between these two second coordinate positions; and acquiring the number of the other frame images, determining the ratio of the position offset to the number of frames as a target second coordinate position, and determining the position difference between the target second coordinate position and the third coordinate position as the position difference between the plurality of second coordinate positions and the third coordinate position.
In an exemplary embodiment, determining the mapping relationship between the current frame image and the other frame images includes: in the case that a third object exists in both the current frame image and the other frame images, acquiring a first image feature of the third object in the current frame image and a second image feature of the third object in the other frame images; and calculating a transformation matrix between the first image feature and the second image feature according to a preset algorithm, so as to obtain the mapping relationship between them.
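The patent leaves the "preset algorithm" for the transformation matrix unspecified. A minimal stand-in, shown below, estimates a 2D affine transform from matched feature-point coordinates by least squares; production systems more commonly fit a RANSAC homography (e.g. OpenCV's `findHomography`). The affine model and the least-squares fit are therefore assumptions, not the patent's method.

```python
# Sketch: estimate a 2x3 affine matrix M mapping feature points seen
# in an earlier frame onto the current frame, so that dst ~= M @ [x, y, 1].
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """src_pts, dst_pts: (N, 2) arrays of matched feature coordinates
    (N >= 3). Returns the 2x3 transformation matrix."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    design = np.hstack([src, np.ones((src.shape[0], 1))])  # (N, 3)
    # Least-squares solve design @ M_T = dst for both output coordinates.
    M_T, *_ = np.linalg.lstsq(design, dst, rcond=None)
    return M_T.T                                           # (2, 3)

def map_point(M, pt):
    """Convert an (x, y) position from the earlier frame into the
    current frame's coordinate system (the 'second coordinate position')."""
    x, y = pt
    return tuple(M @ np.array([x, y, 1.0]))
```

With matched points related by a pure translation of (5, -3), the recovered matrix maps any earlier-frame point into the current frame's coordinates with that same shift.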
In an exemplary embodiment, before processing the multi-frame image included in the image data to obtain the action track of the first object in the target area, the method further includes: determining the first object from the multi-frame image, wherein the determining the first object from the multi-frame image includes: acquiring different objects identified in the multi-frame image; identifying the different objects according to the object characteristics of the different objects to obtain object types corresponding to the different objects; and acquiring a fourth object with the same object type in the different objects, and determining the fourth object as the first object under the condition that the identity of the fourth object is consistent with the identity type of the first object.
In one exemplary embodiment, in acquiring the different objects identified in the multi-frame image, the method further comprises: for each object of the different objects, determining a first object area preset for each object, and acquiring a second object area of each object in the multi-frame image; and determining the object feature corresponding to the first object area as the object feature of each object under the condition that the difference value between the first object area and the second object area is smaller than or equal to a preset value.
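The area check above reduces to a single comparison; the sketch below makes it concrete. The threshold value and area units are assumptions, since the patent only says the difference must be "smaller than or equal to a preset value".

```python
# Sketch of the area-consistency check: the preset object features are
# used for an object only when its detected area in the frame stays
# close to the preset reference area.

def area_feature_usable(preset_area, detected_area, max_diff):
    """Return True when |preset_area - detected_area| <= max_diff,
    i.e. the detection matches the expected object size."""
    return abs(preset_area - detected_area) <= max_diff
```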
In an exemplary embodiment, determining whether the first object has an illegal behavior according to the positional relationship between the final track position of the movement track and the alarm area includes: acquiring a preset number of consecutive track points from all track points included in the final track position, and casting, from each of the consecutive track points, a track ray at a preset angle; in the case that each track ray intersects the boundary line of the alarm area, acquiring the number of intersection points between each track ray and the boundary line; and in the case that the number of intersection points of each track ray with the boundary line satisfies a preset condition, determining that the final track position of the movement track is located in the alarm area, and therefore that the first object has an illegal behavior.
In an exemplary embodiment, determining whether the first object has an illegal behavior according to the positional relationship between the final track position of the movement track and the alarm area includes: acquiring a preset number of consecutive track points from all track points included in the final track position, and casting, from any one of the consecutive track points, a track ray at the preset angle; and in the case that the number of intersection points of that track ray with the boundary line of the alarm area does not satisfy the preset condition, determining that the final track position of the movement track is not located in the alarm area, and therefore that the first object has no illegal behavior.
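The ray-and-intersection-count test in the two embodiments above is the classic ray-casting (point-in-polygon) algorithm. The sketch below assumes the unstated details: the "preset condition" is taken to be an odd crossing count, and the "preset angle" is taken to be a horizontal ray cast to the right.

```python
# Ray-casting sketch of the inside/outside test: an odd number of
# crossings between a ray from a track point and the alarm-area
# boundary means the point lies inside the area.

def point_in_region(point, polygon):
    """polygon: list of (x, y) vertices of the alarm-area boundary.
    Casts a horizontal ray to the right (assumed preset angle)."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                 # edge spans the ray's y level
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                      # crossing lies on the ray
                crossings += 1
    return crossings % 2 == 1                    # odd => inside (preset condition)

def has_violation(track_tail, polygon):
    """track_tail: the preset number of consecutive track points taken
    from the final track position. A violation is flagged only when
    every one of them lies inside the alarm area."""
    return all(point_in_region(p, polygon) for p in track_tail)
```

Requiring several consecutive points rather than a single one makes the decision robust against one-frame detection jitter, which matches the patent's use of "a preset number of consecutive track points".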
In one exemplary embodiment, after determining that the first object has a violation, the method further comprises: and sending alarm information to the first object under the condition that the final track position is positioned in the alarm area.
According to another aspect of the embodiments of the present application, a device for determining illegal behaviors is further provided, including: an acquisition module, configured to acquire image data collected by an image acquisition device for a target area within a preset time period, wherein the target area includes an area below a target device, and the image acquisition device is mounted at the bottom of the target device so as to capture images of the target area; a processing module, configured to process multiple frames of images included in the image data to obtain an alarm area corresponding to the multiple frames of images and a movement track of a first object in the target area; and a determination module, configured to determine whether the first object has an illegal behavior according to the positional relationship between the final track position of the movement track and the alarm area, wherein the illegal behavior represents an abnormal behavior associated with the target area.
According to a further aspect of embodiments of the present application, there is also provided a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the above method when run.
According to still another aspect of the embodiments of the present application, there is further provided an electronic device including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor executes the above method by the computer program.
In the embodiments of the present application, an image acquisition device mounted at the bottom of a target device captures images of the area below the target device within a preset time period; the collected image data is obtained, and the multiple frames of images included in the image data are processed to obtain an alarm area corresponding to the multiple frames of images and a movement track of a first object in the target area; whether the first object has an illegal behavior is then determined according to the positional relationship between the final track position of the movement track and the alarm area. This technical solution solves the technical problem that the illegal behaviors of operators cannot be accurately identified because the shooting area is fixed when hoisting machinery moves along a track, realizes mobile monitoring of operators, and achieves the technical effect of improving the accuracy of identifying operators' illegal behaviors.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute an undue limitation to the application. In the drawings:
fig. 1 is a hardware block diagram of a computer terminal for a method of determining illegal behaviors according to an embodiment of the present application;
FIG. 2 is a flow chart of a method of determining illegal behaviors according to an embodiment of the present application;
FIG. 3 is a flow chart of a method of determining illegal behaviors according to another embodiment of the present application;
FIG. 4 is a schematic illustration of a target area according to an alternative embodiment of the present application;
FIG. 5 (a) is a schematic illustration (one) of an alert zone according to an alternative embodiment of the present application;
FIG. 5 (b) is a schematic illustration (two) of an alert zone according to an alternative embodiment of the present application;
FIG. 6 is a flow chart of a method of determining illegal behaviors according to one embodiment of the present application;
FIG. 7 is a flow chart of a method of determining illegal behaviors according to yet another embodiment of the present application;
fig. 8 is a structural block diagram of a device for determining illegal behaviors according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the solution of the present application, the technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the protection scope of the present application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The method embodiments provided in the embodiments of the present application may be executed on a computer terminal or a similar computing device. Taking execution on a computer terminal as an example, fig. 1 is a hardware block diagram of a computer terminal for a method of determining illegal behaviors according to an embodiment of the present application. As shown in fig. 1, the computer terminal may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a microprocessor such as an MCU or a programmable logic device such as an FPGA) and a memory 104 for storing data, and in one exemplary embodiment may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the configuration shown in fig. 1 is merely illustrative and does not limit the configuration of the computer terminal. For example, the computer terminal may also include more or fewer components than shown in fig. 1, or have a different configuration with equivalent or richer functions than shown in fig. 1.
The memory 104 may be used to store computer programs, such as software programs and modules of application software, such as computer programs corresponding to the methods in the embodiments of the present application, and the processor 102 executes the computer programs stored in the memory 104 to perform various functional applications and data processing, i.e., implement the methods described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the computer terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of a computer terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for determining illegal behaviors applied to a terminal device is provided. Fig. 2 is a flowchart of the method for determining illegal behaviors according to an embodiment of the present application, and the flowchart includes the following steps:
step S202, acquiring image data acquired by an image acquisition device for a target area in a preset time period, wherein the target area comprises an area below the target device, and the image acquisition device is arranged at the bottom of the target device so as to acquire images of the target area;
it should be noted that, the target device may be mounted on a guide rail, and the target device may be horizontally moved on the guide rail. During the horizontal movement of the target device, the image capturing device mounted at the bottom of the target device also moves horizontally therewith.
It should be noted that the above-mentioned preset time period may be preset manually, which is not limited in this application. Further, the image data collected in the preset time period may include multiple frames of images at successive moments.
Step S204, processing a plurality of frames of images included in the image data to obtain an alarm area corresponding to the plurality of frames of images and a movement track of a first object in the target area;
Step S206, determining whether the first object has an illegal behavior according to the positional relationship between the final track position of the movement track and the alarm area, wherein the illegal behavior represents an abnormal behavior associated with the target area.
Through the above steps, image data collected by the image acquisition device for the target area within the preset time period is acquired, wherein the target area includes the area below the target device, and the image acquisition device is mounted at the bottom of the target device so as to capture images of the target area; the multiple frames of images included in the image data are processed to obtain the alarm area corresponding to the multiple frames of images and the movement track of the first object in the target area; and whether the first object has an illegal behavior is determined according to the positional relationship between the final track position of the movement track and the alarm area, wherein the illegal behavior represents an abnormal behavior associated with the target area. This technical solution solves the technical problem that the illegal behaviors of operators cannot be accurately identified because the shooting area is fixed when hoisting machinery moves along a track, realizes mobile monitoring of operators, and achieves the technical effect of improving the accuracy of identifying operators' illegal behaviors.
It should be noted that, the last track position of the action track may include, but is not limited to, a track point corresponding to one or more frames. For example, when the last track position of the action track includes a track point corresponding to one frame, the track point corresponding to the current frame image may be determined as the last track position of the action track, or the track point corresponding to the last frame image in the multi-frame image may be determined as the last track position of the action track. And in the case that the last track position of the action track includes track points corresponding to multiple frames, the last track position of the action track can be determined according to track points corresponding to the current frame image and other frame images before the current frame image, or the last track position of the action track can be determined according to track points corresponding to the last frame image in multiple frame images and images before the last frame image.
The present application is not limited to this, and it is possible to define that the current frame image and other frame images preceding the current frame image are consecutive frames, or define that the last frame image and images preceding the last frame image are consecutive frames.
There are various implementations of the step S204, in an exemplary embodiment, processing a plurality of frames of images included in the image data to obtain an alarm area corresponding to the plurality of frames of images includes: dividing each frame of image based on the geometric center of each frame of image aiming at each frame of image of the multi-frame image to obtain divided multi-frame images, wherein each frame of image of the divided multi-frame images is provided with a plurality of dividing areas; transmitting a calibration prompt message containing the divided multi-frame images to a second object, and receiving feedback information transmitted by the second object in response to the calibration prompt message, wherein the feedback information comprises a result of calibrating region types of a plurality of division regions of each frame of image of the divided multi-frame images by the second object, and the region types comprise: an alert region and a non-alert region; and determining the dividing region with the region type indicated in the feedback information as an alarm region corresponding to the multi-frame image.
That is, through the above embodiment, the multiple frames of images can be divided to obtain the region types of the plurality of division regions of each frame, and the alarm area corresponding to the multiple frames of images can be determined from the division regions whose region type is the alert region. In this way, the alarm area is determined dynamically for the multiple frames of images, which lays a foundation for the subsequent judgment of whether the movement track of the first object in the target area is located in the alarm area, thereby providing the precondition for sending alarm information and improving the practicability of the alarm mechanism.
In an exemplary embodiment, the step S204 may be further implemented by: acquiring a current frame image and other frame images before the current frame image in the multi-frame images, and determining a mapping relation between the current frame image and the other frame images; wherein the mapping relationship represents a mathematical relationship required when converting the coordinate position of the first object in the other frame image into the coordinate position on the coordinate system of the current frame image; acquiring a first coordinate position of the first object in the other frame images, and determining a second coordinate position corresponding to the first coordinate position on a coordinate system of the current frame image according to the mapping relation; acquiring a third coordinate position of the first object on a coordinate system of the current frame image; and determining the action track according to the second coordinate positions and the third coordinate positions.
Further, in one embodiment, a process of determining the action track according to a plurality of the second coordinate positions and the third coordinate positions is described: determining the sum of the following parameters: the third coordinate position, the second coordinate position and the position difference of the third coordinate position; determining a fourth coordinate position on the coordinate system of the current frame image according to the sum value, wherein the fourth coordinate position is the position of a fifth object on the coordinate system of the current frame image; determining a first object feature when the fifth object is positioned at the fourth coordinate position, a second object feature corresponding to the first object when the first object is positioned at the second coordinate position, and a third object feature corresponding to the first object when the first object is positioned at the third coordinate position; and determining that the fifth object is consistent with the first object under the condition that the feature overlap ratio between the first object feature and the second object feature is larger than a preset value and the feature overlap ratio between the first object feature and the third object feature is larger than the preset value, and determining the action track of the first object in the current frame image according to the second coordinate position, the third coordinate position and the fourth coordinate position.
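The prediction-and-verification step above can be sketched as follows. The fourth coordinate position is the third position plus the position difference, and the prediction is accepted only when the feature coincidence exceeds a preset value. The patent does not define the feature representation or the coincidence measure, so the cosine-similarity measure and the 0.9 threshold below are assumptions.

```python
# Sketch: predict the fourth (next) position of the tracked object and
# confirm, by feature coincidence, that the object found there (the
# "fifth object") is the same as the first object.
import math

def predict_fourth(third_pos, position_diff):
    """fourth = third coordinate position + position difference
    (the sum described in the embodiment above)."""
    return (third_pos[0] + position_diff[0],
            third_pos[1] + position_diff[1])

def coincidence(f1, f2):
    """Cosine similarity between two feature vectors (assumed measure)."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2)

def same_object(first_feat, second_feat, third_feat, preset=0.9):
    """The fifth object is taken to be the first object only when its
    features coincide with the features at BOTH earlier positions."""
    return (coincidence(first_feat, second_feat) > preset and
            coincidence(first_feat, third_feat) > preset)
```

Checking the predicted detection against the features from both earlier positions guards against the track jumping onto a different, nearby object between frames.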
In one embodiment, the position difference between the plurality of second coordinate positions and the third coordinate position may be determined as follows: from the plurality of second coordinate positions, determining the second coordinate position corresponding to the first frame of the plurality of other frame images and the second coordinate position corresponding to the last frame of the other frame images, and determining the position offset between the two; and acquiring the number of the other frame images, determining the ratio of the position offset to the number of frames as a target second coordinate position, and determining the position difference between the target second coordinate position and the third coordinate position as the position difference between the plurality of second coordinate positions and the third coordinate position.
In this embodiment, for example, the first frame of the plurality of other frame images may be the 1st frame and the last frame may be the 10th frame, so the number of other frame images is 10, and the ratio of the position offset to 10 may be determined as the position difference between the plurality of second coordinate positions and the third coordinate position.
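The frame-averaged offset described above can be sketched as follows; the function and position values are illustrative, not part of the original disclosure:

```python
# Sketch of the per-frame offset averaging described above (hypothetical names):
# the offset between the second coordinate positions of the first and the last
# of the other frame images is divided by the frame count.

def average_offset(first_pos, last_pos, num_frames):
    """Average per-frame position offset between the first and last of the other frames."""
    dx = (last_pos[0] - first_pos[0]) / num_frames
    dy = (last_pos[1] - first_pos[1]) / num_frames
    return (dx, dy)

# Example consistent with the text: 10 other frames
step = average_offset((100.0, 50.0), (140.0, 90.0), 10)
# step == (4.0, 4.0)
```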
Optionally, in other embodiments, the action track may be determined according to a single second coordinate position and the third coordinate position, specifically: determining the sum of the third coordinate position and the position difference between the second coordinate position and the third coordinate position; determining a fourth coordinate position on the coordinate system of the current frame image according to the sum; and determining the action track of the first object in the current frame image according to the second coordinate position, the third coordinate position and the fourth coordinate position.
Wherein, further, determining the action track of the first object in the current frame image according to the second coordinate position, the third coordinate position and the fourth coordinate position may further be implemented by the following steps: determining that the fifth object is consistent with the first object under the condition that the feature overlap ratio between the first object feature and the second object feature is larger than a preset value and the feature overlap ratio between the first object feature and the third object feature is larger than the preset value, and determining the action track of the first object in the current frame image according to the second coordinate position, the third coordinate position and the fourth coordinate position; the first object feature is an object feature when the fifth object is located at the fourth coordinate position, the second object feature is an object feature corresponding to the first object when the first object is located at the second coordinate position, and the third object feature is an object feature corresponding to the first object when the first object is located at the third coordinate position.
Compared with determining the action track from a single second coordinate position and the third coordinate position, determining the action track from a plurality of second coordinate positions reduces the position error introduced when converting coordinate positions, yields more accurate coordinate positions, and thereby improves the accuracy of the determined action track.
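The prediction of the fourth coordinate position can be read as a simple linear extrapolation; this is an assumption about the sign of the position difference, not stated explicitly in the text:

```python
# A minimal constant-velocity reading of the prediction described above
# (assumption: the position difference is taken as third minus second).

def predict_fourth(second, third):
    """Extrapolate the next (fourth) coordinate from the second and third positions."""
    diff = (third[0] - second[0], third[1] - second[1])
    return (third[0] + diff[0], third[1] + diff[1])

p4 = predict_fourth((10.0, 10.0), (12.0, 13.0))
# p4 == (14.0, 16.0)
```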
Through the above embodiment, the action track of the first object in the target area is determined from the second coordinate position (obtained via the mapping relationship between the current frame image and the other frame images) and the third coordinate position of the first object on the coordinate system of the current frame image. The action track shows the position change of the first object across the multi-frame images in a dynamic manner, achieving the technical purpose of dynamically monitoring the first object, reducing the probability that the first object cannot be warned in time because it is not monitored, and improving the safety of the first object.
Optionally, in other embodiments, when either the feature overlap ratio between the first object feature and the second object feature or the feature overlap ratio between the first object feature and the third object feature is smaller than the preset value, it is determined that the fifth object is inconsistent with the first object, and an action track of the fifth object in the current frame image is determined based on the fourth coordinate position.
Through the embodiment, under the condition that the fifth object is inconsistent with the first object, the action track of the fifth object in the current frame image can be determined, so that the first object and the fifth object are independently monitored, and the multi-target dynamic monitoring process is further realized.
In an exemplary embodiment, a technical solution of how to determine the mapping relationship between the current frame image and the other frame images is further provided, and the specific steps include: under the condition that a third object exists in the current frame image and the other frame images, acquiring a first image characteristic of the third object in the current frame image and a second image characteristic of the third object in the other frame images; and calculating a transformation matrix between the first image feature and the second image feature according to a preset algorithm to obtain a mapping relation between the first image feature and the second image feature.
It should be noted that the above preset algorithm may be understood as a matching algorithm for feature matching, including, but not limited to, a Hamming-distance matching algorithm, the FLANN fast nearest-neighbor matching algorithm, and the local matching algorithm used in optical flow calculation.
The first image feature and the second image feature may be, for example, point features of an image; extraction methods for point features include, but are not limited to, SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), ORB (Oriented FAST and Rotated BRIEF), the optical flow method, and the like.
Through the above embodiment, by describing in detail the process of obtaining the mapping relationship between the first image feature and the second image feature, the reliability of calculating the action track of the first object in the target area can be improved, which further improves the accuracy of judging whether the last track position of the action track is located in the alarm area.
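Since the mapping relationship is a homography matrix, converting a coordinate position from another frame into the current frame's coordinate system amounts to applying the 3×3 matrix in homogeneous coordinates. A minimal sketch, with a hypothetical translation-only matrix for illustration:

```python
# Applying a 3x3 homography H (the mapping relation) to convert a point from an
# earlier frame into the coordinate system of the current frame. H below is a
# hypothetical pure-translation example; in practice it comes from feature matching.

def apply_homography(H, point):
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]  # homogeneous scale factor
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

H = [[1.0, 0.0, 5.0],   # translate x by 5
     [0.0, 1.0, -3.0],  # translate y by -3
     [0.0, 0.0, 1.0]]
p = apply_homography(H, (10.0, 20.0))
# p == (15.0, 17.0)
```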
In an exemplary embodiment, before the step of performing the step S204 to process the multi-frame image included in the image data to obtain the action track of the first object in the target area, the first object may further be determined from the multi-frame image, and specifically, the process of determining the first object from the multi-frame image includes: acquiring different objects identified in the multi-frame image; identifying the different objects according to the object characteristics of the different objects to obtain object types corresponding to the different objects; and acquiring a fourth object with the same object type in the different objects, and determining the fourth object as the first object under the condition that the identity of the fourth object is consistent with the identity type of the first object.
By the embodiment, the types of different objects in the multi-frame image can be obtained, and the first object is determined according to the object types corresponding to the different objects, so that the determination accuracy of determining the first object is greatly improved.
In an exemplary embodiment, in the process of acquiring the different objects identified in the multi-frame images, a preset first object area may be determined for each of the different objects, and a second object area of each object in the multi-frame images may be acquired; when the difference between the first object area and the second object area is smaller than or equal to a preset value, the object feature corresponding to the first object area is determined as the object feature of that object.
With the above embodiment, the object features of the different objects may be determined by their object areas. For example, suppose the different objects include a sixth object whose preset first object area is 100×100; if the acquired second object area of the sixth object is 90×90, the difference between the two areas is 1900, and the object feature corresponding to the first object area may be determined as the object feature of the sixth object.
Further, the object features may also be determined in combination with the aspect ratios of the different objects, for example, in the case where the first object area of the sixth object is 100×100 and the aspect ratio of the sixth object is 0.5:1.5, the object feature corresponding to the first object area is determined as the object feature of the sixth object.
Alternatively, in other embodiments, the object type of the first object may also be determined directly by the object areas of different objects, for example, if it is determined that the difference between the first object area and the second object area is less than or equal to a preset value, the object type corresponding to the first object area may be determined directly as the object type of the first object.
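The area-based screening above can be sketched as follows; the function name and the threshold value are illustrative assumptions:

```python
# Sketch of the area-difference check described above: an object's detected
# area is accepted when it differs from the preset area by no more than a
# threshold (threshold value is an assumption for illustration).

def area_matches(preset_wh, detected_wh, max_diff):
    preset_area = preset_wh[0] * preset_wh[1]
    detected_area = detected_wh[0] * detected_wh[1]
    return abs(preset_area - detected_area) <= max_diff

# Example from the text: preset 100x100, detected 90x90, difference 1900
ok = area_matches((100, 100), (90, 90), max_diff=2000)
# ok is True
```

An aspect-ratio check, as mentioned in the text, could be combined with this in the same way.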
In an exemplary embodiment, in order to better understand the process of determining whether the first object has an offence according to the last track position of the action track and the position distribution relationship of the alert area in the step S206, the following technical solution may be provided: determining that the first object has an offence under the condition that the last track position is located in the alarm area; and determining that the first object has no illegal action under the condition that the final track position is not located in the alarm area.
Further, the process of determining that the last track position of the action track is located in the alarm area is as follows: acquiring a preset number of continuous track points from all track points included in the last track position, and acquiring, for each of the continuous track points, a track ray at a preset angle; when each track ray intersects the boundary line of the alarm area, acquiring the number of intersection points between each track ray and the boundary line; and when the number of intersection points between each track ray and the boundary line of the alarm area meets the preset condition, determining that the last track position of the action track is located in the alarm area and that the first object has the illegal behavior.
In one exemplary embodiment, after determining that the first object has a violation, the method further comprises: and sending alarm information to the first object under the condition that the final track position is positioned in the alarm area.
By the embodiment, the last track position of the action track can be determined to be positioned in the alarm area, so that the technical purpose of sending alarm information to the first object is achieved, and the safety of the first object can be improved.
In an exemplary embodiment, the process of determining that the last track position of the action track is not located in the alarm area is as follows: acquiring a preset number of continuous track points from all track points included in the last track position, and acquiring, for any of the continuous track points, a track ray at the preset angle; when the number of intersection points between any track ray and the boundary line of the alarm area does not meet the preset condition, determining that the last track position of the action track is not located in the alarm area and that the first object has no illegal behavior.
Optionally, after determining that the last track position of the action track is not located in the alarm area, the coordinate position corresponding to any track point may be stored, and a preset number of other continuous track points are continuously obtained from all track points included in the last track position, and whether the last track position of the action track is located in the alarm area is determined according to the other continuous track points.
Through the above embodiment, a scheme for determining that the last track position of the action track is not located in the alarm area is provided. By acquiring a preset number of continuous track points multiple times, it is confirmed that the last track position is not located in the alarm area, which indirectly improves the accuracy of sending alarm information to the first object when the last track position is determined to be located in the alarm area.
In the above embodiment, the preset condition may be, for example, that the number is odd: if the number of intersection points between each track ray and the boundary line of the alarm area is odd, it is determined that the last track position of the action track is located in the alarm area; if the number of intersection points of any track ray is not odd (for example, even), it is determined that the last track position of the action track is not located in the alarm area.
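The odd/even rule above is the standard ray casting point-in-polygon test; a minimal sketch using a horizontal ray (the polygon and point values are illustrative):

```python
# Ray casting: cast a ray from the track point and count intersections with the
# polygon boundary; an odd count means the point lies inside the alarm area.

def point_in_polygon(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does the horizontal ray to the right of (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside  # odd number of crossings -> inside
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
# point_in_polygon((5, 5), square) is True; point_in_polygon((15, 5), square) is False
```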
In this embodiment, a method for determining an offence is provided and applied to a server, and fig. 3 is a flowchart of a method for determining an offence according to another embodiment of the present application, where the flowchart includes the following steps:
Step S302, a mobile crane is arranged on a guide rail (i.e., a track) to move horizontally, a camera is mounted directly on the device to shoot downward, a perimeter alarm area (i.e., the above alarm area) is initially determined by manual or automatic line drawing, and the relative position of the perimeter alarm area in the video frame remains unchanged while the camera moves with the device;
Step S302 may also be implemented as follows: as shown in fig. 4, which is a schematic view of a target area according to an alternative embodiment of the present application, a camera (i.e., the above image capturing device) may be mounted directly on a mechanical device such as a mobile crane to capture images downward; a device 2 is mounted on a rail 1 so as to move freely along the horizontal direction of the rail 1, a camera 3 is mounted on the device 2, and the capturing direction is aligned vertically downward with the ground 4.
Further, after the video monitoring image acquired by the camera is obtained, the perimeter alarm area of the video monitoring image is determined by manual or automatic line drawing. Manual line drawing includes, but is not limited to, the user setting a custom perimeter alarm area according to scene characteristics and actual requirements; automatic line drawing includes, but is not limited to, setting the perimeter alarm area according to preset conditions after classifying image blocks. The perimeter alarm area can be placed at the center of the image to represent the dangerous area below the device; if an operator is in the dangerous area, an alarm needs to be raised to remind the operator.
Because the video frames shot by the camera change continuously as the camera moves with the crane, different video frames correspond to different perimeter alarm areas in physical space. In one embodiment, the perimeter alarm area can be described with reference to figs. 5(a) and 5(b): the dashed-frame region formed by the perimeter line indicated by the dashed line is the perimeter alarm area. In figs. 5(a) and 5(b), the relative position of the perimeter alarm area in the video frame (i.e., the position and area of the perimeter region relative to the camera's field of view) remains unchanged, but the actual corresponding world/physical coordinates change continuously, while a person moves below.
Step S304, acquiring video images within a certain time interval during the camera's movement, and performing target detection on each image to obtain the position and bounding-box information corresponding to each target in the images;
That is, after acquiring multi-frame video images within a certain time interval while the camera moves with the device, the frames can be recorded as the k-th to (k+n−1)-th frames (n can be set freely), and target detection is performed on each image to obtain the corresponding position and feature information of each target, where k and n are natural numbers.
Target detection methods include, but are not limited to, moving-target detection methods and deep-learning-based target detection networks. The detection results can be screened by target type, target size, and the like; for example, after screening, only the target types of interest are retained, the position information corresponding to each such target is obtained, and the target feature information (used for subsequent target tracking) and image information are saved, thereby maintaining a data queue of length n of target feature information.
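The length-n data queue described above can be sketched with a bounded deque; the per-detection field names are illustrative:

```python
# Maintaining the length-n queue of per-frame target information described
# above, using a bounded deque (field names are illustrative assumptions).
from collections import deque

n = 4  # freely settable, as in the text
frame_queue = deque(maxlen=n)  # the oldest frame is dropped automatically

for frame_idx in range(6):
    detections = [{"frame": frame_idx, "box": (0, 0, 10, 10), "feature": None}]
    frame_queue.append(detections)

# After 6 frames only the newest n=4 entries remain: frames 2..5
# [d[0]["frame"] for d in frame_queue] == [2, 3, 4, 5]
```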
The video images within a certain time interval may be as shown in figs. 5(a) and 5(b), which are consecutive frame images: fig. 5(a) corresponds to "2022-04-13:52:00" and fig. 5(b) to "2022-04-13:52:01"; in fig. 5(a), person A is crossing the track, and in fig. 5(b), person A has crossed it.
Step S306, obtaining the mapping relation between the current frame image and other frame images before the current frame, and obtaining the corresponding target positions of the targets (namely the first objects) of the other frame images before the current frame in the current frame image through coordinate conversion based on the mapping relation between different images;
the step S306 includes the following implementation steps:
and step 1, acquiring a current frame image, and recording the current frame image as a k+n frame.
And 2, obtaining the mapping relation between the current frame image and the k+n-1 frame image.
And step 3, obtaining the corresponding target positions of each target in the previous frame in the current frame through coordinate conversion based on the mapping relation among different images.
And obtaining the mapping relation between the k-th to k+n-1-th frame images and the k+n-th frame images in sequence through a matching algorithm. The matching algorithm includes, but is not limited to, a gray-scale based matching algorithm, a feature-based matching algorithm, and the like.
Taking a feature-based matching algorithm as an example: first, point features of the image are extracted (methods include, but are not limited to, SIFT, SURF, ORB, and the optical flow method), and the extracted features can be screened according to the target detection result; during screening, to reduce the influence of targets in motion, the point features can be restricted to features of static objects, and feature points inside a target frame and related feature points near them can be deleted. Second, feature points are matched using a point matching algorithm, including but not limited to Hamming-distance matching, FLANN fast nearest-neighbor matching, and local matching in optical flow calculation. Finally, mismatched points are filtered out by the RANSAC algorithm, and the transformation matrix between the (k+n−1)-th frame image and the (k+n)-th frame image is solved; the resulting homography matrix is the mapping relationship between the two images.
And combining the mapping relation between the kth to the kth+n-1 frame images and the kth+n frame images in sequence, and obtaining the corresponding target position and target size of each target in the kth to the kth+n-1 frame images in the current frame of the kth+n frame images through coordinate conversion.
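When only adjacent-frame homographies are available, the mapping from an earlier frame to the current frame can be obtained by composing them via matrix multiplication; a pure-Python 3×3 sketch with hypothetical translation-only homographies:

```python
# Composing homographies: H_total = H_last @ ... @ H_first maps a point from
# the earliest frame into the current frame. Translation-only matrices are an
# assumption for illustration; real homographies come from feature matching.

def mat3_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translation(dx, dy):
    return [[1.0, 0.0, dx], [0.0, 1.0, dy], [0.0, 0.0, 1.0]]

# Frame k -> k+1 shifts by (2, 0); frame k+1 -> k+2 shifts by (3, 1)
H_total = mat3_mul(translation(3.0, 1.0), translation(2.0, 0.0))
# H_total moves a frame-k point by (5, 1) into frame k+2 coordinates
```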
Alternatively, the foregoing embodiment may be illustrated by the flowchart of fig. 6, where fig. 6 is a schematic flowchart of a method for determining the offensiveness according to an embodiment of the present application, and specific steps are as follows:
step S602: extracting point characteristics of two images;
step S604: screening the target detection results according to the target types of interest, and eliminating feature points inside the target frames and related feature points near them;
step S606: matching the feature points by using a point matching algorithm;
step S608: screening out error matching points by using a RANSAC algorithm to obtain a homography matrix;
step S610: and combining the mapping relation between the current frame image and other frame images, and obtaining the corresponding target positions of the targets in the other frame images before the current frame in the current frame image through coordinate conversion.
Step S308, performing object detection on the current frame image, inputting together with position information (namely the coordinate position) and size information (corresponding to the object area) of objects of other frame images before the current frame in the current frame image, performing data association, updating the motion trail of the objects, and performing multi-object tracking on a plurality of objects of the current frame image;
Data association methods include, but are not limited to, SORT and DeepSORT. Taking DeepSORT as an example: for a single target in the frame, the coordinate-converted target position and the target position in the current frame are taken as inputs to associate and predict the track, and the position difference between the two is used to predict the next target position, yielding a predicted track. IOU matching and feature matching are performed on the coordinate-converted target frames; when matching succeeds, the same target in different frames is effectively associated, which reduces both the failure to associate the same target across frames and the erroneous association between different targets caused by movement of the video frame. When matching fails, a new target is obtained, and new targets in different frames are associated and matched; in this way, each target in different frames is associated and matched, and accurate multi-target tracking can be realized.
The target position after the coordinate conversion may be a target position obtained after the coordinate conversion of the target of the single frame image, or may be a target position obtained after the coordinate conversion of the target of the multi-frame image, which is not limited in this application.
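The IOU matching step mentioned above compares the coordinate-converted target box with a current-frame detection; a minimal intersection-over-union sketch:

```python
# Intersection-over-union of two axis-aligned boxes, as used in the IOU
# matching step (box format (x1, y1, x2, y2) is an assumption).

def iou(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# Two half-overlapping 10x10 boxes
v = iou((0, 0, 10, 10), (5, 0, 15, 10))
# v == 50 / 150
```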
Step S310, judging whether the target triggers a perimeter alarm according to the perimeter alarm area and the motion track of the target, and sending alarm information to the first object when the perimeter alarm is triggered, i.e., when the last track position of the motion track is located in the alarm area.
In step S310, whether the target triggers the perimeter alarm is determined according to the perimeter alarm area and the target's motion track as follows: judging whether each track point of the motion track is in the perimeter alarm area, using methods including, but not limited to, the area-sum method, the included-angle-sum method, the ray casting method, and the like.
Taking the ray casting method as an example: a ray is cast from the motion track point and the number of its intersection points with the boundary line of the perimeter alarm area is counted; if the number of intersection points is odd, the point is inside the area, and if it is even, the point is outside. For example, if the number of intersection points is 1, the ray crosses the boundary exactly once (as for a ray cast from inside the area), so the point is inside; if the number is 2, the ray enters the perimeter alarm area and then exits, so the point is outside.
Then, a state judgment can be made according to whether each track point is in the perimeter alarm area: if the track points corresponding to the image frames within a certain time interval (such as the current frame and the latest three frames) are in the alarm area, an alarm is raised and the target position is output; if a track point is outside the alarm area, the corresponding target position, feature, and image information are saved, and processing continues with the next frame.
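That state judgment can be sketched as follows; the window size of four frames (current plus latest three) follows the example in the text, and the function name is an assumption:

```python
# Sketch of the state judgment described above: alarm only when the track
# points of the most recent frames (e.g. the current frame plus the latest
# three) all fall inside the alarm area.

def should_alarm(in_area_flags, recent=4):
    """in_area_flags: per-frame in-area booleans, oldest first."""
    if len(in_area_flags) < recent:
        return False
    return all(in_area_flags[-recent:])

# should_alarm([False, True, True, True, True]) is True
# should_alarm([True, True, True, False, True]) is False
```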
According to this embodiment, in a hoisting scene, a camera is mounted directly on the moving crane device and shoots toward the area below it, moving continuously with the device while the relative position of the perimeter alarm area in the video frame remains unchanged. Target detection is performed on the video images acquired during the camera's movement to obtain the position of each target in the images; the position mapping relationships between different images are obtained, and coordinate conversion based on them yields the positions, in the current frame image, of the targets in the frame images before it. Data association is then performed using these positions and the motion tracks of the targets are updated, so that whether a target triggers the perimeter alarm can be judged from the perimeter alarm area and the target's motion track. In this way, violations by personnel under the crane device can be alarmed while the device moves, reducing dangerous events.
In order to better understand the process of the above method, the following description is further provided with reference to an optional embodiment, but is not limited to the technical solution of the embodiment of the present application.
In another exemplary embodiment, a process of determining whether each track point of the motion track exists in the alarm area will be described with reference to fig. 7, and fig. 7 is a flowchart of a method for determining the offensiveness according to still another embodiment of the present application, as shown in fig. 7, and the specific steps are as follows:
step S702: judging whether each track point of the motion track is in a perimeter alarm area or not;
step S704: further determining whether the track points corresponding to the images within a certain time interval are in the perimeter alarm area; if yes, step S706 is executed; if no, step S708 is executed.
Step S706: and alarming and outputting the target position.
Step S708: and updating the motion trail and continuing to process the next frame of image.
Through this embodiment, perimeter alarm technology can be realized, preventing personnel from entering and leaving an area at will; it can play an important role in dangerous places such as railways, expressways, and airports, and in important places such as government institutions and protected areas. By introducing computer vision methods into intelligent monitoring, a mapping is established between the image and its description, so that a computer can analyze and understand the content of the video frame. By custom-setting an alarm area or tripwire in the video frame and combining related technologies such as target recognition, target detection, and target tracking, an alarm is generated automatically when a target enters the area or crosses the tripwire; compared with manual supervision, this is safer, more effective, and less labor-intensive.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk), comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method of the embodiments of the present application.
FIG. 8 is a block diagram of a device for determining violations in accordance with an embodiment of the present application; as shown in fig. 8, includes:
an acquiring module 802, configured to acquire image data acquired by an image acquisition device for a target area within a preset period of time, where the target area includes an area below the target device, and the image acquisition device is disposed at the bottom of the target device, so as to acquire an image of the target area;
It should be noted that the target device may be mounted on a guide rail and moved horizontally along it. During the horizontal movement of the target device, the image capturing device mounted at its bottom also moves horizontally with it.
It should be noted that the above-mentioned preset time period may be set manually in advance, which is not limited in this application. Further, the image data collected within the preset time period may include multiple frames of images captured at successive moments.
An obtaining module 804, configured to process a multi-frame image included in the image data, and obtain an alarm region corresponding to the multi-frame image and a movement track of the first object in the target region;
and a sending module 806, configured to determine whether the first object has an offence according to the last track position of the action track and the position distribution relationship of the alert area, where the offence represents an abnormal behavior associated with the target area.
Through the device, image data acquired by the image acquisition equipment for a target area within a preset time period is obtained, where the target area includes an area below the target equipment, and the image acquisition equipment is arranged at the bottom of the target equipment so as to acquire images of the target area. The multiple frames of images included in the image data are processed to obtain an alarm area corresponding to the multiple frames of images and a movement track of a first object in the target area. Whether the first object has an illegal behavior is then determined according to the positional relationship between the last track position of the action track and the alarm area, where the illegal behavior represents an abnormal behavior associated with the target area. This technical scheme solves the technical problem that illegal behaviors of operators cannot be accurately identified because the shooting area is fixed while hoisting machinery moves along a track; mobile monitoring of operators is realized, and the technical effect of improving the accuracy of identifying operators' illegal behaviors is achieved.
It should be noted that the last track position of the action track may include, but is not limited to, the track points corresponding to one or more frames. For example, when the last track position includes the track point corresponding to a single frame, the track point corresponding to the current frame image may be determined as the last track position, or the track point corresponding to the last frame among the multiple frames may be so determined. When the last track position includes track points corresponding to multiple frames, it may be determined from the track points corresponding to the current frame image and other frame images before it, or from the track points corresponding to the last frame image among the multiple frames and the images before it.
The present application is not limited to this, and it is possible to define that the current frame image and other frame images preceding the current frame image are consecutive frames, or define that the last frame image and images preceding the last frame image are consecutive frames.
In an exemplary embodiment, the obtaining module 804 is further configured to: for each frame of the multi-frame image, divide the frame based on its geometric center to obtain divided multi-frame images, where each frame of the divided multi-frame images has a plurality of division regions; send a calibration prompt message containing the divided multi-frame images to a second object, and receive feedback information sent by the second object in response to the calibration prompt message, where the feedback information includes a result of the second object calibrating the region types of the plurality of division regions of each divided frame, and the region types include: an alert region and a non-alert region; and determine the division region whose region type is indicated as an alert region in the feedback information as the alarm region corresponding to the multi-frame image.
That is, through the above embodiment, the multi-frame image may be divided to obtain the region types of the multiple division regions of each frame, and the alarm region corresponding to the multi-frame image may be determined from the division regions whose region type is the alert region. The alarm region can thus be determined dynamically for the multi-frame image, which lays a foundation for the subsequent process of determining whether the action track of the first object in the target area is located in the alarm region, and thereby improves the feasibility of sending alarm information.
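The division step above can be sketched as follows. This is a hedged illustration only: the patent does not fix the number of division regions, so a split into four quadrants about the geometric center is assumed, and all names (`divide_about_center`, the region labels) are hypothetical.

```python
# Hypothetical sketch: split a frame about its geometric center into four
# quadrants. Each region can then be labelled "alert" or "non-alert" during
# the calibration exchange with the second object (e.g. an operator).

def divide_about_center(width, height):
    """Return named regions as (x0, y0, x1, y1) pixel rectangles."""
    cx, cy = width // 2, height // 2   # geometric center of the frame
    return {
        "top_left":     (0, 0, cx, cy),
        "top_right":    (cx, 0, width, cy),
        "bottom_left":  (0, cy, cx, height),
        "bottom_right": (cx, cy, width, height),
    }

regions = divide_about_center(1920, 1080)
print(regions["top_left"])       # (0, 0, 960, 540)
print(regions["bottom_right"])   # (960, 540, 1920, 1080)
```

A calibration result could then be represented simply as the subset of region names the second object marks as alert regions.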
In an exemplary embodiment, the obtaining module 804 is further configured to: acquiring a current frame image and other frame images before the current frame image in the multi-frame images, and determining a mapping relation between the current frame image and the other frame images; wherein the mapping relationship represents a mathematical relationship required when converting the coordinate position of the first object in the other frame image into the coordinate position on the coordinate system of the current frame image; acquiring a first coordinate position of the first object in the other frame images, and determining a second coordinate position corresponding to the first coordinate position on a coordinate system of the current frame image according to the mapping relation; acquiring a third coordinate position of the first object on a coordinate system of the current frame image; and determining the action track according to the second coordinate positions and the third coordinate positions.
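Because the camera moves with the target device, coordinates observed in an earlier frame must be mapped into the current frame's coordinate system before the track can be assembled. The patent does not fix the mathematical form of this mapping; the sketch below assumes a 3x3 homography (a common choice for a planar ground region), and all names are illustrative.

```python
# Hedged sketch: map a track point from an earlier frame into the coordinate
# system of the current frame via an assumed 3x3 homography H.

def apply_homography(H, point):
    """Map (x, y) through the 3x3 homography H (row-major nested lists)."""
    x, y = point
    w = H[2][0] * x + H[2][1] * y + H[2][2]          # projective scale factor
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# A pure-translation homography: the image content shifted 5 px right and
# 2 px down between the earlier frame and the current frame.
H = [[1, 0, 5],
     [0, 1, 2],
     [0, 0, 1]]

first_coordinate = (100.0, 200.0)                     # position in earlier frame
second_coordinate = apply_homography(H, first_coordinate)
print(second_coordinate)                              # (105.0, 202.0)
```

In this reading, the "first coordinate position" is the raw position in the other frame, and the "second coordinate position" is its image under the mapping in the current frame's coordinate system.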
Further, the obtaining module 804 is further configured to: determining the sum of the following parameters: the third coordinate position, the second coordinate position and the position difference of the third coordinate position; determining a fourth coordinate position on the coordinate system of the current frame image according to the sum value, wherein the fourth coordinate position is the position of a fifth object on the coordinate system of the current frame image; determining a first object feature when the fifth object is positioned at the fourth coordinate position, a second object feature corresponding to the first object when the first object is positioned at the second coordinate position, and a third object feature corresponding to the first object when the first object is positioned at the third coordinate position; and determining that the fifth object is consistent with the first object under the condition that the feature overlap ratio between the first object feature and the second object feature is larger than a preset value and the feature overlap ratio between the first object feature and the third object feature is larger than the preset value, and determining the action track of the first object in the current frame image according to the second coordinate position, the third coordinate position and the fourth coordinate position.
The obtaining module 804 is further configured to: determining a plurality of position differences of the second coordinate positions and the third coordinate positions by: determining second coordinate positions corresponding to first frame images of a plurality of frames of other frame images from the second coordinate positions, determining second coordinate positions corresponding to tail frame images of the other frame images, and determining position offset values between the second coordinate positions corresponding to the first frame images of the other frame images and the second coordinate positions corresponding to the tail frame images of the other frame images; and acquiring the frame numbers of the other frame images, determining the ratio of the position offset to the frame numbers as a target second coordinate position, and determining the position difference value of the target second coordinate position and the third coordinate position as the position difference value of the second coordinate positions and the third coordinate position.
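One plausible reading of the offset computation above is: take the displacement between the second coordinate positions of the first and last of the other frames, average it over the frame count, and add the averaged step to the current (third) position to predict where the object should appear next. This is a sketch of that reading only; the variable names are illustrative and not taken from the patent.

```python
# Hedged sketch: average per-frame displacement over the earlier frames,
# then predict the next position (the "fourth coordinate position") from
# the current position plus that averaged step.

def predict_next_position(second_positions, third_position):
    """second_positions: mapped (x, y) track points in the earlier frames;
    third_position: (x, y) observed in the current frame."""
    n = len(second_positions)                       # frame count of other frames
    first_x, first_y = second_positions[0]
    last_x, last_y = second_positions[-1]
    # position offset between first and last frame, averaged over the count
    step = ((last_x - first_x) / n, (last_y - first_y) / n)
    return (third_position[0] + step[0], third_position[1] + step[1])

positions = [(100.0, 50.0), (102.0, 51.0), (104.0, 52.0), (108.0, 54.0)]
print(predict_next_position(positions, (112.0, 56.0)))   # (114.0, 57.0)
```

The predicted position can then be compared, via the feature overlap ratios described above, against the features observed at the second and third coordinate positions to decide whether the object at the predicted position is the same first object.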
Through this embodiment, the action track of the first object in the target area is determined from the second coordinate position (the first object's position mapped onto the coordinate system of the current frame image via the mapping relationship between the current frame and the other frames) and the third coordinate position (the first object's position observed directly on the coordinate system of the current frame image). The action track can thus show the positional change of the first object across the multi-frame image dynamically, achieving the technical purpose of dynamically monitoring the first object, reducing the probability that the first object cannot be warned in time because it is not monitored, and improving the safety of the first object.
Optionally, in other embodiments, when the feature overlap ratio between the first object feature and the second object feature, or the feature overlap ratio between the first object feature and the third object feature, is smaller than the preset value, it is determined that the fifth object is inconsistent with the first object, and an action track of the fifth object in the current frame image is determined based on the fourth coordinate position.
Through the embodiment, under the condition that the fifth object is inconsistent with the first object, the action track of the fifth object in the current frame image can be determined, so that the first object and the fifth object are independently monitored, and the multi-target dynamic monitoring process is further realized.
In an exemplary embodiment, the obtaining module 804 is further configured to: under the condition that a third object exists in the current frame image and the other frame images, acquiring a first image characteristic of the third object in the current frame image and a second image characteristic of the third object in the other frame images; and calculating a transformation matrix between the first image feature and the second image feature according to a preset algorithm to obtain a mapping relation between the first image feature and the second image feature.
It should be noted that the above-mentioned preset algorithm may be understood as a matching algorithm used when performing feature matching, including but not limited to the Hamming distance matching algorithm, the FLANN fast nearest-neighbor matching algorithm, and local matching algorithms used in optical flow calculation.
The first image feature and the second image feature may be, for example, point features of an image, and extraction methods of the point features include, but are not limited to, SIFT (Scale-Invariant Feature Transform, scale invariant feature transform) method, SURF (Speeded Up Robust Features, accelerated robust feature) method, ORB (Oriented FAST and Rotated BRIEF) method, optical flow method, and the like.
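As one concrete illustration of the matching step, binary descriptors such as ORB's can be matched by Hamming distance (one of the strategies listed above). The sketch below represents descriptors as Python ints; a real pipeline would obtain them from a feature extractor such as ORB, and the function names here are illustrative.

```python
# Hedged sketch: brute-force Hamming-distance matching between the binary
# feature descriptors of two frames. The matched pairs would then feed the
# transformation-matrix (mapping relation) estimation.

def hamming(a, b):
    """Number of differing bits between two integer-encoded descriptors."""
    return bin(a ^ b).count("1")

def match(descs_a, descs_b):
    """For each descriptor in descs_a, the index of its nearest neighbor
    in descs_b under Hamming distance."""
    return [min(range(len(descs_b)), key=lambda j: hamming(d, descs_b[j]))
            for d in descs_a]

frame1 = [0b10110010, 0b01001101]
frame2 = [0b01001100, 0b10110011]   # the same two features, one bit flipped each
print(match(frame1, frame2))        # [1, 0]
```

In practice a ratio test or cross-check would be added to reject ambiguous matches before estimating the transformation matrix from the surviving correspondences.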
The above embodiment, by describing in detail how the mapping relationship between the first image feature and the second image feature is obtained, improves the reliability of calculating the action track of the first object in the target area, and thereby improves the accuracy of judging whether the last track position of the action track is located in the alarm area.
In an exemplary embodiment, the obtaining module 804 is further configured to: determining the first object from the multi-frame image, wherein the determining the first object from the multi-frame image includes: acquiring different objects identified in the multi-frame image; identifying the different objects according to the object characteristics of the different objects to obtain object types corresponding to the different objects; and acquiring a fourth object with the same object type in the different objects, and determining the fourth object as the first object under the condition that the identity of the fourth object is consistent with the identity type of the first object.
By the embodiment, the types of different objects in the multi-frame image can be obtained, and the first object is determined according to the object types corresponding to the different objects, so that the determination accuracy of determining the first object is greatly improved.
In an exemplary embodiment, the obtaining module 804 is further configured to: for each object of the different objects, determining a first object area preset for each object, and acquiring a second object area of each object in the multi-frame image; and determining the object feature corresponding to the first object area as the object feature of each object under the condition that the difference value between the first object area and the second object area is smaller than or equal to a preset value.
With the above embodiment, the object features of the different objects may be determined from their object areas. For example, suppose the different objects include a sixth object whose preset first object area is 100×100; if the obtained second object area of the sixth object is 90×90, the difference between the two areas is 1900, and the object feature corresponding to the first object area may be determined as the object feature of the sixth object.
Further, the object features may also be determined in combination with the aspect ratios of the different objects, for example, in the case where the first object area of the sixth object is 100×100 and the aspect ratio of the sixth object is 0.5:1.5, the object feature corresponding to the first object area is determined as the object feature of the sixth object.
Alternatively, in other embodiments, the object type of the first object may also be determined directly by the object areas of different objects, for example, if it is determined that the difference between the first object area and the second object area is less than or equal to a preset value, the object type corresponding to the first object area may be determined directly as the object type of the first object.
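The area check in the worked example above can be sketched as follows. The function name and threshold value are illustrative assumptions; the 100×100 and 90×90 figures mirror the example in the text.

```python
# Hedged sketch: accept the preset object features only when the observed
# bounding-box area is within a preset threshold of the preset area.

def area_matches(preset_wh, observed_wh, threshold):
    """preset_wh / observed_wh: (width, height) of the preset and observed
    bounding boxes; True when the area difference is within the threshold."""
    preset_area = preset_wh[0] * preset_wh[1]
    observed_area = observed_wh[0] * observed_wh[1]
    return abs(preset_area - observed_area) <= threshold

print(area_matches((100, 100), (90, 90), threshold=2000))  # True  (diff 1900)
print(area_matches((100, 100), (70, 70), threshold=2000))  # False (diff 5100)
```

An aspect-ratio check, as mentioned above, could be combined with this in the same way to make the feature assignment stricter.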
In an exemplary embodiment, the sending module 806 is further configured to: acquire a preset number of continuous track points from all track points included in the last track position, and acquire a track ray of each of the continuous track points at a preset angle; when it is determined that each track ray has intersection points with the boundary line of the alarm region, acquire the number of intersection points of each track ray with the boundary line of the alarm region; and when the number of intersection points of each track ray with the boundary line of the alarm region meets the preset condition, determine that the last track position of the action track is located in the alarm region, and determine that the first object has the illegal behavior.
Further, the sending module 806 is further configured to: after determining that the first object has an illegal behavior, send alarm information to the first object in a case where the last track position is located in the alarm area.
By the embodiment, the last track position of the action track can be determined to be positioned in the alarm area, so that the technical purpose of sending alarm information to the first object is achieved, and the safety of the first object can be improved.
In an exemplary embodiment, the sending module 806 is further configured to: acquiring a preset number of continuous track points from all track points included in the final track position, and acquiring any track ray of any track point of the continuous track points at the preset angle; and under the condition that the number of the intersections of any track ray and the boundary line of the alarm region does not meet the preset condition, determining that the last track position of the action track is not positioned in the alarm region, and determining that the first object has no illegal action.
Optionally, after determining that the last track position of the action track is not located in the alarm area, the coordinate position corresponding to any track point may be stored, and a preset number of other continuous track points are continuously obtained from all track points included in the last track position, and whether the last track position of the action track is located in the alarm area is determined according to the other continuous track points.
The above embodiment provides a scheme for determining that the last track position of the action track is not located in the alarm area: by acquiring a preset number of continuous track points multiple times and confirming that the condition is not met, it indirectly improves the accuracy of sending alarm information to the first object when the last track position is determined to be located in the alarm area.
In the above embodiment, the preset condition may be, for example, that the number of intersection points is odd. The number of intersection points of each track ray with the boundary line of the alert area then meets the preset condition when it is odd, in which case the last track position of the action track is determined to be located in the alert area. Conversely, if the number of intersection points of any track ray with the boundary line of the alert area does not meet the preset condition, that is, it is not odd (for example, it is even), the last track position of the action track is determined not to be located in the alert area.
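The odd-intersection rule above is the classic ray-casting point-in-polygon test: a ray from a track point crosses the alarm region's boundary an odd number of times exactly when the point lies inside. A minimal sketch, casting a horizontal ray to the right (the patent leaves the preset angle open):

```python
# Ray-casting test: is a track point inside the polygonal alarm region?
# One horizontal ray is cast to the right; each boundary-edge crossing
# toggles the inside/outside state, so an odd crossing count means inside.

def in_alarm_region(point, polygon):
    """polygon: list of (x, y) vertices of the alarm region boundary."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        # does this edge straddle the ray's height y?
        if (y1 > y) != (y2 > y):
            # x coordinate where the edge crosses the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:              # crossing is to the right of the point
                inside = not inside      # toggle on each crossing
    return inside

region = [(0, 0), (10, 0), (10, 10), (0, 10)]    # square alarm region
print(in_alarm_region((5, 5), region))           # True  (1 crossing)
print(in_alarm_region((15, 5), region))          # False (0 crossings)
```

Applying the test to a preset number of consecutive track points, as described above, makes the decision robust against a single mislocalized detection.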
Embodiments of the present application also provide a storage medium including a stored program, wherein the program performs the method of any one of the above when run.
Alternatively, in the present embodiment, the above-described storage medium may be configured to store program code for performing the steps of:
s11, acquiring image data acquired by an image acquisition device for a target area in a preset time period, wherein the target area comprises an area below the target device, and the image acquisition device is arranged at the bottom of the target device so as to acquire images of the target area;
s12, processing a plurality of frames of images included in the image data to obtain an alarm area corresponding to the plurality of frames of images and a movement track of a first object in the target area;
s13, determining whether the first object has illegal behaviors according to the final track position of the action track and the position distribution relation of the alarm area, wherein the illegal behaviors represent abnormal behaviors associated with the target area.
Embodiments of the present application also provide an electronic device comprising a memory having a computer program stored therein and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
Optionally, the electronic apparatus may further include a transmission device and an input/output device, where the transmission device is connected to the processor, and the input/output device is connected to the processor.
Alternatively, in the present embodiment, the above-described processor may be configured to execute the following steps by a computer program:
s11, acquiring image data acquired by an image acquisition device for a target area in a preset time period, wherein the target area comprises an area below the target device, and the image acquisition device is arranged at the bottom of the target device so as to acquire images of the target area;
s12, processing a plurality of frames of images included in the image data to obtain an alarm area corresponding to the plurality of frames of images and a movement track of a first object in the target area;
s13, determining whether the first object has illegal behaviors according to the final track position of the action track and the position distribution relation of the alarm area, wherein the illegal behaviors represent abnormal behaviors associated with the target area.
Alternatively, in the present embodiment, the storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or various other media capable of storing program code.
Alternatively, specific examples in this embodiment may refer to examples described in the foregoing embodiments and optional implementations, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented in a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices. Alternatively, they may be implemented in program code executable by computing devices, so that they may be stored in a memory device for execution by the computing devices; in some cases the steps shown or described may be performed in a different order than shown here. They may also be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principles of the present application should be included in the protection scope of the present application.

Claims (14)

1. A method for determining an offence, comprising:
acquiring image data acquired by an image acquisition device aiming at a target area in a preset time period, wherein the target area comprises an area below the target device, and the image acquisition device is arranged at the bottom of the target device so as to acquire images of the target area;
processing a plurality of frames of images included in the image data to obtain an alarm area corresponding to the plurality of frames of images and a movement track of a first object in the target area;
and determining whether the first object has an illegal action according to the final track position of the action track and the position distribution relation of the alarm area, wherein the illegal action represents the abnormal action associated with the target area.
2. The method for determining an offence according to claim 1, wherein processing a plurality of frame images included in the image data to obtain an alarm region corresponding to the plurality of frame images, comprises:
dividing each frame of image based on the geometric center of each frame of image aiming at each frame of image of the multi-frame image to obtain divided multi-frame images, wherein each frame of image of the divided multi-frame images is provided with a plurality of dividing areas;
Transmitting a calibration prompt message containing the divided multi-frame images to a second object, and receiving feedback information transmitted by the second object in response to the calibration prompt message, wherein the feedback information comprises a result of calibrating region types of a plurality of division regions of each frame of image of the divided multi-frame images by the second object, and the region types comprise: an alert region and a non-alert region;
and determining the dividing region with the region type indicated in the feedback information as an alarm region corresponding to the multi-frame image.
3. The method for determining an offence according to claim 1, wherein processing the multi-frame image included in the image data to obtain a movement trace of the first object in the target area includes:
acquiring a current frame image and other frame images before the current frame image in the multi-frame images, and determining a mapping relation between the current frame image and the other frame images; wherein the mapping relationship represents a mathematical relationship required when converting the coordinate position of the first object in the other frame image into the coordinate position on the coordinate system of the current frame image;
Acquiring a first coordinate position of the first object in the other frame images, and determining a second coordinate position corresponding to the first coordinate position on a coordinate system of the current frame image according to the mapping relation;
acquiring a third coordinate position of the first object on a coordinate system of the current frame image; and determining the action track according to the second coordinate positions and the third coordinate positions.
4. The method for determining an offence according to claim 3, wherein determining the action trajectory from a plurality of the second coordinate positions and the third coordinate positions includes:
determining the sum of the following parameters: the third coordinate position, a plurality of position differences of the second coordinate positions and the third coordinate position;
determining a fourth coordinate position on the coordinate system of the current frame image according to the sum value, wherein the fourth coordinate position is the position of a fifth object on the coordinate system of the current frame image;
determining a first object feature when the fifth object is positioned at the fourth coordinate position, a second object feature corresponding to the first object when the first object is positioned at the second coordinate position, and a third object feature corresponding to the first object when the first object is positioned at the third coordinate position;
And determining that the fifth object is consistent with the first object under the condition that the feature overlap ratio between the first object feature and the second object feature is larger than a preset value and the feature overlap ratio between the first object feature and the third object feature is larger than the preset value, and determining the action track of the first object in the current frame image according to the second coordinate position, the third coordinate position and the fourth coordinate position.
5. A method of determining an offence according to claim 3, characterized in that the position difference values of a plurality of the second coordinate positions and the third coordinate positions are determined by:
determining second coordinate positions corresponding to first frame images of a plurality of frames of other frame images from the second coordinate positions, determining second coordinate positions corresponding to tail frame images of the other frame images, and determining position offset values between the second coordinate positions corresponding to the first frame images of the other frame images and the second coordinate positions corresponding to the tail frame images of the other frame images;
and acquiring the frame numbers of the other frame images, determining the ratio of the position offset to the frame numbers as a target second coordinate position, and determining the position difference value of the target second coordinate position and the third coordinate position as the position difference value of the second coordinate positions and the third coordinate position.
6. A method of determining an offence according to claim 3, wherein determining a mapping relationship between the current frame image and the other frame images includes:
under the condition that a third object exists in the current frame image and the other frame images, acquiring a first image characteristic of the third object in the current frame image and a second image characteristic of the third object in the other frame images;
and calculating a transformation matrix between the first image feature and the second image feature according to a preset algorithm to obtain a mapping relation between the first image feature and the second image feature.
7. The method according to claim 1, characterized in that before processing the multi-frame image included in the image data to obtain the action track of the first object in the target area, the method further comprises:
determining the first object from the multi-frame image, wherein the determining the first object from the multi-frame image includes:
acquiring different objects identified in the multi-frame image;
identifying the different objects according to the object characteristics of the different objects to obtain object types corresponding to the different objects;
And acquiring a fourth object with the same object type in the different objects, and determining the fourth object as the first object under the condition that the identity of the fourth object is consistent with the identity type of the first object.
8. The method of determining an offence according to claim 7, characterized in that in acquiring different objects identified in the multi-frame image, the method further comprises:
for each object of the different objects, determining a first object area preset for each object, and acquiring a second object area of each object in the multi-frame image; and determining the object feature corresponding to the first object area as the object feature of each object under the condition that the difference value between the first object area and the second object area is smaller than or equal to a preset value.
9. The method according to claim 1, wherein determining whether the first object has an offence according to a positional distribution relation between a last track position of the action track and the alert area, comprises:
acquiring a preset number of continuous track points from all track points included in the final track position, and acquiring each track ray of each track point of the continuous track points on a preset angle; acquiring the number of intersection points of each track ray and the boundary line of the alarm region under the condition that the boundary line of each track ray and the alarm region are determined to have the intersection points; and under the condition that the number of the intersections of each track ray and the boundary line of the alarm region meets the preset condition, if the last track position of the action track is determined to be positioned in the alarm region, determining that the first object has the illegal action.
10. The method according to claim 1, characterized in that determining whether the first object has an offence according to the positional distribution relation between the last track position of the action track and the alert area comprises:
acquiring a preset number of consecutive track points from all track points included in the last track position, and acquiring a track ray of any one of the consecutive track points at the preset angle; and
determining, when the number of intersection points between that track ray and the boundary line of the alert area does not satisfy the preset condition, that the last track position of the action track is not located within the alert area and that the first object has no offence.
11. The method according to claim 9, further comprising, after determining that the first object has an offence:
sending alarm information to the first object when the last track position is located within the alert area.
12. A device for determining an offence, characterized by comprising:
an acquisition module configured to acquire image data captured by an image acquisition device for a target area within a preset time period, wherein the image acquisition device is arranged at the bottom of a target device so as to capture images of the area below the target device;
a processing module configured to process multi-frame images included in the image data to obtain an alert area corresponding to the multi-frame images and an action track of a first object in the target area; and
a determining module configured to determine whether the first object has an offence according to the positional distribution relation between the last track position of the action track and the alert area, wherein the offence represents an abnormal behavior associated with the target area.
13. A computer-readable storage medium, characterized in that the computer-readable storage medium comprises a stored program, wherein the program, when run, performs the method of any one of claims 1 to 11.
14. An electronic device comprising a memory and a processor, characterized in that the memory stores a computer program, and the processor is arranged to execute the method according to any one of claims 1 to 11 by means of the computer program.
CN202211656957.2A 2022-12-22 2022-12-22 Method and device for determining illegal behaviors, storage medium and electronic device Pending CN116012939A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211656957.2A CN116012939A (en) 2022-12-22 2022-12-22 Method and device for determining illegal behaviors, storage medium and electronic device

Publications (1)

Publication Number Publication Date
CN116012939A true CN116012939A (en) 2023-04-25

Family

ID=86036706

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211656957.2A Pending CN116012939A (en) 2022-12-22 2022-12-22 Method and device for determining illegal behaviors, storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN116012939A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117557201A (en) * 2024-01-12 2024-02-13 国网山东省电力公司菏泽供电公司 Intelligent warehouse safety management system and method based on artificial intelligence
CN117557201B (en) * 2024-01-12 2024-04-12 国网山东省电力公司菏泽供电公司 Intelligent warehouse safety management system and method based on artificial intelligence

Similar Documents

Publication Publication Date Title
CN110650316A (en) Intelligent patrol and early warning processing method and device, electronic equipment and storage medium
CN108983806B (en) Method and system for generating area detection and air route planning data and aircraft
CN102842211B (en) Monitoring and early warning system and monitoring and early warning method for prevention of external force of transmission line based on image recognition
CN110769195B (en) Intelligent monitoring and recognizing system for violation of regulations on power transmission line construction site
CN109409238B (en) Obstacle detection method and device and terminal equipment
CN111310947A (en) Building facility operation and maintenance method, equipment, storage medium and system based on 5G
CN104966304A (en) Kalman filtering and nonparametric background model-based multi-target detection tracking method
CN110659391A (en) Video detection method and device
US20230005176A1 (en) Throwing position acquisition method and apparatus, computer device and storage medium
CN110285801B (en) Positioning method and device for intelligent safety helmet
CN109996182B (en) Positioning method, device and system based on combination of UWB positioning and monitoring
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
CN107992819A (en) A kind of definite method and apparatus of vehicle attribute structured features
CN112434566B (en) Passenger flow statistics method and device, electronic equipment and storage medium
CN112329691A (en) Monitoring video analysis method and device, electronic equipment and storage medium
CN113963475A (en) Transformer substation personnel safety management method and system
CN111753612A (en) Method and device for detecting sprinkled object and storage medium
CN116012939A (en) Method and device for determining illegal behaviors, storage medium and electronic device
CN113421044A (en) Dangerous waste transportation monitoring method and device based on Internet of things and computer equipment
CN112270253A (en) High-altitude parabolic detection method and device
CN112282819B (en) Comprehensive mining working face personnel target safety monitoring method and system based on vision
CN112464755A (en) Monitoring method and device, electronic equipment and storage medium
CN114040094A (en) Method and equipment for adjusting preset position based on pan-tilt camera
CN111652128B (en) High-altitude power operation safety monitoring method, system and storage device
CN115861236A (en) Method and device for determining dripping event, storage medium and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination