CN113114994A - Behavior sensing method, device and equipment - Google Patents
- Publication number: CN113114994A
- Application number: CN202110378299.4A
- Authority: CN (China)
- Prior art keywords: wearer, data, image, behavior, acquired
- Prior art date: 2021-04-08
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application belongs to the technical field of construction, and provides a behavior sensing method, device and equipment. The method comprises the following steps: acquiring image data collected by an image sensing device arranged on a safety helmet and displacement data collected by an inertial sensor arranged on the safety helmet; generating a bone posture of the wearer and a three-dimensional scene image around the wearer from the acquired image data; determining trajectory data of the wearer from the displacement data; and sensing the behavior of the wearer according to the bone posture, the three-dimensional scene image and the trajectory data. Because the behavior is sensed from the combination of the wearer's bone posture, the surrounding three-dimensional scene and the movement trajectory, the wearer's behavior can be sensed more accurately and effectively, and more accurate behavior sensing data can be obtained.
Description
Technical Field
The application belongs to the technical field of construction, and particularly relates to a behavior sensing method, a behavior sensing device and behavior sensing equipment.
Background
During building construction, workers usually need to carry out operations according to preset operating rules. However, for reasons such as insufficient supervision, operations may not be performed according to the established rules, which can lead to safety accidents during construction, causing property loss or personal injury and affecting the progress of the project.
In order to make workers' construction operations more standardized, their behavior needs to be monitored during construction. For example, the behavior of a worker can be perceived through cameras arranged in the construction scene, so as to reduce the occurrence of non-standard behaviors. However, limited by what the monitoring images can capture, the current approach to worker behavior perception cannot sense a worker's behavior comprehensively, and is not conducive to acquiring accurate behavior perception data.
Disclosure of Invention
In view of this, embodiments of the present application provide a behavior sensing method, apparatus, and device, so as to solve the problem in the prior art that, due to the limitation of monitoring images, a worker's behavior cannot be sensed comprehensively and accurate behavior sensing data cannot be obtained.
A first aspect of an embodiment of the present application provides a behavior awareness method, including:
acquiring image data acquired by image sensing equipment arranged on a safety helmet and displacement data acquired by an inertial sensor arranged on the safety helmet;
generating a bone posture of the wearer and a three-dimensional scene image around the wearer according to the acquired image data;
determining trajectory data of the wearer from the displacement data;
and sensing the behavior of the wearer according to the bone posture, the three-dimensional scene image and the trajectory data.
With reference to the first aspect, in a first possible implementation manner of the first aspect, the image sensing device includes a first camera and a second camera, and acquiring image data acquired by the image sensing device disposed on a helmet includes:
acquiring a first-person visual image of a wearer, which is acquired by a first camera arranged on a safety helmet;
acquiring a third-person visual image of the wearer, which is captured by a second camera arranged on the safety helmet.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner of the first aspect, generating a skeletal pose of the wearer from the acquired image data includes:
identifying bone key points of the wearer according to the third-person visual image of the wearer acquired by the second camera;
determining a skeletal pose of the wearer from the identified skeletal keypoint information of the wearer.
With reference to the first possible implementation manner of the first aspect, in a third possible implementation manner of the first aspect, generating a three-dimensional scene image around a wearer according to acquired image data includes:
acquiring a three-dimensional scene image around the wearer by a simultaneous localization and mapping method according to the first-person visual image of the wearer acquired by the first camera.
With reference to the third possible implementation manner of the first aspect, in a fourth possible implementation manner of the first aspect, the determining trajectory data of the wearer according to the displacement data includes:
generating trajectory data of the wearer according to the displacement data acquired by the inertial sensor;
correcting the trajectory data according to the position, in the three-dimensional scene image, of the first-person visual image acquired by the first camera.
With reference to the first possible implementation manner of the first aspect, in a fifth possible implementation manner of the first aspect, after perceiving the wearer behavior according to the bone pose, the three-dimensional scene image, and the trajectory data, the method further includes:
when abnormal behavior of the wearer is detected, inputting the perceived behavior of the wearer into a preset behavior analysis network model to obtain an analysis result of the perceived behavior.
A second aspect of an embodiment of the present application provides a behavior awareness apparatus, including:
the data acquisition unit is used for acquiring image data acquired by image sensing equipment arranged on the safety helmet and displacement data acquired by an inertial sensor arranged on the safety helmet;
the image data processing unit is used for generating a bone posture of the wearer and a three-dimensional scene image around the wearer according to the acquired image data;
a trajectory data generation unit for determining trajectory data of the wearer according to the displacement data;
and the behavior sensing unit is used for sensing the behavior of the wearer according to the bone posture, the three-dimensional scene image and the trajectory data.
A third aspect of the embodiments of the present application provides a behavior-aware safety helmet. The helmet includes a first camera for acquiring a first-person visual image of the wearer, a second camera for acquiring a third-person visual image of the wearer, an inertial sensor for acquiring displacement data of the wearer, and a communication module for sending the data acquired by the first camera, the second camera and the inertial sensor to a server, so that the server performs behavior perception analysis on the acquired data.
A fourth aspect of embodiments of the present application provides a behavior awareness apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method according to any one of the first aspect when executing the computer program.
A fifth aspect of embodiments of the present application provides a computer-readable storage medium, in which a computer program is stored, which, when executed by a processor, implements the steps of the method according to any one of the first aspects.
Compared with the prior art, the embodiments of the present application have the following advantages: image data is collected by an image sensing device arranged on the safety helmet, and displacement data is collected by an inertial sensor arranged on the safety helmet; the bone posture of the helmet wearer and a three-dimensional scene image around the wearer are determined from the collected image data, and the trajectory data of the wearer is determined from the collected displacement data. By combining the three-dimensional scene image, the trajectory data and the bone posture, the wearer's behavior can be perceived more accurately and effectively, which is conducive to obtaining more accurate behavior perception data.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic structural diagram of a safety helmet provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of an implementation of a behavior sensing method provided in an embodiment of the present application;
fig. 3 is a schematic diagram of a behavior sensing apparatus according to an embodiment of the present application;
fig. 4 is a schematic diagram of a behavior awareness apparatus provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Sensing the behavior of workers during construction has several positive effects. For example, workers' operating habits can be analyzed through behavior perception, and erroneous and high-risk behaviors can be objectively recorded, so that targeted skill education and training can be carried out. The workload of a worker can also be evaluated objectively and quantitatively based on the perception of the worker's behavior, providing an effective basis for production performance evaluation. The construction progress can further be tracked in real time by sensing the construction environment in real time. In addition, the cause of an accident can be determined from behavior perception data and the image record of the construction process, which facilitates the division and definition of responsibility.
However, current behavior sensing approaches generally perceive behavior from images collected by cameras installed on the construction site. Because such images suffer from monitoring blind spots, it is difficult to perceive workers' behavior comprehensively and effectively, and accurate behavior perception data cannot easily be obtained.
To overcome the above problems, an embodiment of the present application provides a safety helmet for behavior sensing; fig. 1 is a schematic structural diagram of the helmet. In addition to the components of an ordinary helmet, the helmet described herein further comprises an image sensing device 101 and an inertial sensor 102, which may be arranged at the brim of the helmet. In a possible implementation, the image sensing device 101 may include a first camera used to capture a first-person visual image of the wearer and a second camera used to capture a third-person visual image of the wearer. Both cameras may be wide-angle cameras, for example fisheye cameras. The first camera may be installed at the front edge of the helmet with its capture direction facing forward and downward, and the second camera may be installed at the edge of the helmet facing forward. The inertial sensor may be arranged at the rear edge of the helmet or elsewhere, and may be used to detect data such as the wearer's moving speed, acceleration and moving direction, so that the wearer's trajectory data can be determined from the detected movement information.
The safety helmet further comprises a communication module, which may be used to send the collected data to a server, so that the server can analyze and process the collected data, determine the wearer's bone posture, the three-dimensional scene image around the wearer, the wearer's movement trajectory and the like, and perceive the wearer's behavior according to the determined data.
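By way of illustration only, the sketch below shows one way a helmet-side client could bundle a pair of camera frames and a nine-axis inertial sample into a JSON message for the server. The endpoint URL, field names and encoding are hypothetical; the application does not specify a transmission format.

```python
import base64
import json
import requests  # assumed available; any HTTP client would do


def send_sensor_frame(server_url, first_person_jpg, third_person_jpg, imu_sample):
    """Send one helmet sensor frame to the server (hypothetical wire format)."""
    payload = {
        "first_person_image": base64.b64encode(first_person_jpg).decode("ascii"),
        "third_person_image": base64.b64encode(third_person_jpg).decode("ascii"),
        # nine-axis inertial sample: accelerometer, gyroscope, magnetometer
        "imu": {
            "accel": imu_sample["accel"],  # m/s^2, 3 axes
            "gyro": imu_sample["gyro"],    # rad/s, 3 axes
            "mag": imu_sample["mag"],      # uT, 3 axes
        },
        "timestamp_ms": imu_sample["timestamp_ms"],
    }
    resp = requests.post(server_url, data=json.dumps(payload),
                         headers={"Content-Type": "application/json"}, timeout=5)
    resp.raise_for_status()
```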
Fig. 2 is a schematic flow chart of an implementation process of a behavior sensing method provided in an embodiment of the present application, and as shown in fig. 2, the method includes:
in S201, image data acquired by an image sensing device provided on a helmet and displacement data acquired by an inertial sensor provided on the helmet are acquired.
The image sensing device in the embodiments of the present application may be a camera, and may include one or more cameras. By arranging a sufficient number of cameras, the behavior data of the wearer and the three-dimensional scene data around the wearer can be obtained more comprehensively. The behavior data of the wearer can be represented by the wearer's bone posture.
In a possible implementation, the image data can be acquired by wide-angle cameras, for example fisheye cameras. Two cameras, a first camera and a second camera, can be provided for image data acquisition. The first camera acquires a first-person visual image of the helmet wearer, which can be used to obtain the three-dimensional scene image around the wearer; the second camera acquires a third-person visual image of the wearer, which can be used to obtain posture data of the wearer, such as the bone posture.
The inertial sensor can be a nine-axis sensor and can be used to collect data such as the wearer's spatial acceleration, spatial angular velocity and spatial magnetic field strength. The data collected by the inertial sensor make it possible to calculate the wearer's movement, so that the wearer's trajectory information can be determined from the movement data.
In the embodiments of the present application, in order to perform behavior perception on the wearer effectively, whether the helmet is being worn can be detected through a wearing-state sensing module. When the helmet is in the worn state, acquisition of the image data and the displacement data is started.
In a possible implementation, a correspondence between helmets and workers may be established. According to a preset construction schedule, when the worker corresponding to a helmet is within a working period but the helmet is detected not to be worn, corresponding prompt information can be sent, for example to the worker corresponding to the helmet or to the project manager.
In a possible implementation, an image including the wearer's face can be collected by a camera arranged on the helmet, and face recognition can be used to judge whether the current wearer is the worker corresponding to the helmet; if not, a wearing-error prompt can be generated. Alternatively, the data related to the detected current wearer may be recorded automatically, so that the behavior perception record is associated with the identified wearer.
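As a minimal sketch of the face-check step, the code below uses OpenCV's bundled Haar cascade as a stand-in face detector; the application does not name a specific recognition algorithm, and a real system would match the detected face against the registered worker with an additional recognition model.

```python
import cv2


def wearer_face_detected(frame_bgr):
    """Return True if at least one face is visible in the helmet camera frame."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```

Identity verification against the worker assigned to the helmet (for example, with a face-embedding model) would follow this detection step.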
In S202, a skeletal pose of the wearer and a three-dimensional scene image around the wearer are generated from the acquired image data.
In the embodiments of the present application, a third-person visual image of the wearer can be acquired by the second camera of the image sensing device, and the positions of the bone joint points of the wearer's hands and feet can be determined from the arm and foot regions included in the third-person visual image.
When identifying the bone node positions of the hands or feet, matching can be performed against preset feature information of hand and foot bone node positions, so as to determine the positions of the bone nodes, or bone key points, contained in the hand and foot regions of the third-person visual image.
The wearer's bone posture at any moment can be determined from these bone node positions, and the wearer's behavior and actions can be effectively recognized from continuously acquired bone postures.
The third-person visual image is an image in which the wearer is observed, i.e. the wearer appears in the third-person visual image.
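As one possible realization of the keypoint identification described above, the sketch below runs an off-the-shelf pose estimator (MediaPipe Pose, chosen here as an assumption; the application does not name a model) on a third-person frame and returns normalized keypoint coordinates.

```python
import cv2
import mediapipe as mp


def extract_bone_keypoints(frame_bgr):
    """Return a list of (x, y, visibility) pose keypoints, or None if no person found."""
    # static_image_mode=True treats each call as an independent image;
    # a video pipeline would keep one Pose instance with static_image_mode=False
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    return [(lm.x, lm.y, lm.visibility)  # coordinates normalized to [0, 1]
            for lm in results.pose_landmarks.landmark]
```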
A first-person visual image of the scene around the wearer can be acquired by the first camera, and a three-dimensional scene image around the wearer can be generated from the first-person visual images in combination with a simultaneous localization and mapping (SLAM) method. The first-person visual image is an image from the wearer's viewpoint, that is, an image of the wearer's first-person perspective.
The first camera used to acquire the first-person visual image can be arranged at the front edge of the helmet, pointed forward or forward and downward. The wearer's current direction information and current movement information can be obtained from the first-person visual images captured by this camera.
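The application does not prescribe a particular SLAM system. As a sketch of the visual-odometry building block at its core, the code below matches ORB features between two consecutive first-person frames and recovers the relative camera motion, assuming a known intrinsic matrix K and pre-undistorted (non-fisheye) grayscale frames.

```python
import cv2
import numpy as np


def relative_pose(img1_gray, img2_gray, K):
    """Estimate rotation R and translation direction t between two frames."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(img1_gray, None)
    kp2, des2 = orb.detectAndCompute(img2_gray, None)
    # Hamming-distance brute-force matching with cross-checking
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # essential matrix with RANSAC outlier rejection, then pose recovery
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```

A full SLAM system would additionally triangulate matched points into a map and close loops; this sketch covers only the frame-to-frame motion estimate.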
In S203, trajectory data of the wearer is determined from the displacement data.
The wearer's trajectory data may be determined in two stages: preliminary trajectory generation and trajectory correction.
In preliminary trajectory generation, the wearer's movement information, including the movement distance and movement direction, can be acquired through the inertial sensor arranged on the helmet, and preliminary trajectory data of the wearer can be generated from the determined movement distance and movement direction.
After the preliminary trajectory has been generated, and once simultaneous localization and mapping has been completed from the first-person visual images acquired by the first camera, the wearer's position in the three-dimensional scene image can be obtained accurately by matching the helmet's current first-person visual image against the mapped three-dimensional scene. This localizes the wearer more precisely and helps improve the accuracy of the wearer's trajectory information.
In a possible implementation, the trajectory data of the wearer can also be generated directly from the displacement distance and the displacement direction.
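A minimal dead-reckoning sketch of the preliminary stage: world-frame acceleration (with gravity already removed, an assumption) is integrated twice into a position track, and the track is periodically snapped to a more accurate position fix, standing in for the image-matching correction described above.

```python
import numpy as np


def dead_reckon(accel_world, dt, position_fixes=None):
    """Integrate acceleration into a trajectory, optionally corrected by fixes.

    accel_world: (N, 3) accelerations in the world frame, gravity removed.
    position_fixes: optional dict {sample_index: (3,) position}, e.g. from SLAM.
    """
    v = np.zeros(3)
    p = np.zeros(3)
    trajectory = np.zeros((len(accel_world), 3))
    for i, a in enumerate(accel_world):
        v += a * dt                     # integrate acceleration to velocity
        p += v * dt                     # integrate velocity to position
        if position_fixes and i in position_fixes:
            p = np.asarray(position_fixes[i], dtype=float)  # snap to the fix
            v[:] = 0.0                  # crude drift reset at each fix
        trajectory[i] = p
    return trajectory
```

Pure double integration drifts quickly with consumer-grade IMUs, which is exactly why the correction stage described above is needed.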
In S204, the wearer behavior is perceived according to the bone pose, the three-dimensional scene image, and the trajectory data.
From the determined bone posture of the wearer, behavior and action information of the wearer can be obtained, and the wearer's working habits can be objectively recorded. By comparing the recorded working habits with standard working habits, it can be determined whether the wearer's working habits are up to standard. If the wearer's working habits differ from the standard working habits by more than a preset difference threshold, the worker wearing the helmet can be given targeted skill education or training.
Alternatively, quantitative data on the workload of the worker wearing the helmet can be determined from the change information of the wearer's bone posture and the determined trajectory data, including the duration or frequency of the actions corresponding to the bone postures and trajectory data. Based on the recorded quantitative data, performance evaluation for workers can be computed more objectively and accurately, as sketched below.
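A toy illustration of such quantification, assuming an upstream classifier has already turned each frame's bone posture into an action label: it accumulates the total duration and occurrence count per action from a fixed-rate label stream.

```python
from itertools import groupby


def quantify_workload(action_labels, frame_dt):
    """Total duration (s) and occurrence count per action from per-frame labels."""
    stats = {}
    for action, run in groupby(action_labels):
        length = sum(1 for _ in run)            # frames in this contiguous run
        entry = stats.setdefault(action, {"total_s": 0.0, "count": 0})
        entry["total_s"] += length * frame_dt
        entry["count"] += 1
    return stats


# e.g. quantify_workload(["lift", "lift", "carry", "idle", "lift"], frame_dt=0.5)
```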
The progress of the construction task executed by the wearer can also be perceived in real time from the generated three-dimensional scene images, which makes it convenient to track the construction progress in real time.
In addition, the images recorded by the second camera, or the bone postures corresponding to those images, can be used for accident analysis: the cause of an accident can be analyzed from the recorded images or the determined bone postures, which facilitates the division and definition of responsibility for the accident.
In the embodiments of the present application, when the wearer's bone posture is determined from the acquired third-person visual images, the determination can be completed by a behavior analysis network model. For example, the behavior analysis network model can be trained on preset sample data of third-person visual images in different bone postures, and the trained model can then recognize the bone posture in newly acquired third-person visual images.
In order to improve the efficiency of bone posture recognition, the behavior analysis network model can be triggered only when an accident occurs or when a preset task is executed, thereby reducing the amount of system computation and improving the operating efficiency of the system.
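The application leaves the network architecture open. As one plausible shape for such a model, the sketch below classifies a short sequence of flattened bone keypoints with an LSTM; the keypoint count, class count and sequence handling are assumptions.

```python
import torch
import torch.nn as nn


class BehaviorAnalysisNet(nn.Module):
    """Classify an action from a sequence of per-frame bone keypoints."""

    def __init__(self, n_keypoints=33, n_classes=5, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_keypoints * 2,  # (x, y) per keypoint
                            hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, frames, n_keypoints * 2)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])     # logits from the final time step


# usage: logits = BehaviorAnalysisNet()(torch.randn(1, 30, 66))
```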
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Fig. 3 is a schematic diagram of a behavior sensing apparatus according to an embodiment of the present application, as shown in fig. 3, the apparatus includes:
a data acquisition unit 301 for acquiring image data acquired by an image sensing device provided on the helmet and displacement data acquired by an inertial sensor provided on the helmet;
an image data processing unit 302 for generating a bone pose of the wearer and a three-dimensional scene image around the wearer from the acquired image data;
a trajectory data generating unit 303, configured to determine trajectory data of the wearer according to the displacement data;
a behavior sensing unit 304, configured to sense the wearer behavior according to the bone pose, the three-dimensional scene image, and the trajectory data.
The behavior sensing apparatus shown in fig. 3 corresponds to the behavior sensing method shown in fig. 2.
Fig. 4 is a schematic diagram of a behavior awareness apparatus according to an embodiment of the present application. As shown in fig. 4, the behavior sensing apparatus 4 of this embodiment includes: a processor 40, a memory 41 and a computer program 42, such as a behavior aware program, stored in said memory 41 and executable on said processor 40. The processor 40, when executing the computer program 42, implements the steps in the various behavior awareness method embodiments described above. Alternatively, the processor 40 implements the functions of the modules/units in the above-described device embodiments when executing the computer program 42.
Illustratively, the computer program 42 may be partitioned into one or more modules/units that are stored in the memory 41 and executed by the processor 40 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 42 in the behavior awareness apparatus 4.
The behavior awareness device may include, but is not limited to, a processor 40, a memory 41. Those skilled in the art will appreciate that fig. 4 is merely an example of the behavior awareness apparatus 4, and does not constitute a limitation of the behavior awareness apparatus 4, and may include more or fewer components than those shown, or combine certain components, or different components, for example, the behavior awareness apparatus may also include an input output device, a network access device, a bus, etc.
The Processor 40 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the behavior awareness apparatus 4, such as a hard disk or a memory of the behavior awareness apparatus 4. The memory 41 may also be an external storage device of the behavior sensing device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the behavior sensing device 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the behavior sensing device 4. The memory 41 is used for storing the computer programs and other programs and data required by the behavior awareness apparatus. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments can be implemented by instructing the relevant hardware through a computer program, which can be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the above method embodiments can be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunication signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.
Claims (10)
1. A method of behavioral awareness, the method comprising:
acquiring image data acquired by image sensing equipment arranged on a safety helmet and displacement data acquired by an inertial sensor arranged on the safety helmet;
generating a bone posture of the wearer and a three-dimensional scene image around the wearer according to the acquired image data;
determining trajectory data of the wearer from the displacement data;
and sensing the behavior of the wearer according to the bone posture, the three-dimensional scene image and the trajectory data.
2. The method of claim 1, wherein the image sensing device comprises a first camera and a second camera, and acquiring the image data captured by the image sensing device disposed on the safety helmet comprises:
acquiring a first-person visual image of a wearer, which is acquired by a first camera arranged on a safety helmet;
acquiring a third-person visual image of the wearer, which is captured by a second camera arranged on the safety helmet.
3. The method of claim 2, wherein generating a skeletal pose of the wearer from the acquired image data comprises:
identifying bone key points of the wearer according to the third-person visual image of the wearer acquired by the second camera;
determining a skeletal pose of the wearer from the identified skeletal keypoint information of the wearer.
4. The method of claim 2, wherein generating a three-dimensional scene image around the wearer from the acquired image data comprises:
acquiring a three-dimensional scene image around the wearer by a simultaneous localization and mapping method according to the first-person visual image of the wearer acquired by the first camera.
5. The method of claim 4, wherein determining trajectory data for the wearer from the displacement data comprises:
generating trajectory data of the wearer according to the displacement data acquired by the inertial sensor;
correcting the trajectory data according to the position, in the three-dimensional scene image, of the first-person visual image acquired by the first camera.
6. The method of claim 1, wherein after perceiving the wearer behavior from the bone pose, three dimensional scene image, and the trajectory data, the method further comprises:
when abnormal behavior of the wearer is detected, inputting the perceived behavior of the wearer into a preset behavior analysis network model to obtain an analysis result of the perceived behavior.
7. A behavior awareness apparatus, comprising:
the data acquisition unit is used for acquiring image data acquired by image sensing equipment arranged on the safety helmet and displacement data acquired by an inertial sensor arranged on the safety helmet;
the image data processing unit is used for generating a bone posture of the wearer and a three-dimensional scene image around the wearer according to the acquired image data;
a trajectory data generation unit for determining trajectory data of the wearer according to the displacement data;
and the behavior sensing unit is used for sensing the behavior of the wearer according to the bone posture, the three-dimensional scene image and the trajectory data.
8. A behavior-perception safety helmet, characterized in that the helmet comprises a first camera for collecting a first-person visual image of a wearer, a second camera for collecting a third-person visual image of the wearer, an inertial sensor for collecting displacement data of the wearer, and a communication module for sending the data collected by the first camera, the second camera and the inertial sensor to a server, so that the server can perform behavior perception analysis on the collected data.
9. A behavior awareness apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 6.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110378299.4A | 2021-04-08 | 2021-04-08 | Behavior sensing method, device and equipment
Publications (1)
Publication Number | Publication Date |
---|---
CN113114994A (en) | 2021-07-13
Family
ID=76714904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---
CN202110378299.4A | Behavior sensing method, device and equipment (Pending) | 2021-04-08 | 2021-04-08
Country Status (1)
Country | Link |
---|---
CN | CN113114994A (en)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106559664A (en) * | 2015-09-30 | 2017-04-05 | 成都理想境界科技有限公司 | The filming apparatus and equipment of three-dimensional panoramic image |
WO2018077176A1 (en) * | 2016-10-26 | 2018-05-03 | 北京小鸟看看科技有限公司 | Wearable device and method for determining user displacement in wearable device |
US20180249144A1 (en) * | 2017-02-28 | 2018-08-30 | Mitsubishi Electric Research Laboratories, Inc. | System and Method for Virtually-Augmented Visual Simultaneous Localization and Mapping |
CN111175972A (en) * | 2019-12-31 | 2020-05-19 | Oppo广东移动通信有限公司 | Head-mounted display, scene display method thereof and storage medium |
CN111680562A (en) * | 2020-05-09 | 2020-09-18 | 北京中广上洋科技股份有限公司 | Human body posture identification method and device based on skeleton key points, storage medium and terminal |
CN112509179A (en) * | 2020-12-18 | 2021-03-16 | 大连理工大学 | Automobile black box with omnidirectional collision perception and three-dimensional scene reappearance inside and outside automobile |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115131874A (en) * | 2022-06-29 | 2022-09-30 | 深圳市神州云海智能科技有限公司 | User behavior recognition prediction method and system and intelligent safety helmet |
CN115131874B (en) * | 2022-06-29 | 2023-10-17 | 深圳市神州云海智能科技有限公司 | User behavior recognition prediction method, system and intelligent safety helmet |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210713 |