CN114399535A - Multi-person behavior recognition device and method based on artificial intelligence algorithm - Google Patents


Info

Publication number: CN114399535A
Application number: CN202210050131.5A
Authority: CN (China)
Legal status: Pending
Original language: Chinese (zh)
Inventors: 海拉提·恰凯 (Hailati Qiakai), 杨柳 (Yang Liu), 黎红 (Li Hong), 王涛 (Wang Tao), 郭江涛 (Guo Jiangtao), 李志刚 (Li Zhigang), 孙博文 (Sun Bowen), 柳瑞 (Liu Rui), 魏乐 (Wei Le)
Current assignee: State Grid Xinjiang Electric Power Corporation Information & Telecommunication Co., Ltd.

Classifications

    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/215: Motion-based segmentation
    • G06T 2207/10016: Video; image sequence


Abstract

The invention relates to the technical field of behavior recognition, and particularly discloses a multi-person behavior recognition device and method based on an artificial intelligence algorithm. The method comprises: acquiring a region image containing a heat source layer, and determining a motion region and a reference region in the region image according to the heat source layer; calculating the area range of the motion region, and determining an independent region and an aggregation region according to the area range; segmenting the aggregation region to obtain sub-regions; marking feature points according to the independent regions and the sub-regions; and extracting a motion track based on the marked feature points, and determining the risk value of the feature points according to the motion track. According to the technical scheme, region identification is performed on the region image, the feature points of each region are determined, the motion track is determined according to the feature points, and the behavior risk value is determined according to the motion track, thereby expanding the range of the existing identification technology, particularly to images containing aggregated crowds.

Description

Multi-person behavior recognition device and method based on artificial intelligence algorithm
Technical Field
The invention relates to the technical field of behavior recognition, in particular to a multi-person behavior recognition device and method based on an artificial intelligence algorithm.
Background
With the development of computer technology, computer-based determination of human behavior has been widely applied, for example in scenes such as intelligent video monitoring, patient monitoring systems, and smart homes, so research into how a computer can accurately determine human behavior has become increasingly popular.
Existing multi-person behavior recognition methods mainly recognize individual human body regions; when aggregation occurs within a region, these methods are prone to error. How to solve this problem is the technical problem addressed by the technical scheme of the invention.
Disclosure of Invention
The invention aims to provide a multi-person behavior recognition device and method based on an artificial intelligence algorithm, so as to solve the problems raised in the background art.
In order to achieve the purpose, the invention provides the following technical scheme:
a multi-person behavior recognition apparatus based on an artificial intelligence algorithm, the apparatus comprising:
the area determining module is used for acquiring an area image containing a heat source layer, and determining a motion area and a reference area in the area image according to the heat source layer; wherein the reference area is a mapping of a reference heat source of the area in an area image; the region image takes a time item as an index;
the range detection module is used for calculating the area range of the motion area and determining an independent area and an aggregation area according to the area range;
the region segmentation module is used for identifying the content of the aggregation region and segmenting the aggregation region according to the content identification result to obtain a sub region;
the characteristic marking module is used for determining characteristic points according to the independent area and the sub-area, acquiring the position information of the characteristic points, determining distribution information according to the position information, and marking the characteristic points according to the distribution information;
and the track determining module is used for extracting the area images in the preset time period based on the marked feature points, determining the motion tracks of the feature points according to the area images at different moments, and determining the risk values of the feature points according to the motion tracks.
As a further scheme of the invention: the range detection module includes:
the total number calculating unit is used for determining a contour curve in the heat source layer according to a preset heat value and calculating the total number of pixel points in the contour curve;
the first marking unit is used for comparing the total number of the pixel points with a preset total number threshold value, and marking the motion area as an independent area when the total number of the pixel points is within a preset total number range;
and the second marking unit is used for marking the motion area as an aggregation area when the total number of the pixel points exceeds a preset total number range.
As a further scheme of the invention: the region segmentation module includes:
the contour recognition unit is used for carrying out contour recognition on the aggregation region according to a preset tolerance and determining a target region according to a contour recognition result;
the assignment unit is used for determining the central point of the target area, counting the color values in the target area, calculating the average value of the color values, and assigning the central point according to the average value of the color values;
the central dot matrix generating unit is used for counting the assigned central points and generating a central dot matrix which is in a mapping relation with the aggregation area;
and the processing execution unit is used for determining an ear region in the central dot matrix according to a preset feature framework, and segmenting the aggregation region according to the ear region to obtain sub-regions.
As a further scheme of the invention: the processing execution unit includes:
the content identification subunit is used for carrying out content identification on the ear area and determining an ear outline;
the position determining subunit is used for determining an image acquisition end position according to the position of the reference area and determining orientation information according to the ear contour and the acquisition end position;
and the segmentation subunit is used for segmenting the aggregation region according to the orientation information and the contour recognition result.
As a further scheme of the invention: the feature labeling module includes:
the width generating unit is used for sequentially reading the maximum pixel points of the independent area and the sub-area in the preset direction as the width;
the theoretical point determining unit is used for acquiring the total number of pixel points of the independent area and the sub-area and calculating theoretical points according to the total number of the pixel points and the width;
the detection unit is used for detecting a pixel point according to a preset incremental detection radius by taking the theoretical point as a center, and when the pixel point is detected, taking the pixel point as a characteristic point;
the array generating unit is used for generating a coordinate system according to the reference area, acquiring the position information of each feature point based on the coordinate system, and generating a position dot matrix; calculating the position difference of adjacent feature points in the direction of the coordinate system, and generating a difference array which is in a mapping relation with the position dot matrix; wherein the difference array is a two-dimensional array;
and the marking unit is used for inputting the difference array into a trained analysis model to obtain the discrete value of each feature point, and marking the feature points in the position dot matrix according to the discrete value.
As a further scheme of the invention: the trajectory determination module comprises:
the position extraction unit is used for extracting the area images in the preset time period based on the marked feature points and extracting the positions of the feature points in the area images at different moments;
the curve inserting unit is used for inserting the positions of the characteristic points in the images of the areas at different moments into a preset background image and generating a motion curve in the background image;
and the inflection point identification unit is used for identifying the inflection point of the motion curve and determining the risk value of the characteristic point according to the inflection point identification result.
As a further scheme of the invention: the inflection point identifying unit includes:
the sampling point determining subunit is used for sequentially determining sampling points on the motion curve according to a preset detection step length;
the curvature calculating subunit is used for calculating the curvature of the curve in a preset detection radius by taking the sampling point as a center;
the comparison subunit is used for comparing the curvature of the curve with a preset curvature threshold, and when the curvature of the curve reaches the preset curvature threshold, the sampling point is used as an inflection point, and the inflection point is assigned according to the curvature of the curve;
and the calculating subunit is used for determining the risk value of the characteristic point according to the assigned inflection point.
The technical scheme of the invention also provides a multi-person behavior identification method based on an artificial intelligence algorithm, which comprises the following steps:
acquiring a region image containing a heat source layer, and determining a motion region and a reference region in the region image according to the heat source layer; wherein the reference area is a mapping of a reference heat source of the area in an area image; the region image takes a time item as an index;
calculating the area range of the motion area, and determining an independent area and an aggregation area according to the area range;
performing content identification on the aggregation region, and segmenting the aggregation region according to a content identification result to obtain sub-regions;
determining feature points according to the independent area and the sub-area, acquiring position information of the feature points, determining distribution information according to the position information, and marking the feature points according to the distribution information;
extracting area images in a preset time period based on the marked feature points, determining the motion trail of the feature points according to the area images at different moments, and determining the risk value of the feature points according to the motion trail.
As a further scheme of the invention: the step of calculating the area range of the motion area and determining the independent area and the aggregation area according to the area range comprises the following steps:
determining a contour curve in the heat source layer according to a preset heat value, and calculating the total number of pixel points in the contour curve;
comparing the total number of the pixel points with a preset total number threshold value, and marking the motion area as an independent area when the total number of the pixel points is within a preset total number range;
and when the total number of the pixel points exceeds a preset total number range, marking the motion area as an aggregation area.
As a further scheme of the invention: the step of performing content identification on the aggregation region and segmenting the aggregation region according to the content identification result to obtain sub-regions comprises:
carrying out contour recognition on the aggregation region according to a preset tolerance, and determining a target region according to a contour recognition result;
determining the central point of the target area, counting the color values in the target area, calculating a color value mean value, and assigning a value to the central point according to the color value mean value;
counting the assigned central points, and generating a central dot matrix which is in a mapping relation with the aggregation area;
and determining an ear region in the central dot matrix according to a preset feature framework, and segmenting the aggregation region according to the ear region to obtain sub-regions.
Compared with the prior art, the invention has the beneficial effects that: region identification is performed on the region image, the feature points of each region are then determined, the motion track is determined according to the feature points, and the behavior risk value is determined according to the motion track, thereby expanding the range of the existing identification technology, particularly to images containing aggregated crowds.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention.
Fig. 1 is a block diagram of a multi-person behavior recognition device based on an artificial intelligence algorithm.
Fig. 2 is a block diagram of a range detection module in the multi-person behavior recognition device based on an artificial intelligence algorithm.
Fig. 3 is a block diagram of a structure of a region segmentation module in the multi-person behavior recognition device based on an artificial intelligence algorithm.
Fig. 4 is a block diagram of a feature tag module in a multi-person behavior recognition device based on an artificial intelligence algorithm.
Fig. 5 is a block diagram of a track determination module in the multi-person behavior recognition device based on an artificial intelligence algorithm.
FIG. 6 is a flow chart of a multi-person behavior recognition method based on an artificial intelligence algorithm.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects to be solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Example 1
Fig. 1 is a block diagram illustrating a structure of a multi-person behavior recognition device based on an artificial intelligence algorithm, in an embodiment of the present invention, the multi-person behavior recognition device based on the artificial intelligence algorithm includes:
the area determining module 11 is configured to acquire an area image containing a heat source layer, and determine a motion area and a reference area in the area image according to the heat source layer; wherein the reference area is a mapping of a reference heat source of the area in an area image; the region image takes a time item as an index;
a range detection module 12, configured to calculate a region range of the motion region, and determine an independent region and an aggregation region according to the region range;
the region segmentation module 13 is configured to perform content identification on the aggregation region, and segment the aggregation region according to a content identification result to obtain a sub-region;
the feature marking module 14 is configured to determine feature points according to the independent areas and the sub-areas, acquire position information of the feature points, determine distribution information according to the position information, and mark the feature points according to the distribution information;
and the track determining module 15 is configured to extract a region image of a preset time period based on the marked feature points, determine a motion track of the feature points according to the region images at different times, and determine a risk value of the feature points according to the motion track.
The purpose of the region determining module 11 is to obtain a region image containing a heat source layer. Such an image may be obtained by two separate cameras followed by image fusion, or by a single camera with two modes, one acquiring temperature information and the other acquiring the region image. It should be noted that, since behavior is a time-related quantity for the purpose of behavior recognition in the technical solution of the invention, determining one behavior often requires a plurality of region images arranged in time series; a time item is therefore provided in the region image.
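The time-indexed structure described above can be sketched as follows. This is a minimal illustration under stated assumptions: the function name and the dictionary layout are hypothetical, and the fusion step itself is assumed to have already produced matching visual frames and heat layers.

```python
import numpy as np

# Hypothetical sketch: store each fused region image together with its heat
# source layer, indexed by the capture time that serves as the "time item".
def build_region_sequence(frames, heat_layers, timestamps):
    """Index fused (visual frame, heat layer) pairs by capture time."""
    return {t: {"frame": f, "heat": h}
            for t, f, h in zip(timestamps, frames, heat_layers)}

frames = [np.zeros((4, 4), dtype=np.uint8) for _ in range(3)]
heats = [np.full((4, 4), 25.0 + i) for i in range(3)]   # a slowly warming scene
sequence = build_region_sequence(frames, heats, [0.0, 0.5, 1.0])
```

Keying by timestamp lets later modules pull "the region images of the preceding or following period" by simple range queries over the keys.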
The range detection module 12 and the region segmentation module 13 are used for identifying behavior regions in the region image. These regions mainly contain human body contours: some are individual, complete contours, and some are aggregated contours. The two kinds are identified differently: an individual complete contour can be directly compared and identified, whereas an aggregated contour must first be segmented and then identified.
The feature marking module 14 is a problem-contour identification module for determining whether each contour is a risk contour; after a risk contour is extracted, its motion track is determined according to the time item of the region image, and the behavior is thereby identified.
Fig. 2 is a block diagram of a range detection module in the multi-person behavior recognition device based on an artificial intelligence algorithm, where the range detection module 12 includes:
a total number calculating unit 121, configured to determine a profile curve in the heat source layer according to a preset thermal value, and calculate a total number of pixel points in the profile curve;
a first marking unit 122, configured to compare the total number of the pixels with a preset total number threshold, and mark the motion area as an independent area when the total number of the pixels is within a preset total number range;
a second marking unit 123, configured to mark the motion area as an aggregation area when the total number of the pixel points exceeds a preset total number range.
The above further defines the range detection module 12. The number of pixels inside the determined contour curve is counted: if the number is too large, the region is an aggregation region; if it is too small, the region may not be a human body image at all. The two sides of the comparison are therefore the total number of pixel points and the preset total-number range.
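A minimal sketch of this comparison follows. The preset heat value and the total-number range are not specified by the patent, so the thresholds below are illustrative assumptions only:

```python
import numpy as np

MIN_PIXELS, MAX_PIXELS = 50, 400   # assumed preset total-number range

def classify_motion_region(heat_layer, heat_threshold=30.0):
    """Count pixels inside the contour (heat above the preset value)
    and label the motion region by the preset total-number range."""
    total = int((heat_layer >= heat_threshold).sum())
    if total < MIN_PIXELS:
        return "not_human"      # too few pixels to be a body contour
    if total <= MAX_PIXELS:
        return "independent"    # one complete human contour
    return "aggregation"        # several merged contours

crowd = np.full((30, 30), 36.0)    # 900 hot pixels, well above the range
label = classify_motion_region(crowd)
```

In a full implementation the count would be taken per connected contour rather than over the whole layer; the three-way outcome is the point being illustrated.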
Fig. 3 is a block diagram of a structure of a region segmentation module in the multi-person behavior recognition device based on an artificial intelligence algorithm, where the region segmentation module 13 includes:
the contour recognition unit 131 is configured to perform contour recognition on the aggregation region according to a preset tolerance, and determine a target region according to a contour recognition result;
the assignment unit 132 is configured to determine a center point of the target region, count color values in the target region, calculate a color value mean value, and assign a value to the center point according to the color value mean value;
a central dot matrix generating unit 133, configured to count the assigned central points, and generate a central dot matrix in a mapping relationship with the aggregation area;
and the processing execution unit 134 is configured to determine an ear region in the central dot matrix according to a preset feature framework, and segment the aggregation region according to the ear region to obtain sub-regions.
The region segmentation module 13 segments the aggregation region into identifiable sub-regions. Segmentation begins with contour recognition of the aggregation region; it should be noted that the tolerance is generally set to a large value, for example fifty, so that only the large contours within the aggregation region are recognized. An aggregation region arises because a group of people are gathered together: the clothed body regions are clearly delimited, and hair is the easiest feature to identify, because in most cases the color values of hair differ markedly from those of clothes and other regions. After these contents are recognized, the aggregation region is converted into a central dot matrix. It is worth mentioning that the preset feature framework may treat a contour adjacent to the hair region as an ear region.
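The assignment step described above can be sketched as follows, assuming a target region is already available as a boolean mask (the function name and the centroid choice for the "central point" are illustrative assumptions):

```python
import numpy as np

# Hypothetical sketch of the assignment unit: for one target region found by
# high-tolerance contouring (e.g. a hair blob), take the centroid of the
# region's pixels as its centre point and assign it the region's mean colour.
def assign_center_point(image, region_mask):
    ys, xs = np.nonzero(region_mask)
    center = (int(round(ys.mean())), int(round(xs.mean())))
    mean_color = image[region_mask].mean(axis=0)   # per-channel colour mean
    return center, mean_color

img = np.zeros((10, 10, 3), dtype=float)
mask = np.zeros((10, 10), dtype=bool)
mask[2:6, 2:6] = True                              # a 4x4 dark "hair" blob
img[mask] = (20.0, 10.0, 5.0)
center, color = assign_center_point(img, mask)
```

Repeating this over every target region yields the assigned central points that form the central dot matrix mapped to the aggregation region.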
Further, the processing execution unit includes:
the content identification subunit is used for carrying out content identification on the ear area and determining an ear outline;
the position determining subunit is used for determining an image acquisition end position according to the position of the reference area and determining orientation information according to the ear contour and the acquisition end position;
and the segmentation subunit is used for segmenting the aggregation region according to the orientation information and the contour recognition result.
On the premise that the ear region has been determined, the position information of the image acquisition end is obtained and the orientation information is then determined. Specifically, the ear contour alone can only indicate whether a person faces left or right relative to the image acquisition position; the absolute direction (east, south, west or north) is determined by combining this with the position information of the acquisition end.
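One way to combine the two cues is sketched below. This is purely illustrative: the 90-degree offsets, the left/right convention, and the function name are all assumptions, since the patent does not state how the relative and absolute directions are composed.

```python
# Hypothetical sketch: the ear contour only tells whether the subject faces
# left or right relative to the camera; combining that with the camera's own
# compass bearing gives an absolute direction.
def absolute_orientation(ear_side, camera_bearing_deg):
    """ear_side: 'left' or 'right' as seen in the image;
    camera_bearing_deg: bearing the acquisition end faces (0 = north)."""
    offset = -90 if ear_side == "left" else 90
    return (camera_bearing_deg + offset) % 360

facing = absolute_orientation("left", 0)   # left ear seen by a north-facing camera
```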
Fig. 4 is a block diagram illustrating a structure of a feature tag module in a multi-person behavior recognition device based on an artificial intelligence algorithm, where the feature tag module 14 includes:
a width generating unit 141, configured to sequentially read the maximum number of pixel points of the independent area and the sub-area in the preset direction as a width;
a theoretical point determining unit 142, configured to obtain the total number of pixel points in the independent area and the sub-area, and calculate a theoretical point according to the total number of pixel points and the width;
the detection unit 143 is configured to detect a pixel point according to a preset incremental detection radius with the theoretical point as a center, and when a pixel point is detected, use the pixel point as a feature point;
an array generating unit 144, configured to generate a coordinate system according to the reference region, obtain location information of each feature point based on the coordinate system, and generate a position dot matrix; calculate the position difference of adjacent feature points in the direction of the coordinate system, and generate a difference array in a mapping relationship with the position dot matrix; wherein the difference array is a two-dimensional array;
and the marking unit 145 is configured to input the difference array into the trained analysis model to obtain a discrete value of each feature point, and mark the feature points in the position dot matrix according to the discrete value.
The feature marking module 14 marks important information in the independent regions and sub-regions, which, per the above, are human body regions. A theoretical point of each region is determined by the simple mathematical principle of the center of gravity; the point closest to the theoretical point is then queried and taken as the feature point, and the feature points are marked according to their discrete values. It is worth mentioning that if one region image contains both independent regions and aggregation regions, the probability that the feature points of the independent regions are marked is extremely high.
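The center-of-gravity step, the nearest-pixel query, and the difference array can be sketched together as follows. The helper names are hypothetical; the nearest-pixel search stands in for the patent's incrementally growing detection radius, which finds the same pixel:

```python
import numpy as np

# Sketch under simplifying assumptions: the "theoretical point" is the
# region's centre of gravity, the feature point is the nearest foreground
# pixel to it, and adjacent feature points yield a two-dimensional
# difference array.
def theoretical_point(mask):
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()              # centre of gravity

def nearest_feature_point(mask, point):
    ys, xs = np.nonzero(mask)
    d2 = (ys - point[0]) ** 2 + (xs - point[1]) ** 2
    i = int(d2.argmin())                     # first pixel an expanding radius hits
    return int(ys[i]), int(xs[i])

def difference_array(points):
    return np.diff(np.asarray(points, float), axis=0)   # gaps along each axis

mask = np.zeros((9, 9), dtype=bool)
mask[0, 0] = mask[8, 8] = True               # hollow region: centroid pixel is empty
fp = nearest_feature_point(mask, theoretical_point(mask))
diffs = difference_array([(0, 0), (2, 3), (5, 4)])
```

The hollow-region example shows why the radius search is needed at all: the center of gravity of a region need not lie on a foreground pixel.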
Fig. 5 is a block diagram of a track determination module in the multi-person behavior recognition device based on an artificial intelligence algorithm, where the track determination module 15 includes:
a position extracting unit 151, configured to extract, based on the marked feature points, area images in a preset time period, and extract positions of the feature points in the area images at different times;
a curve inserting unit 152, configured to insert positions of feature points in the region images at different times into a preset background image, and generate a motion curve in the background image;
and an inflection point identification unit 153, configured to perform inflection point identification on the motion curve, and determine a risk value of the feature point according to an inflection point identification result.
Specifically, the inflection point identifying unit includes:
the sampling point determining subunit is used for sequentially determining sampling points on the motion curve according to a preset detection step length;
the curvature calculating subunit is used for calculating the curvature of the curve in a preset detection radius by taking the sampling point as a center;
the comparison subunit is used for comparing the curvature of the curve with a preset curvature threshold, and when the curvature of the curve reaches the preset curvature threshold, the sampling point is used as an inflection point, and the inflection point is assigned according to the curvature of the curve;
and the calculating subunit is used for determining the risk value of the characteristic point according to the assigned inflection point.
The above specifically defines the trajectory determination module 15, whose purposeate is to determine the relationship between a feature point and time. The feature points in each region image are in fact different; taking one region image as an example, if a feature point is detected in it, the region images of the preceding or following period can be extracted according to that feature point, the trajectory of the feature point determined, and a trajectory curve generated. Inflection-point identification is then performed on the trajectory curve, from which the risk value of the feature point can be determined. The inflection-point identification process yields the points whose curvature reaches a certain degree, together with their curvature values. To calculate a risk value, a total curvature value may be computed, so that the relationship between the number of inflection points and the curvature values can be fitted.
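The sampling, curvature, and total-curvature steps can be sketched as follows. The detection step, the curvature threshold, and the use of the three-point circumscribed-circle formula are illustrative assumptions; the patent only requires that curvature be computed within a preset detection radius around each sampling point:

```python
import numpy as np

# Sketch of the inflection-point step: sample the motion curve at a preset
# step, estimate curvature at each sample from three consecutive points,
# keep samples whose curvature reaches the preset threshold, and sum those
# curvatures as a simple total risk value.
def curvature(p0, p1, p2):
    a = np.linalg.norm(p1 - p0)
    b = np.linalg.norm(p2 - p1)
    c = np.linalg.norm(p2 - p0)
    # twice the triangle area via the 2-D cross product
    area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
                - (p1[1] - p0[1]) * (p2[0] - p0[0]))
    return 0.0 if a * b * c == 0 else 2.0 * area2 / (a * b * c)

def risk_value(curve, step=1, threshold=0.5):
    pts = np.asarray(curve, dtype=float)[::step]
    ks = [curvature(pts[i - 1], pts[i], pts[i + 1])
          for i in range(1, len(pts) - 1)]
    return sum(k for k in ks if k >= threshold)   # total curvature at inflections

straight = [(float(x), 0.0) for x in range(6)]    # no turning points
sharp = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]      # one right-angle turn
```

A straight trajectory contributes no inflection points and hence zero risk, while each sharp turn adds its curvature to the total, matching the fitted number-versus-curvature relationship described above.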
It should be noted that if the above identification process were performed on every region image, motion trajectories could be calculated repeatedly. In practice, region images can be extracted at certain time intervals and feature-point identification performed on them; region images for a certain period are then extracted according to the feature-point identification result, the track is generated, and the above identification steps are repeated with a new region image as the base. This is a faster detection method.
Example 2
Fig. 6 is a flow chart of a multi-person behavior recognition method based on an artificial intelligence algorithm, in an embodiment of the present invention, the multi-person behavior recognition method based on the artificial intelligence algorithm includes:
step S100: acquiring a region image containing a heat source layer, and determining a motion region and a reference region in the region image according to the heat source layer; wherein the reference area is a mapping of a reference heat source of the area in an area image; the region image takes a time item as an index;
step S200: calculating the area range of the motion area, and determining an independent area and an aggregation area according to the area range;
step S300: performing content identification on the aggregation region, and segmenting the aggregation region according to a content identification result to obtain sub-regions;
step S400: determining feature points according to the independent area and the sub-area, acquiring position information of the feature points, determining distribution information according to the position information, and marking the feature points according to the distribution information;
step S500: extracting area images in a preset time period based on the marked feature points, determining the motion trail of the feature points according to the area images at different moments, and determining the risk value of the feature points according to the motion trail.
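Steps S100 to S500 can be strung together as a high-level pipeline; the sketch below is illustrative only, with assumed thresholds, and the per-time label stands in for the downstream segmentation, marking, and trajectory stages:

```python
# Illustrative end-to-end sketch of steps S100-S500; each branch stands in
# for the corresponding module, and all thresholds are assumed values.
def recognize(heat_by_time, heat_threshold=30.0, pixel_range=(50, 400)):
    labels = {}
    for t, heat in heat_by_time.items():                 # S100: time-indexed images
        total = sum(v >= heat_threshold for row in heat for v in row)
        lo, hi = pixel_range                             # S200: area-range test
        if total < lo:
            continue                                     # no human-sized region
        labels[t] = "independent" if total <= hi else "aggregation"
        # S300-S500 (segmentation, feature marking, trajectory and risk)
        # would refine each labelled region; the label stands in for them here.
    return labels

demo = {0.0: [[36.0] * 10] * 10, 1.0: [[20.0] * 10] * 10}
result = recognize(demo)
```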
Further, the step of calculating the area range of the motion area and determining the independent area and the aggregation area according to the area range includes:
determining a contour curve in the heat source layer according to a preset heat value, and calculating the total number of pixel points in the contour curve;
comparing the total number of the pixel points with a preset total number threshold value, and marking the motion area as an independent area when the total number of the pixel points is within a preset total number range;
and when the total number of the pixel points exceeds a preset total number range, marking the motion area as an aggregation area.
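The classification just described — threshold the heat layer at the preset heat value, trace each connected component, count its pixels, and compare the count against a preset range — can be sketched as follows; both threshold values are illustrative assumptions, since the patent leaves the presets unspecified ("collective" here corresponds to the text's aggregation area):

```python
from collections import deque

def classify_motion_regions(heat_layer, heat_threshold=40.0, independent_max=400):
    """Trace each connected region hotter than the preset heat value and
    label it by its pixel count (both thresholds are illustrative)."""
    rows, cols = len(heat_layer), len(heat_layer[0])
    hot = [[heat_layer[r][c] >= heat_threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    regions = []
    for r in range(rows):
        for c in range(cols):
            if not hot[r][c] or seen[r][c]:
                continue
            # flood-fill one connected component (4-neighbourhood)
            queue, pixels = deque([(r, c)]), []
            seen[r][c] = True
            while queue:
                y, x = queue.popleft()
                pixels.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols and hot[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        queue.append((ny, nx))
            label = "independent" if len(pixels) <= independent_max else "collective"
            regions.append((label, len(pixels), pixels))
    return regions
```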
Specifically, performing content recognition on the collective region and segmenting it according to the recognition result to obtain sub-regions includes:
performing contour recognition on the collective region according to a preset tolerance, and determining target regions from the contour recognition result;
determining the center point of each target region, counting the color values within the target region, calculating their mean, and assigning that mean to the center point;
collecting the assigned center points to generate a center lattice that maps onto the collective region;
and determining ear regions in the center lattice according to a preset feature framework, and segmenting the collective region along the ear regions to obtain the sub-regions.
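A minimal sketch of the centre-lattice construction described above (greyscale values stand in for the colour values, and the region centroid stands in for the centre point; both are assumptions where the text does not pin down a definition):

```python
def build_center_lattice(regions, image):
    """For each target region (a list of (row, col) pixels), take the
    centroid as its centre point and assign it the mean value of the
    region's pixels; the resulting point set is the centre lattice."""
    lattice = []
    for pixels in regions:
        cy = round(sum(p[0] for p in pixels) / len(pixels))
        cx = round(sum(p[1] for p in pixels) / len(pixels))
        mean_val = sum(image[y][x] for y, x in pixels) / len(pixels)
        lattice.append(((cy, cx), mean_val))
    return lattice
```

The ear-region step would then search this lattice for point patterns matching a preset feature framework, which the text does not specify further.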
The functions of the multi-person behavior recognition method based on the artificial intelligence algorithm are carried out by computer equipment comprising one or more processors and one or more memories. At least one program code is stored in the memories, and it is loaded and executed by the processors to realize the functions of the method.
The processor fetches instructions from the memory and decodes them one by one, then performs the corresponding operations and generates a sequence of control commands, so that all parts of the computer act automatically, continuously and in coordination as an organic whole, realizing program input, data input, computation and result output; the arithmetic and logic operations arising in this process are performed by the arithmetic unit. The memory includes a Read-Only Memory (ROM) storing a computer program, and a protection device is arranged outside the memory.
Illustratively, a computer program can be partitioned into one or more modules, which are stored in the memory and executed by the processor to implement the present invention. Each module may be a series of computer program instruction segments capable of performing a particular function, the segments describing the execution of the computer program in the terminal device.
Those skilled in the art will appreciate that the above description of the computer equipment is merely exemplary and does not limit the terminal device, which may include more or fewer components than described, combine certain components, or use different components, for example input/output devices, network access devices, buses, and the like.
The processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor; it is the control center of the terminal device and connects the parts of the entire user terminal through various interfaces and lines.
The memory may be used to store computer programs and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and calling data stored in the memory. The memory mainly comprises a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as an information acquisition template display function, a product information publishing function, and the like); the data storage area may store data created through use of the system (such as product information acquisition templates corresponding to different product types, product information to be published by different product providers, and the like). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
If the integrated modules/units of the terminal device are implemented in the form of software functional units and sold or used as separate products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the modules/units in the system of the above embodiment may be implemented by a computer program, which may be stored in a computer-readable storage medium and executed by a processor to realize the functions of the embodiments of the system. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form. The computer-readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not only include those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises it.
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; all equivalent structural or process modifications made using the contents of the present specification and the accompanying drawings, whether applied directly or indirectly in other related technical fields, are likewise included within the scope of the present invention.

Claims (10)

1. A multi-person behavior recognition apparatus based on artificial intelligence algorithm, the apparatus comprising:
the region determining module is used for acquiring a region image containing a heat source layer, and determining a motion region and a reference region in the region image according to the heat source layer; wherein the reference region is the mapping, in the region image, of a reference heat source within the monitored area, and the region images are indexed by time;
the range detection module is used for calculating the area extent of the motion region, and classifying it as an independent region or a collective region according to that extent;
the region segmentation module is used for performing content recognition on the collective region, and segmenting the collective region according to the content recognition result to obtain sub-regions;
the feature marking module is used for determining feature points from the independent regions and sub-regions, acquiring position information of the feature points, determining distribution information from the position information, and marking the feature points according to the distribution information;
and the trajectory determining module is used for extracting the region images within a preset time period based on the marked feature points, determining the motion trajectory of each feature point from the region images at different moments, and determining a risk value for each feature point from its motion trajectory.
2. The artificial intelligence algorithm-based multi-person behavior recognition device according to claim 1, wherein the range detection module comprises:
the total number calculating unit is used for determining a contour curve in the heat source layer according to a preset heat value, and counting the total number of pixel points enclosed by the contour curve;
the first marking unit is used for comparing the total number of pixel points with a preset threshold, and marking the motion region as an independent region when the total is within the preset range;
and the second marking unit is used for marking the motion region as a collective region when the total exceeds the preset range.
3. The artificial intelligence algorithm-based multi-person behavior recognition device of claim 1, wherein the region segmentation module comprises:
the contour recognition unit is used for performing contour recognition on the collective region according to a preset tolerance, and determining target regions from the contour recognition result;
the assignment unit is used for determining the center point of each target region, counting the color values within the target region, calculating their mean, and assigning that mean to the center point;
the center lattice generating unit is used for collecting the assigned center points and generating a center lattice that maps onto the collective region;
and the processing execution unit is used for determining ear regions in the center lattice according to a preset feature framework, and segmenting the collective region along the ear regions to obtain the sub-regions.
4. The artificial intelligence algorithm-based multi-person behavior recognition apparatus according to claim 3, wherein the processing execution unit comprises:
the content recognition subunit is used for performing content recognition on the ear region and determining an ear contour;
the position determining subunit is used for determining the position of the image acquisition end according to the position of the reference region, and determining orientation information according to the ear contour and the acquisition end position;
and the segmentation subunit is used for segmenting the collective region according to the orientation information and the contour recognition result.
5. The artificial intelligence algorithm-based multi-person behavior recognition apparatus according to claim 1, wherein the feature marking module comprises:
the width generating unit is used for sequentially reading the maximum number of pixel points of each independent region and sub-region in a preset direction as its width;
the theoretical point determining unit is used for acquiring the total number of pixel points of each independent region and sub-region, and calculating theoretical points from that total and the width;
the detection unit is used for detecting pixel points around each theoretical point with an incrementally growing preset detection radius, and taking a pixel point as a feature point when one is detected;
the array generating unit is used for generating a coordinate system from the reference region, acquiring the position information of each feature point in that coordinate system, and generating a position lattice; and for calculating the position differences of adjacent feature points along the coordinate axes, generating a difference array that maps onto the position lattice; wherein the difference array is a two-dimensional array;
and the marking unit is used for inputting the difference array into a trained analysis model to obtain a discrete value for each feature point, and marking the feature points in the position lattice according to their discrete values.
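Two pieces of claim 5 lend themselves to a short sketch: the incremental-radius search around a theoretical point, and the two-dimensional difference array of adjacent feature points. The Chebyshev-ring search order and the axis-sorted adjacency are assumptions where the claim is silent:

```python
def nearest_feature_point(mask, theory_pt, max_radius=10):
    """Search outward from the theoretical point with an incrementally
    growing detection radius until a lit pixel is found; returns None
    if no pixel lies within max_radius (an assumed cutoff)."""
    ty, tx = theory_pt
    rows, cols = len(mask), len(mask[0])
    for r in range(max_radius + 1):
        for y in range(ty - r, ty + r + 1):
            for x in range(tx - r, tx + r + 1):
                # visit only the ring at Chebyshev distance r
                if (max(abs(y - ty), abs(x - tx)) == r
                        and 0 <= y < rows and 0 <= x < cols and mask[y][x]):
                    return (y, x)
    return None

def difference_array(points):
    """Build an N-1 x 2 difference array: sort the feature points along
    one coordinate axis and record (dy, dx) between adjacent points."""
    pts = sorted(points)
    return [(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:])]
```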
6. The artificial intelligence algorithm-based multi-person behavior recognition apparatus according to claim 1, wherein the trajectory determining module comprises:
the position extraction unit is used for extracting the region images within the preset time period based on the marked feature points, and extracting the positions of the feature points in the region images at different moments;
the curve inserting unit is used for inserting the positions of the feature points at the different moments into a preset background image, and generating a motion curve in the background image;
and the inflection point recognition unit is used for performing inflection point recognition on the motion curve, and determining the risk value of the feature point according to the inflection point recognition result.
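The curve-inserting unit can be sketched as ordering a feature point's positions by capture time and rasterizing the resulting polyline onto a blank background image; the linear interpolation between samples is an assumption, as the claim only says the positions are inserted into the background:

```python
def draw_motion_curve(positions_by_time, height, width):
    """Order a feature point's positions by time and draw the polyline
    onto a blank background grid (0 = background, 1 = curve)."""
    curve = [pos for _, pos in sorted(positions_by_time.items())]
    bg = [[0] * width for _ in range(height)]
    for (y0, x0), (y1, x1) in zip(curve, curve[1:]):
        steps = max(abs(y1 - y0), abs(x1 - x0), 1)
        for s in range(steps + 1):  # simple linear interpolation
            y = round(y0 + (y1 - y0) * s / steps)
            x = round(x0 + (x1 - x0) * s / steps)
            bg[y][x] = 1
    return curve, bg
```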
7. The artificial intelligence algorithm-based multi-person behavior recognition apparatus according to claim 6, wherein the inflection point recognition unit comprises:
the sampling point determining subunit is used for sequentially determining sampling points on the motion curve according to a preset detection step;
the curvature calculating subunit is used for calculating the curvature of the curve within a preset detection radius centered on each sampling point;
the comparison subunit is used for comparing the curvature with a preset curvature threshold, taking the sampling point as an inflection point when the curvature reaches the threshold, and assigning the curvature value to the inflection point;
and the calculating subunit is used for determining the risk value of the feature point from the assigned inflection points.
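The curvature test of claim 7 can be sketched with the Menger curvature of three consecutive samples; the additive risk score at the end is an assumption, since the claims do not disclose how the assigned inflection points are combined into a risk value:

```python
import math

def menger_curvature(p1, p2, p3):
    """Curvature of the circle through three points:
    4 * triangle area / product of the three side lengths."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    twice_area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))
    sides = math.dist(p1, p2) * math.dist(p2, p3) * math.dist(p1, p3)
    return 0.0 if sides == 0 else 2 * twice_area / sides

def inflection_risk(curve, step=1, curvature_threshold=0.5):
    """Walk the motion curve with a preset step, keep samples whose
    curvature reaches the threshold as inflection points, and sum
    their curvatures as an (assumed) risk value."""
    inflections, risk = [], 0.0
    for i in range(step, len(curve) - step, step):
        k = menger_curvature(curve[i - step], curve[i], curve[i + step])
        if k >= curvature_threshold:
            inflections.append((curve[i], k))
            risk += k
    return inflections, risk
```

A straight trajectory produces no inflection points and zero risk, while a sharp turn contributes its curvature to the score.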
8. A multi-person behavior recognition method based on an artificial intelligence algorithm, characterized by comprising the following steps:
acquiring a region image containing a heat source layer, and determining a motion region and a reference region in the region image according to the heat source layer; wherein the reference region is the mapping, in the region image, of a reference heat source within the monitored area, and the region images are indexed by time;
calculating the area extent of the motion region, and classifying it as an independent region or a collective region according to that extent;
performing content recognition on the collective region, and segmenting the collective region according to the content recognition result to obtain sub-regions;
determining feature points from the independent regions and sub-regions, acquiring position information of the feature points, determining distribution information from the position information, and marking the feature points according to the distribution information;
extracting the region images within a preset time period based on the marked feature points, determining the motion trajectory of each feature point from the region images at different moments, and determining a risk value for each feature point from its motion trajectory.
9. The artificial intelligence algorithm-based multi-person behavior recognition method according to claim 8, wherein the step of calculating the area extent of the motion region and classifying it as an independent region or a collective region comprises:
determining a contour curve in the heat source layer according to a preset heat value, and counting the total number of pixel points enclosed by the contour curve;
comparing the total number of pixel points with a preset threshold, and marking the motion region as an independent region when the total is within the preset range;
and marking the motion region as a collective region when the total exceeds the preset range.
10. The artificial intelligence algorithm-based multi-person behavior recognition method according to claim 9, wherein performing content recognition on the collective region and segmenting it according to the content recognition result to obtain sub-regions comprises:
performing contour recognition on the collective region according to a preset tolerance, and determining target regions from the contour recognition result;
determining the center point of each target region, counting the color values within the target region, calculating their mean, and assigning that mean to the center point;
collecting the assigned center points to generate a center lattice that maps onto the collective region;
and determining ear regions in the center lattice according to a preset feature framework, and segmenting the collective region along the ear regions to obtain the sub-regions.
CN202210050131.5A 2022-01-17 2022-01-17 Multi-person behavior recognition device and method based on artificial intelligence algorithm Pending CN114399535A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210050131.5A CN114399535A (en) 2022-01-17 2022-01-17 Multi-person behavior recognition device and method based on artificial intelligence algorithm

Publications (1)

Publication Number Publication Date
CN114399535A true CN114399535A (en) 2022-04-26

Family

ID=81230268

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210050131.5A Pending CN114399535A (en) 2022-01-17 2022-01-17 Multi-person behavior recognition device and method based on artificial intelligence algorithm

Country Status (1)

Country Link
CN (1) CN114399535A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103854027A * 2013-10-23 2014-06-11 Beijing University of Posts and Telecommunications Crowd behavior identification method
CN106127814A * 2016-07-18 2016-11-16 Sichuan Junyi Digital Technology Co., Ltd. Smart "golden eye" alarm method and device for recognizing crowd gathering and fighting
US20190371144A1 (en) * 2018-05-31 2019-12-05 Henry Shu Method and system for object motion and activity detection
CN111860383A (en) * 2020-07-27 2020-10-30 苏州市职业大学 Group abnormal behavior identification method, device, equipment and storage medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LEI QING; CHEN DUANSHENG; LI SHAOZI: "Recent Advances in Human Action Recognition in Complex Scenes", COMPUTER SCIENCE, no. 12, 15 December 2014 (2014-12-15) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination