Disclosure of Invention
The invention provides a personal solitary value detection method based on artificial intelligence, which comprises the following steps: acquiring a regional video image, acquiring real-time position information of each target in the regional video image, and carrying out face recognition on each target, the regional video image comprising at least one target; acquiring the action track of each target through the real-time position information, and calculating the action distance of each target; detecting the head gesture of each target to obtain the visual attention value of each target; obtaining each target interaction value through the action distance and the visual attention value; establishing a directed weighted graph, and respectively acquiring the payout value and the harvest value of each target interaction value through the adjacency matrix of the directed weighted graph; and obtaining the solitary value of each target through the payout value and the harvest value.
According to the technical means provided by the invention, the influence of a student's action track and visual attention value on the interaction value is taken into account, and whether an individual is solitary is quantified. The solitary value is measured through individual payout and individual feedback, making the measurement result more accurate, so that a student's solitary condition can be analyzed precisely and corresponding mental-health counseling can be given for different degrees of the solitary value, better matching the student's actual feelings.
The invention adopts the following technical scheme that the personal solitary value detection method based on artificial intelligence comprises the following steps:
acquiring an area video image, acquiring real-time position information of each target in the area video image, and carrying out face recognition on each target; the region video image comprises at least one target.
And acquiring the action track of each target through the real-time position information, and calculating the action distance of each target.
And detecting the head gesture of each target to obtain a visual attention value of each target.
And obtaining each target interaction value through the action distance and the visual attention value.
And establishing a directed weighted graph according to each target interaction value, and respectively acquiring the payout value and the harvest value of each target interaction value through the adjacency matrix of the directed weighted graph.
And obtaining the solitary value of each target through the payout value and the harvest value.
Further, the personal solitary value detection method based on artificial intelligence further comprises, after obtaining the solitary value of each target through the payout value and the harvest value, the following steps:
when the solitary value P ≤ 0, it indicates that the corresponding target has no solitary feeling;
when the solitary value P > 0, it indicates that the corresponding target has a solitary feeling;
and constructing a function curve from the solitary values by interpolation; if the solitary value decreases within a fixed time interval K, it indicates that the corresponding target can self-regulate and needs no assistance.
If the solitary value does not change, or increases, within the fixed time interval K, it indicates that the corresponding target fails to self-regulate and needs assistance.
Further, in the personal solitary value detection method based on artificial intelligence, obtaining the solitary value P of each target through the payout value and the harvest value comprises:

P = (C - R) / C

wherein P is the solitary value of each target, C is the payout value of each target's interaction values, and R is the harvest value of each target's interaction values.
Further, in the personal solitary value detection method based on artificial intelligence, obtaining each target interaction value H_AB through the action distance and the visual attention value comprises:

H_AB = S_A × Q_AB

wherein H_AB denotes the interaction value of target A toward target B, S_A denotes the action distance of target A, and Q_AB denotes the visual attention value of target A toward target B.
Further, in the personal solitary value detection method based on artificial intelligence, detecting the head gesture of each target to obtain the visual attention value of each target comprises the following steps:
performing head gesture detection on each target, obtaining the visual dwell time of each target whenever the head gesture conforms to the field-of-view observation gesture, and calculating the visual attention value Q_AB of each target, the expression being:

Q_AB = Σ_{i=1}^{m} t_i, counting only glances with t_i ≥ n

wherein Q_AB represents the visual attention value of target A toward target B, m is the number of observations of the target, t_i denotes the visual dwell time of the i-th glance, and n is a preset threshold.
Further, in the personal solitary value detection method based on artificial intelligence, acquiring the action track of each target through changes in the real-time position information of each target and calculating the action distance of each target comprises the following steps:
determining the initial position of each target; comparing, through the monitoring, the real-time position of each target with the initial position to obtain the action track of each target; and, after the real-time position of each target is stable, performing a perspective transformation of each target's position coordinates into the world coordinate system to obtain the action distance of each target.
Further, the method for detecting the personal solitary value based on the artificial intelligence establishes a directed weighted graph according to each target interaction value, and comprises the following steps:
in the directed weighted graph, each object is taken as a vertex, the vertices are connected in a bidirectional manner, and the object interaction value with the direction is taken as the weight value of the connecting line direction between the vertices.
Further, in the personal solitary value detection method based on artificial intelligence, respectively acquiring the payout value and the harvest value of each target interaction value through the adjacency matrix of the directed weighted graph comprises the following steps:
the payout value is the sum of all values of the row of the adjacency matrix for each of the targets.
The harvest value is the sum of all values of the column of the adjacency matrix corresponding to each target
The beneficial effects of the invention are as follows: according to the technical means provided by the invention, the influence of a student's action track and visual attention value on the interaction value is taken into account, and whether an individual is solitary is quantified. The solitary value is measured through individual payout and individual feedback, making the measurement result more accurate, so that a student's solitary condition can be analyzed precisely and corresponding mental-health counseling can be given for different degrees of the solitary value, better matching the student's actual feelings.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1, a personal solitary value detection method based on artificial intelligence according to an embodiment of the present invention is provided, including:
101. acquiring an area video image, acquiring real-time position information of each target in the area video image, and carrying out face recognition on each target; the region video image comprises at least one target.
In this embodiment, the area video is taken as an example of a primary school classroom, and each object in the area video image represents each individual pupil.
In order to make the acquired video image have an analytical meaning, the acquired video comprises at least one target, namely an individual pupil.
When the regional video is collected, cameras are installed at the front and the back of the primary school classroom, with a field of view covering the whole classroom so that every individual student in the classroom can be captured.
The classroom size is fixed, and the monitoring cameras are calibrated with a checkerboard method so that the camera coordinate system is registered to the world coordinate system; a student's position in the obtained regional video can then be converted into the student's real position in reality.
102. And acquiring the action track of each target through the real-time position information, and calculating the action distance of each target.
When class is in session, the students return to their seats; the position of each student's seat in the classroom at this moment is acquired and taken as the initial position.
Between classes, students constantly interact and play, or go to find other friends to play with, so their positions change; the actual positions of the students are obtained through the cameras installed in the classroom.
When classmate A gets up to find classmate B, once A and B are stable, i.e. the positions of both sides no longer change, the action distance S_A of classmate A and the action distance S_B of classmate B are calculated respectively.
While a student's position is changing, the current action distance is still accumulating and unstable, or the student may remain in a watching state while moving, so the final position is uncertain; in this embodiment, whether a student's position is stable is judged by setting a time threshold M.
103. And detecting the head gesture of each target to obtain a visual attention value of each target.
When a student approaches another student, not only does the student's position change, but the head also turns toward the corresponding direction. When the field-of-view observation gesture is obtained through head gesture detection, the first classmate in the field of view is taken as the student's object of attention.
Since people sometimes glance quickly, and one person's attention to another is then not well reflected by the head gesture, a time threshold n is set in this embodiment to decide whether the current classmate is really watching another classmate.
104. And each target interaction value is obtained through the action distance and the visual attention value.
Interaction is bidirectional, but the interaction values of the two directions differ, so each target interaction value has a direction: when classmate A approaches classmate B and classmate A's visual attention point is classmate B, the interaction is mainly initiated by classmate A, and the interaction value is expressed as H_AB.
105. And establishing a directed weighted graph according to each target interaction value, and respectively acquiring the payout value and the harvest value of each target interaction value through the adjacency matrix of the directed weighted graph.
The directed weighted graph is built for the class so that each classmate's interaction with the other classmates can be better observed and obtained.
Each student is taken as a vertex in the graph; the vertices are fully and bidirectionally connected, and the interaction value with direction is taken as the weight of the corresponding connection direction between vertices.
The interaction value indicates the flow of mutual payout and harvest between classmates: if classmate A's visual attention value toward classmate B is high, or classmate A's action distance toward classmate B is large, classmate A's payout in the interaction is higher. Likewise, harvest indicates classmate B's feedback to classmate A after A's payout, i.e. whether classmate B's interaction value toward classmate A is comparable.
106. And obtaining the solitary value of each target through the payout value and the harvest value.
The solitary value is obtained by calculating, over a certain time interval, the relationship between the attention a student pays out and the attention the student harvests.
In mutual interaction, when the effort one pays toward others is not proportional to the harvest received in return, the interaction generates a solitary feeling. If a classmate is isolated in the class, that classmate pays out interaction toward the others in the class but cannot obtain a corresponding interaction harvest from the other classmates.
According to the technical means provided by the invention, the influence of a student's action track and visual attention value on the interaction value is taken into account, and whether an individual is solitary is quantified. The solitary value is measured through individual payout and individual feedback, making the measurement result more accurate, so that a student's solitary condition can be analyzed precisely and corresponding mental-health counseling can be given for different degrees of the solitary value, better matching the student's actual feelings.
Example 2
As shown in fig. 2, a personal solitary value detection method based on artificial intelligence according to an embodiment of the present invention is provided, including:
201. acquiring an area video image, acquiring real-time position information of each target in the area video image, and carrying out face recognition on each target; the region video image comprises at least one target.
In order to make the acquired video image have an analytical meaning, the acquired video comprises at least one target, namely an individual pupil.
When the regional video is collected, cameras are installed at the front and the back of the primary school classroom, with a field of view covering the whole classroom so that every individual student in the classroom can be captured.
The classroom size is fixed, and the monitoring cameras are calibrated with a checkerboard method so that the camera coordinate system is registered to the world coordinate system; a student's position in the obtained regional video can then be converted into the student's real position in reality.
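The calibration described above makes pixel positions convertible to real floor positions. A minimal sketch of that conversion, assuming a planar floor and a 3×3 homography H already estimated from the checkerboard calibration (the function name and matrix values below are illustrative, not from the source):

```python
import numpy as np

def pixel_to_world(H, pixel_xy):
    """Map an image pixel to floor-plane world coordinates via the homography H."""
    u, v = pixel_xy
    p = H @ np.array([u, v, 1.0])   # homogeneous coordinates
    return p[0] / p[2], p[1] / p[2]  # perspective divide

# Illustrative homography: 100 pixels correspond to 1 metre on the floor plane.
H = np.diag([0.01, 0.01, 1.0])
```

In practice H would come from the checkerboard calibration (e.g. point correspondences between image corners and known floor positions), not from hand-written values.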
Face data of the students in a class are collected and stored in a database; a CNN neural network is trained on the collected face database so that it has a face recognition function and the identities of the corresponding classmates can be recognized from the monitoring. The face recognition data are obtained under lawful conditions.
202. And acquiring the action track of each target through the real-time position information, and calculating the action distance of each target.
Acquiring action tracks of the targets through the change of the real-time position information of the targets, and calculating action distances of the targets, wherein the method comprises the following steps:
determining the initial position of each target; comparing, through the monitoring, the real-time position of each target with the initial position to obtain the action track of each target; and, after the real-time position of each target is stable, performing a perspective transformation of each target's position coordinates into the world coordinate system to obtain the action distance of each target.
When class is in session, the students return to their seats; the position of each student's seat in the classroom at this moment is acquired and taken as the initial position.
Between classes, students constantly interact and play, or go to find other friends to play with, so their positions change; the actual positions of the students are obtained through the cameras installed in the classroom.
While a student's position is changing, the current action distance is still accumulating and unstable, or the student may remain in a watching state while moving, so the final position is uncertain; in this embodiment, whether a student's position is stable is judged by setting a time threshold M, and the value of M is taken as 10.
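The stability judgment with the time threshold M can be sketched as below, assuming one position sample per unit time so that M = 10 means ten consecutive near-identical samples; the tolerance eps is an added assumption:

```python
def position_stable(track, M=10, eps=0.1):
    """True when the last M samples of a position track stay within eps of each
    other, i.e. the target has stopped moving for the threshold duration."""
    if len(track) < M:
        return False
    xs = [p[0] for p in track[-M:]]
    ys = [p[1] for p in track[-M:]]
    return (max(xs) - min(xs)) <= eps and (max(ys) - min(ys)) <= eps
```

A stricter variant could require every consecutive displacement to stay below eps; the windowed range check above is the simpler reading.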
When classmate A sets out to find classmate B, once A and B are stable, i.e. the positions of both sides no longer change, the action distance S_A of classmate A is calculated.
After classmate A's action track, i.e. position, is stable, classmate A's action track is obtained. Owing to the known perspective-transform relation between the camera coordinate system and the world coordinate system, classmate A's track change in the monitoring image can be perspective-transformed to obtain the motion track in the real world, and thereby classmate A's action distance S_A.
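With the world-plane track recovered, the action distance reduces to the length of the polyline through the sampled world positions; a minimal sketch (the function name is illustrative):

```python
import math

def action_distance(world_track):
    """S_A: total distance travelled along a sequence of (x, y) world positions,
    summed over consecutive pairs of samples."""
    return sum(math.dist(a, b) for a, b in zip(world_track, world_track[1:]))
```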
203. And detecting the head gesture of each target to obtain a visual attention value of each target.
When a student approaches another student, not only does the student's position change, but the head also turns toward the corresponding direction. When the field-of-view observation gesture is obtained through head gesture detection, the first classmate in the field of view is taken as the student's object of attention.
Since people sometimes glance quickly, and one person's attention to another is then not well reflected by the head gesture, a time threshold n, measured in seconds, is set in this embodiment to decide whether the current classmate is really watching a certain classmate.
In the class, classmate A looks at classmate B m times, where the i-th viewing lasts t_i seconds. Performing head gesture detection on each target to obtain the visual attention value of each target comprises:
detecting the head gesture of each target, obtaining the visual dwell time of each target whenever the field-of-view observation gesture is met, and calculating the visual attention value Q_AB of each target, the expression being:

Q_AB = Σ_{i=1}^{m} t_i, counting only glances with t_i ≥ n

wherein Q_AB represents the visual attention value of target A toward target B, m is the number of observations of the target, t_i denotes the visual dwell time of the i-th glance, and n is a preset threshold.
The higher Q_AB is, the more attention classmate A pays to classmate B; the larger t_i is, the higher classmate A's attention to classmate B. The preset threshold n is selected in seconds.
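Under the reading above, only glances whose dwell time reaches the threshold n count, and the qualifying dwell times are summed; this summation form is a reconstruction, since the original formula image is not reproduced:

```python
def visual_attention(dwell_times, n):
    """Q_AB: sum of the dwell times t_i of A's glances at B, counting only
    glances with t_i >= n seconds (shorter glances are discarded)."""
    return sum(t for t in dwell_times if t >= n)
```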
204. And each target interaction value is obtained through the action distance and the visual attention value.
Interaction is bidirectional, but the interaction values of the two directions differ, so each target interaction value has a direction: when classmate A approaches classmate B and classmate A's visual attention point is classmate B, the interaction is mainly initiated by classmate A, and the interaction value is expressed as H_AB.
Obtaining each target interaction value H_AB through the action distance and the visual attention value comprises:

H_AB = S_A × Q_AB

wherein H_AB represents the interaction value of target A toward target B, S_A represents the action distance of target A, and Q_AB represents the visual attention value of target A toward target B.
Because the action distance and the visual attention are independent and do not share the same dimension, the scheme multiplies them when calculating the interaction value.
After classmate A reaches a stable position, the classmate being sought is determined from the head gesture and classmate A's visual attention is obtained, so the final interaction value has directivity. If classmate A pays attention to no classmate after reaching the stable position, the visual attention value is 0 and classmate A's final interaction value is 0, which better matches the actual scene.
The higher H_AB is, the stronger classmate A's willingness to go and interact with classmate B; the lower it is, the weaker the interaction and the weaker the willingness. Similarly, H_BA gives how strongly classmate B is willing to interact with classmate A, and thus the interaction value between every pair of classmates is obtained.
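Since the scheme multiplies the two independent quantities, the directed interaction value is simply their product; a minimal sketch (the function name is illustrative):

```python
def interaction_value(action_dist, attention):
    """H_AB: directed interaction value of A toward B, the product of A's action
    distance S_A and A's visual attention value Q_AB toward B."""
    return action_dist * attention
```

Note that zero attention forces a zero interaction value regardless of distance walked, matching the no-attention case described above.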
205. And establishing a directed weighted graph according to each target interaction value, and respectively acquiring the payout value and the harvest value of each target interaction value through the adjacency matrix of the directed weighted graph. Establishing the directed weighted graph according to each target interaction value comprises:
in the directed weighted graph, each target is taken as a vertex, the vertices are connected in a bidirectional manner, and the target interaction value with the direction is taken as the weight value of the connecting line direction between the vertices.
As shown in fig. 3, a directed weighted graphical illustration in an artificial intelligence-based personal solitary value detection method is provided in an embodiment of the present invention.
The directed weighted graph is built for the class so that each classmate's interaction with the other classmates can be better observed and obtained.
In the graph, each student serves as a vertex; all vertices are fully and bidirectionally connected, and the interaction value with direction serves as the weight of the corresponding connection direction between vertices.
The arrows represent the interaction direction; each circle is a vertex in the graph and also represents a classmate. The edge weights represent the interaction values: an outgoing edge indicates payout, and an incoming edge indicates harvest.
Interaction is the flow of mutual payout and harvest between students. Each person's upper limit of payout should be the same, but because some people are reluctant to interact, the total interaction value each person actually contributes differs.
The interaction value indicates the flow of mutual payout and harvest between classmates: if classmate A's visual attention value toward classmate B is high, or classmate A's action distance toward classmate B is large, classmate A's payout in the interaction is higher. Likewise, harvest indicates classmate B's feedback to classmate A after A's payout, i.e. whether classmate B's interaction value toward classmate A is comparable.
Respectively obtaining a payment value and a harvest value of each target interaction value through the adjacency matrix of the directed weighted graph, wherein the method comprises the following steps:
the payout value is the sum of all values of the row of the adjacency matrix corresponding to each target;
the harvest value is the sum of all values of the column of the adjacency matrix for each of the targets.
206. And obtaining the solitary value of each target through the payout value and the harvest value.
In mutual interaction, when the effort one pays toward others is not proportional to the harvest received in return, the interaction generates a solitary feeling. If a classmate is isolated in the class, that classmate pays out interaction toward the others in the class but cannot obtain a corresponding interaction harvest from the other classmates.
Obtaining the solitary value P of each target through the payout value and the harvest value comprises:

P = (C - R) / C

wherein P is the solitary value of each target, C is the payout value of each target's interaction values, and R is the harvest value of each target's interaction values.
The greater the difference between the payout value C and the harvest value R, the greater the gap between classmate A's effort and return. The difference between payout and harvest, taken as a proportion of the payout, P = (C - R) / C, represents the solitary value: the larger P is, the stronger the solitary feeling. Each classmate thus corresponds to an individual solitary value P.
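One consistent reading of the proportion described above is P = (C - R) / C, which gives P = 1 when nothing is harvested and P = 0 at balance; a minimal sketch (the zero-payout guard is an added assumption):

```python
def solitary_value(payout, harvest):
    """P = (payout - harvest) / payout: 1 means no feedback at all, 0 means
    payout and harvest balance, negative means harvest exceeds payout."""
    if payout == 0:
        return 0.0  # degenerate case: no interaction attempted (assumption)
    return (payout - harvest) / payout
```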
After obtaining the solitary value through the payout value and the harvest value, the method further comprises:
when the solitary value P ≤ 0, it indicates that the corresponding target has no solitary feeling;
when the solitary value P > 0, it indicates that the corresponding target has a solitary feeling.
the value of P is a percentage and can be negative, and when P is 0, the effort is equal to the harvest, possibly well-related, and equilibrium can be reached, but the effort is also not taken to be active when isolatedIs approximately 0.
When P > 0, the payout is greater than the harvest; when P < 0, the payout is less than the harvest. When P = 1, the student is considered passively isolated, representing that all of the student's payout has received no harvest.
A function curve is constructed from the solitary values by interpolation; if the solitary value decreases within a fixed time interval K, it indicates that the corresponding target can self-regulate and needs no assistance.
If the solitary value does not change, or increases, within the fixed time interval K, it indicates that the corresponding target fails to self-regulate and needs assistance. In this embodiment, K = 2, measured in days.
According to the technical means provided by the invention, the influence of a student's action track and visual attention value on the interaction value is taken into account, and whether an individual is solitary is quantified. The solitary value is measured through individual payout and individual feedback, making the measurement result more accurate, so that a student's solitary condition can be analyzed precisely and corresponding mental-health counseling can be given for different degrees of the solitary value, better matching the student's actual feelings.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, alternatives, and improvements that fall within the spirit and scope of the invention.