CN114387657B - Personal solitary value detection method based on artificial intelligence - Google Patents


Info

Publication number
CN114387657B
CN114387657B
Authority
CN
China
Prior art keywords
value
target
solitary
interaction
artificial intelligence
Prior art date
Legal status
Active
Application number
CN202210045319.0A
Other languages
Chinese (zh)
Other versions
CN114387657A (en)
Inventor
尚二朝
杨兰芳
Current Assignee
Henan Chuangwei Technology Co ltd
Original Assignee
Henan Chuangwei Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Henan Chuangwei Technology Co ltd filed Critical Henan Chuangwei Technology Co ltd
Priority to CN202210045319.0A priority Critical patent/CN114387657B/en
Publication of CN114387657A publication Critical patent/CN114387657A/en
Application granted granted Critical
Publication of CN114387657B publication Critical patent/CN114387657B/en


Classifications

    • G06N 3/045 — Computing arrangements based on biological models; neural networks; architecture, e.g. interconnection topology; combinations of networks
    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G16H 20/70 — ICT specially adapted for therapies or health-improving plans, relating to mental therapies, e.g. psychological therapy or autogenous training
    • G16H 50/30 — ICT specially adapted for medical diagnosis, simulation or data mining; for calculating health indices; for individual health risk assessment

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Primary Health Care (AREA)
  • Evolutionary Computation (AREA)
  • Epidemiology (AREA)
  • Psychology (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Hospice & Palliative Care (AREA)
  • Developmental Disabilities (AREA)
  • Child & Adolescent Psychology (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Image Analysis (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a personal solitary value detection method based on artificial intelligence. It relates to the field of artificial intelligence and is mainly used for detecting the solitary values of students. The method comprises: acquiring a regional video image, acquiring real-time position information of each target in the regional video image, and performing face recognition on each target, where the regional video image contains at least one target; obtaining the action track of each target from the real-time position information and calculating the action distance of each target; detecting the head pose of each target to obtain the visual attention value of each target; obtaining each target's interaction value from the action distance and the visual attention value; establishing a directed weighted graph and obtaining, from its adjacency matrix, the payout value and the harvest value of each target's interaction values; and obtaining each target's solitary value from the payout value and the harvest value. With the technical means provided by the invention, the solitary value is measured through individual payout and individual feedback, so that it better matches how students actually feel and the measurement result is more accurate.

Description

Personal solitary value detection method based on artificial intelligence
Technical Field
The invention relates to the field of artificial intelligence, in particular to a personal solitary value detection method based on artificial intelligence.
Background
In school education, a solution is needed that can detect the mental health development of students while assisting teachers in easing students' psychological states.
The method targets passive isolation: in a conventional passive-isolation monitoring setting, whether a person is isolated is judged from the fact that many other people no longer interact with that person.
Disclosure of Invention
The invention provides a personal solitary value detection method based on artificial intelligence, which comprises the following steps: acquiring a regional video image, acquiring real-time position information of each target in the regional video image, and performing face recognition on each target, where the regional video image contains at least one target; obtaining the action track of each target from the real-time position information and calculating the action distance of each target; detecting the head pose of each target to obtain the visual attention value of each target; obtaining each target's interaction value from the action distance and the visual attention value; establishing a directed weighted graph and obtaining, from its adjacency matrix, the payout value and the harvest value of each target's interaction values; and obtaining the solitary value of each target from the payout value and the harvest value.
With the technical means provided by the invention, the influence of a student's action track and visual attention value on the interaction value is taken into account, and whether an individual is solitary is quantified. The solitary value is measured through individual payout and individual feedback, so the measurement result is more accurate and the student's solitary condition can be analyzed precisely; corresponding mental health coaching can then be given to students according to different degrees of solitude, which better matches how the students actually feel.
The invention adopts the following technical scheme. The personal solitary value detection method based on artificial intelligence comprises the following steps:
Acquiring a regional video image, acquiring real-time position information of each target in the regional video image, and performing face recognition on each target; the regional video image contains at least one target.
Acquiring the action track of each target from the real-time position information, and calculating the action distance of each target.
Detecting the head pose of each target to obtain the visual attention value of each target.
Obtaining each target's interaction value from the action distance and the visual attention value.
Establishing a directed weighted graph according to each target's interaction value, and obtaining the payout value and the harvest value of each target's interaction values from the adjacency matrix of the directed weighted graph.
Obtaining the solitary value of each target from the payout value and the harvest value.
Further, the personal solitary value detection method based on artificial intelligence comprises, after the solitary value of each target is obtained from the payout value and the harvest value:
When the solitary value P ≤ 0, the corresponding target has no feeling of solitude.
When the solitary value P > 0, the corresponding target has a feeling of solitude.
A function curve is constructed from the solitary values by interpolation; if the solitary value decreases within a fixed time interval K, the corresponding target can self-regulate and needs no assistance.
If the solitary value does not change, or increases, within the fixed time interval K, the corresponding target fails to self-regulate and needs assistance.
Further, in the personal solitary value detection method based on artificial intelligence, obtaining the solitary value P of each target from the payout value and the harvest value comprises:

P_A = (G_A - S_A) / G_A

where P_A is the solitary value of target A, G_A is the payout value of target A's interaction values, and S_A is the harvest value of target A's interaction values.
Further, in the personal solitary value detection method based on artificial intelligence, obtaining each target's interaction value h from the action distance and the visual attention value comprises:

h_AB = d_A × F_AB

where h_AB is the interaction value of target A toward target B, d_A is the action distance of target A, and F_AB is the visual attention value of target A toward target B.
Further, in the personal solitary value detection method based on artificial intelligence, detecting the head pose of each target to obtain the visual attention value of each target comprises:
Performing head pose detection on each target, obtaining the visual dwell time of each target whenever the head pose matches the field-of-view observation pose, and calculating the visual attention value F of each target:

F_AB = Σ_{i=1}^{k} t_i, counting only dwell times t_i ≥ n

where F_AB is the visual attention value of target A toward target B, k is the number of observations of the target, t_i is the visual dwell time of the i-th observation, and n is a preset threshold.
Further, in the personal solitary value detection method based on artificial intelligence, obtaining the action track of each target from the change of its real-time position information and calculating the action distance of each target comprises:
Determining the initial position of each target, and comparing, through the monitoring, the real-time position of each target with the initial position to obtain its action track; after the real-time position of each target has stabilized, performing a perspective transformation of each target's position coordinates into the world coordinate system to obtain the action distance of each target.
Further, in the personal solitary value detection method based on artificial intelligence, establishing the directed weighted graph according to each target's interaction value comprises:
In the directed weighted graph, each target is taken as a vertex, the vertices are connected bidirectionally, and the directed target interaction value is taken as the weight of the corresponding direction of the edge between vertices.
Further, in the personal solitary value detection method based on artificial intelligence, obtaining the payout value and the harvest value of each target's interaction values from the adjacency matrix of the directed weighted graph comprises:
The payout value is the sum of all values in the row of the adjacency matrix corresponding to each target.
The harvest value is the sum of all values in the column of the adjacency matrix corresponding to each target.
The beneficial effects of the invention are as follows. With the technical means provided by the invention, the influence of a student's action track and visual attention value on the interaction value is taken into account, and whether an individual is solitary is quantified. The solitary value is measured through individual payout and individual feedback, so the measurement result is more accurate and the student's solitary condition can be analyzed precisely; corresponding mental health coaching can then be given to students according to different degrees of solitude, which better matches how the students actually feel.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings used in describing the embodiments or the prior art are briefly introduced below. The drawings described below are obviously only some embodiments of the invention; a person skilled in the art can obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart of a personal solitary value detection method based on artificial intelligence in an embodiment of the invention;
FIG. 2 is a flow chart of another method for detecting personal solitary value based on artificial intelligence according to an embodiment of the invention;
fig. 3 is a directed weighted graphical illustration of an artificial intelligence-based personal solitary value detection method in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
As shown in fig. 1, a personal solitary value detection method based on artificial intelligence according to an embodiment of the present invention is provided, including:
101. acquiring an area video image, acquiring real-time position information of each target in the area video image, and carrying out face recognition on each target; the region video image comprises at least one target.
In this embodiment, the regional video is taken from a primary school classroom as an example, and each target in the regional video image represents an individual pupil.
For the acquired video image to be meaningful for analysis, the acquired video contains at least one target, i.e., an individual pupil.
When the regional video is collected, cameras are installed at the front and the back of the primary school classroom so that their fields of view cover the whole classroom and every individual student in the classroom can be captured.
The classroom size is fixed, and the surveillance cameras are calibrated with a checkerboard method so that the camera coordinate system is registered to the world coordinate system; a student's position in the video can then be converted into the student's real position from the acquired regional video.
102. And acquiring the action track of each target through the real-time position information, and calculating the action distance of each target.
When class is in session, students return to their seats; each student's position at their seat in the classroom at this moment is taken as the initial position.
During breaks, students continually interact and play, or go looking for other friends to play with, so their positions change; the students' actual positions are obtained through the cameras installed in the classroom.
When classmate A gets up to find classmate B, once A and B are stable, i.e., neither position changes any more, the action distance d_A of classmate A and the action distance d_B of classmate B are calculated respectively.
While a student's position is still changing, the current action distance keeps growing and is unstable, and the student may pause to look around during the movement, so the final position is uncertain; in this embodiment, whether a student's position is stable is judged by setting a time threshold M.
103. And detecting the head gesture of each target to obtain a visual attention value of each target.
When one student approaches another, not only does the student's position change, but the student's head also turns toward the corresponding direction. When the student assumes the field-of-view observation pose, head pose detection takes the student within the field of view as the object of the student's attention.
Since people sometimes glance quickly, and a quick glance at another person is not well reflected by the head pose, a time threshold n is set in this embodiment to decide whether the current student is really looking at a particular classmate.
104. And each target interaction value is obtained through the action distance and the visual attention.
Because interaction is bidirectional but the two interaction values differ during the interaction, each target's interaction value has a direction. When classmate A approaches classmate B and A's visual attention point is B, the interaction comes mainly from classmate A, and the interaction value is expressed as h_AB.
105. And establishing a directed weighted graph according to each target interaction value, and respectively acquiring a paying-out value and a harvesting value of each target interaction value through an adjacent matrix of the directed weighted graph.
Building the directed weighted graph over the class makes it easier to observe and obtain each classmate's interaction with every other classmate.
Each student is taken as a vertex of the graph; the vertices are fully connected, each connection is bidirectional, and the directed interaction value is taken as the weight of the corresponding direction of the edge between two vertices.
The interaction value reflects the flow between what classmates pay out to and harvest from each other: if classmate A's visual attention value toward classmate B is high, or A's action distance toward B is large, classmate A pays out more in the interaction. Likewise, the harvest indicates classmate B's feedback to classmate A after A's payout, i.e., whether B's interaction value toward A is comparable.
106. Obtaining the solitary value of each target through the payout value and the harvest value.
The solitary value is obtained by relating the attention a student pays out to the attention the student harvests over a certain time interval.
In mutual interaction, when the effort a person puts into interacting with others is out of proportion to what the person harvests back, the interaction feedback produces a feeling of solitude. If classmate A is isolated in the class, classmate A pays out interaction to others in the class but cannot obtain any interaction harvest from the other classmates.
With the technical means provided by the invention, the influence of a student's action track and visual attention value on the interaction value is taken into account, and whether an individual is solitary is quantified. The solitary value is measured through individual payout and individual feedback, so the measurement result is more accurate and the student's solitary condition can be analyzed precisely; corresponding mental health coaching can then be given to students according to different degrees of solitude, which better matches how the students actually feel.
Example 2
As shown in fig. 2, a personal solitary value detection method based on artificial intelligence according to an embodiment of the present invention is provided, including:
201. acquiring an area video image, acquiring real-time position information of each target in the area video image, and carrying out face recognition on each target; the region video image comprises at least one target.
For the acquired video image to be meaningful for analysis, the acquired video contains at least one target, i.e., an individual pupil.
When the regional video is collected, cameras are installed at the front and the back of the primary school classroom so that their fields of view cover the whole classroom and every individual student in the classroom can be captured.
The classroom size is fixed, and the surveillance cameras are calibrated with a checkerboard method so that the camera coordinate system is registered to the world coordinate system; a student's position in the video can then be converted into the student's real position from the acquired regional video.
Face data of the students in the class are collected and stored in a database, and a CNN is trained on the collected face database so that it can perform face recognition and identify the corresponding classmates from the surveillance video; the face data are obtained under lawful conditions.
202. And acquiring the action track of each target through the real-time position information, and calculating the action distance of each target.
Acquiring the action track of each target through the change of its real-time position information and calculating the action distance of each target comprises the following steps:
Determining the initial position of each target, and comparing, through the monitoring, the real-time position of each target with the initial position to obtain its action track; after the real-time position of each target has stabilized, performing a perspective transformation of each target's position coordinates into the world coordinate system to obtain the action distance of each target.
When class is in session, students return to their seats; each student's position at their seat in the classroom at this moment is taken as the initial position.
During breaks, students continually interact and play, or go looking for other friends to play with, so their positions change; the students' actual positions are obtained through the cameras installed in the classroom.
While a student's position is still changing, the current action distance keeps growing and is unstable, and the student may pause to look around during the movement, so the final position is uncertain; in this embodiment, whether a student's position is stable is judged by setting a time threshold M, and M is set to 10 in this embodiment.
When classmate A gets up to find classmate B, the action distance of classmate A is calculated once A and B are stable, i.e., once the positions of both sides no longer change.
After classmate A's action track, i.e., position, has stabilized, classmate A's position change is obtained. Because the perspective transformation between the camera coordinate system and the world coordinate system is known, classmate A's track change in the surveillance image is perspective-transformed to obtain the motion track in the real world, and hence the action distance d_A of classmate A.
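The step above can be sketched as follows. This is a minimal sketch, assuming a 3×3 homography `H` mapping image pixels to world coordinates is already known from the checkerboard calibration described earlier; the function names `to_world` and `action_distance` are ours, not from the patent.

```python
import numpy as np

def to_world(H, pt):
    """Map one image point (u, v) to world coordinates via the homography H."""
    u, v = pt
    x, y, w = H @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

def action_distance(H, track):
    """Length of the target's path in world coordinates, summed segment by segment."""
    pts = np.array([to_world(H, p) for p in track])
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))
```

The track is only measured after the position has been stable for the time threshold M, as described above.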
203. And detecting the head gesture of each target to obtain a visual attention value of each target.
When one student approaches another, not only does the student's position change, but the student's head also turns toward the corresponding direction. When the student assumes the field-of-view observation pose, head pose detection takes the student within the field of view as the object of the student's attention.
Since people sometimes glance quickly, and a quick glance at another person is not well reflected by the head pose, a time threshold n (in seconds) is set in this embodiment to decide whether the current student is really looking at a particular classmate.
In the class, classmate A looks at classmate B k times, with the i-th viewing lasting t_i. Performing head pose detection on each target to obtain the visual attention value of each target comprises:
Detecting the head pose of each target, obtaining the visual dwell time of each target whenever the pose matches the field-of-view observation pose, and calculating the visual attention value F of each target:

F_AB = Σ_{i=1}^{k} t_i, counting only dwell times t_i ≥ n

where F_AB is the visual attention value of target A toward target B, k is the number of observations of the target, t_i is the visual dwell time of the i-th observation, and n is a preset threshold.
The higher F_AB is, the more attention classmate A pays to classmate B. The threshold n is specified in seconds.
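The attention value can be sketched as follows. One assumption is made explicit here: only stays whose dwell time reaches the threshold n contribute to F_AB, which is how we read the threshold's role in the text; the function name is ours.

```python
def visual_attention(dwell_times, n):
    """F_AB: sum of the visual dwell times t_i (seconds) that last at least n seconds.

    dwell_times: the per-observation dwell times t_1..t_k of A looking at B.
    n: preset threshold (seconds) filtering out quick glances.
    """
    return float(sum(t for t in dwell_times if t >= n))
```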
204. And each target interaction value is obtained through the action distance and the visual attention.
Because interaction is bidirectional but the two interaction values differ during the interaction, each target's interaction value has a direction. When classmate A approaches classmate B and A's visual attention point is B, the interaction comes mainly from classmate A, and the interaction value is expressed as h_AB.
Obtaining each target's interaction value h from the action distance and the visual attention value comprises:

h_AB = d_A × F_AB

where h_AB is the interaction value of target A toward target B, d_A is the action distance of target A, and F_AB is the visual attention value of target A toward target B.
Because the action distance and the visual attention value are independent and not of the same kind, this scheme multiplies them when computing the interaction value.
After classmate A reaches a stable position, the classmate A went to find is determined from A's head pose and A's visual attention value is obtained, so the final interaction value is directional. If classmate A pays attention to no classmate after reaching a stable position, the visual attention value is 0 and classmate A's final interaction value is 0, which better matches the actual scene.
The higher the value of h_AB, the stronger classmate A's willingness to seek out and interact with classmate B; the lower it is, the weaker the interaction and the weaker the willingness. Similarly, h_BA measures how strongly classmate B is willing to interact with classmate A. In this way the interaction value between every pair of classmates is obtained.
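Under these definitions, the directed interaction values for a whole class can be collected into one matrix, which later serves as the adjacency matrix of the directed weighted graph. This is a sketch: `d` holds each target's action distance and `F[i][j]` the visual attention of target i toward target j; both names are ours.

```python
import numpy as np

def interaction_matrix(d, F):
    """W[i, j] = d[i] * F[i][j]: directed interaction value h_ij of target i toward j."""
    W = np.asarray(d, dtype=float)[:, None] * np.asarray(F, dtype=float)
    np.fill_diagonal(W, 0.0)  # a target does not interact with itself
    return W
```

A target with zero visual attention toward everyone gets an all-zero row, matching the "interaction value is 0" case above.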
205. And establishing a directed weighted graph according to each target interaction value, and respectively acquiring a paying-out value and a harvesting value of each target interaction value through an adjacent matrix of the directed weighted graph. Establishing a directed weighted graph according to each target interaction value, including:
in the directed weighted graph, each target is taken as a vertex, the vertices are connected in a bidirectional manner, and the target interaction value with the direction is taken as the weight value of the connecting line direction between the vertices.
As shown in fig. 3, a directed weighted graphical illustration in an artificial intelligence-based personal solitary value detection method is provided in an embodiment of the present invention.
Building the directed weighted graph over the class makes it easier to observe and obtain each classmate's interaction with every other classmate.
Each student is taken as a vertex of the graph; all vertices are fully connected, the connections are bidirectional, and the directed interaction value is taken as the weight of the corresponding direction of the edge between two vertices.
The arrows represent the interaction direction; each circle is a vertex of the graph and also represents a classmate, and the edge weights represent the interaction values: an outgoing edge indicates payout, and an incoming edge indicates harvest.
Interaction is the flow between what the students pay out to and harvest from each other. Everyone's capacity for interaction should be comparable, but because some people are reluctant to interact, the total interaction value flowing to each person is not the same.
The interaction value reflects the flow between what classmates pay out to and harvest from each other: if classmate A's visual attention value toward classmate B is high, or A's action distance toward B is large, classmate A pays out more in the interaction. Likewise, the harvest indicates classmate B's feedback to classmate A after A's payout, i.e., whether B's interaction value toward A is comparable.
Respectively obtaining a payment value and a harvest value of each target interaction value through the adjacency matrix of the directed weighted graph, wherein the method comprises the following steps:
the payout value is the sum of all values of the row of the adjacency matrix corresponding to each target;
the harvest value is the sum of all values of the column of the adjacency matrix for each of the targets.
206. Obtaining the solitary value of each target through the payout value and the harvest value.
In mutual interaction, when the effort a person puts into interacting with others is out of proportion to what the person harvests back, the interaction feedback produces a feeling of solitude. If classmate A is isolated in the class, classmate A pays out interaction to others in the class but cannot obtain any interaction harvest from the other classmates.
Obtaining the solitary value P of each target through the payout value and the harvest value comprises:
P = (payout value − harvest value) / payout value
wherein P is the solitary value of each target, the payout value is that of each target's interaction values, and the harvest value is that of each target's interaction values.
The greater the difference between the payout value and the harvest value, the more the student's effort exceeds the harvest received. The proportion that this difference occupies in the payout value is used to represent the solitary value: the larger P is, the stronger the feeling of isolation. Each student thus corresponds to a solitary value P of their own.
After obtaining the payout value and the harvest value, the method further comprises:
when the solitary value P ≤ 0, the target corresponding to the solitary value is not isolated;
when the solitary value P > 0, the target corresponding to the solitary value has a feeling of isolation;
the value of P is a percentage and can be negative. When P = 0, effort equals harvest, the relationship may well be good, and equilibrium is reached; note, however, that a target who is isolated and makes no active effort also has a payout of approximately 0.
When P > 0 the effort is greater than the harvest, and when P < 0 the effort is less than the harvest; when P = 1, the student is considered to have been passively isolated, representing that all of the effort has received no harvest.
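The solitary value and the threshold test above can be sketched as follows; the function name and the sample payout/harvest numbers are assumptions for illustration:

```python
def solitary_value(payout, harvest):
    # P = (payout - harvest) / payout: the share of the payout that was
    # not returned as harvest. Assumes payout > 0; a passively isolated
    # target with payout ~ 0 must be handled separately, as noted above.
    return (payout - harvest) / payout

p = solitary_value(1.3, 0.8)          # effort exceeds harvest -> P > 0
balanced = solitary_value(1.0, 1.0)   # P = 0: effort equals harvest
ignored = solitary_value(1.0, 0.0)    # P = 1: all effort, no harvest
is_solitary = p > 0                   # P > 0 -> feeling of isolation
```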
A function curve is constructed from the solitary values by interpolation. If the solitary value decreases within a fixed time interval K, the target corresponding to the solitary value can self-regulate and needs no help;
if the solitary value does not change or increases within the fixed time interval K, the target corresponding to the solitary value fails to self-regulate and needs assistance. In this embodiment K = 2, measured in days.
The technical scheme provided by the invention takes into account the influence of students' action tracks and visual attention values on the interaction value and quantifies whether an individual is solitary. Measuring the solitary value through individual payout and individual feedback makes the measurement result more accurate, so that the solitary condition of students can be analyzed precisely and psychological health counseling matched to the different degrees of solitariness, and closer to how the students actually feel, can be provided.
The foregoing describes only preferred embodiments of the invention and is not intended to limit it; any modification, equivalent substitution, or improvement made within the spirit and principles of the invention shall fall within its scope of protection.

Claims (8)

1. A personal solitary value detection method based on artificial intelligence, characterized by comprising the following steps:
acquiring a regional video image, acquiring real-time position information of each target in the regional video image, and performing face recognition on each target, the regional video image comprising at least one target;
acquiring action tracks of the targets through the real-time position information, and calculating action distances of the targets;
performing head gesture detection on each target to obtain a visual attention value of each target;
obtaining each target interaction value through the action distance and the visual attention value;
establishing a directed weighted graph according to each target interaction value, and respectively obtaining a payout value and a harvest value of each target interaction value through the adjacency matrix of the directed weighted graph;
and obtaining the individual value of each target through the paying-out value and the harvesting value.
2. The personal solitary value detection method based on artificial intelligence of claim 1, wherein after obtaining the payout value and the harvest value, the method further comprises:
when the solitary value P ≤ 0, the target corresponding to the solitary value is not isolated;
when the solitary value P > 0, the target corresponding to the solitary value has a feeling of isolation;
constructing a function curve from the solitary values by interpolation, wherein if the solitary value decreases within a fixed time interval K, the target corresponding to the solitary value can self-regulate and needs no help;
and if the solitary value does not change or increases within the fixed time interval K, the target corresponding to the solitary value fails to self-regulate and needs assistance.
3. The personal solitary value detection method based on artificial intelligence according to claim 1, wherein obtaining the solitary value P of each target through the payout value and the harvest value comprises:
P = (payout value − harvest value) / payout value
wherein P is the solitary value of each target, the payout value is that of each target's interaction values, and the harvest value is that of each target's interaction values.
4. The personal solitary value detection method based on artificial intelligence of claim 1, wherein obtaining each target interaction value through the action distance and the visual attention value comprises:
computing the interaction value of target A toward target B from the action distance of target A and the visual attention value of target A toward target B.
5. The personal solitary value detection method based on artificial intelligence of claim 1, wherein performing head posture detection on each target to obtain the visual attention value of each target comprises:
detecting the head posture of each target, obtaining the visual retention time of each target whenever the observing posture is satisfied, and calculating the visual attention value of target A toward target B,
wherein the calculation uses the number of observations of the target, the visual retention time of the target at the i-th observation, and a preset threshold.
6. The personal solitary value detection method based on artificial intelligence according to claim 1, wherein obtaining the action track of each target through changes in its real-time position information and calculating its action distance comprise the following steps:
determining the initial position of each target; comparing, through monitoring, the real-time position of each target with the initial position to obtain its action track; and, after the real-time position of each target is stable, performing perspective transformation on its position coordinates in the world coordinate system, thereby obtaining the action distance of each target.
7. The personal solitary value detection method based on artificial intelligence of claim 1, wherein establishing the directed weighted graph according to each target interaction value comprises the following steps:
in the directed weighted graph, each object is taken as a vertex, the vertices are connected in a bidirectional manner, and the object interaction value with the direction is taken as the weight value of the connecting line direction between the vertices.
8. The personal solitary value detection method based on artificial intelligence according to claim 1, wherein obtaining the payout value and the harvest value of each target interaction value through the adjacency matrix of the directed weighted graph comprises the following steps:
the payout value is the sum of all values of the row of the adjacency matrix corresponding to each target;
the harvest value is the sum of all values of the column of the adjacency matrix for each of the targets.
CN202210045319.0A 2022-01-15 2022-01-15 Personal solitary value detection method based on artificial intelligence Active CN114387657B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210045319.0A CN114387657B (en) 2022-01-15 2022-01-15 Personal solitary value detection method based on artificial intelligence


Publications (2)

Publication Number Publication Date
CN114387657A CN114387657A (en) 2022-04-22
CN114387657B true CN114387657B (en) 2023-09-22

Family

ID=81202667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210045319.0A Active CN114387657B (en) 2022-01-15 2022-01-15 Personal solitary value detection method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN114387657B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116823511B (en) * 2023-08-30 2024-01-09 北京中科心研科技有限公司 Method and device for identifying social isolation state of user and wearable device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109873812A (en) * 2019-01-28 2019-06-11 腾讯科技(深圳)有限公司 Method for detecting abnormality, device and computer equipment
CN111820880A (en) * 2020-07-16 2020-10-27 深圳鞠慈云科技有限公司 Campus overlord early warning system and method
CN112686462A (en) * 2021-01-06 2021-04-20 广州视源电子科技股份有限公司 Student portrait-based anomaly detection method, device, equipment and storage medium
CN113128383A (en) * 2021-04-07 2021-07-16 杭州海宴科技有限公司 Recognition method for campus student cheating behavior
CN113544681A (en) * 2019-01-21 2021-10-22 比特梵德知识产权管理有限公司 Anti-network spoofing system and method
CN113553351A (en) * 2021-06-07 2021-10-26 江苏师范大学 Class deception subject-object recognition and deception probability ordering method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Liang Ye, Tong Liu. Campus Violence Detection Based on Artificial Intelligent Interpretation of Surveillance Video Sequences. In: AI Interpretation of Satellite, Aerial, Ground, and Underwater Image and Video Sequences. 2021, full text. *
Research on a Recommendation Model Based on Trust Transfer in Complex Network Environments; Li Hui; Ma Xiaoping; Shi; Li Cunhua; Zhong Zhaoman; Cai Hong; Acta Automatica Sinica (Issue 02); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20230607
Address after: No. 2101, Building C-HH-7, Ruikai International Plot C and D-1, north of Yisan Road, Nanguan District, Changchun City, Jilin Province, 130000
Applicant after: Changchun Chunxiu Technology Co.,Ltd.
Address before: 200433 No. 220, Handan Road, Shanghai, Yangpu District
Applicant before: Shang Erchao
TA01 Transfer of patent application right
Effective date of registration: 20230824
Address after: Building 201, Unit 2, No. 5, Shuixie, Southwest Bank, Weidu Street and Kunwu Road Intersection, Urban Rural Integration Demonstration Zone, Puyang City, Henan Province, 457000
Applicant after: Henan Chuangwei Technology Co.,Ltd.
Address before: No. 2101, Building C-HH-7, Ruikai International Plot C and D-1, north of Yisan Road, Nanguan District, Changchun City, Jilin Province, 130000
Applicant before: Changchun Chunxiu Technology Co.,Ltd.
GR01 Patent grant