CN112784714A - Real-time detection system and method for reading attention of user group based on library shared space - Google Patents


Info

Publication number
CN112784714A
CN112784714A
Authority
CN
China
Prior art keywords
shared space
user group
stripe
processing system
computer processing
Prior art date
Legal status
Pending
Application number
CN202110033487.3A
Other languages
Chinese (zh)
Inventor
王婧怡
Current Assignee
Jiangsu University
Original Assignee
Jiangsu University
Priority date
Filing date
Publication date
Application filed by Jiangsu University
Priority to CN202110033487.3A
Publication of CN112784714A
Legal status: Pending

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 10/20 Image preprocessing
          • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
          • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
          • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
          • G06V 40/20 Movements or behaviour, e.g. gesture recognition


Abstract

The invention discloses a real-time detection system and method for the reading attention of a user group in a library shared space. The detection system is arranged in the shared space and comprises a fringe projection amplification unit, an image acquisition unit and a computer processing system. The fringe projection amplification unit projects fringe beams toward the user group in the shared space; the image acquisition unit captures fringe-modulated images of the user group containing the fringe beams and feeds them to the computer processing system. The computer processing system regulates the fringe projection amplification unit and processes the acquired fringe-modulated images to extract the number of motion regions $N$, the average motion amplitude $\bar{F}$ and the average limb-change speed $\bar{S}$; these three parameters together characterize the reading attention of the user group in the shared space.

Description

Real-time detection system and method for reading attention of user group based on library shared space
Technical Field
The invention belongs to the technical field of new-generation information technology, and in particular relates to a system and method for detecting reading attention in real time by recognizing body and limb changes of a user group in a library shared space.
Background
With the development of information technologies such as the internet, new media, big data, artificial intelligence and mobile devices, users' service and resource demands on libraries have changed, and many new attempts at building learning shared spaces in college libraries have emerged. The library learning shared space has become the most important place where users interact with the library. As artificial intelligence spreads across application fields, college library service systems are shifting from the "digital service 1.0" marked by document-information service to an "intelligent service 2.0" that emphasizes mining collection resources. Different scene constructions in the intelligent space affect users' reading effectiveness, and users in different reading states place different demands on the scene. Detecting and identifying users' reading states in the intelligent library space has therefore become an important research topic for improving the efficiency of library services.
The reading attention of users in the library shared space is an important index of the emotion and value orientation between users and the library: whether a library offers efficient, high-quality service and a strong academic atmosphere can be judged from users' emotional responses, and good emotion yields good reading attention and thus a good reading effect. A user's reading attention is closely related to body and limb posture and its changes, that is, to the user's expression. People express themselves mainly in three ways: tone of voice, body language and facial expression, of which facial expression is one of the most informative and convenient channels for communicating emotion, cognition, intention and viewpoint, so a number of face- and eye-based detection technologies have appeared. For example, a learning-state identification method based on outlier detection in data mining allocates limited educational resources to the students who need them most.
One approach mines students' examination results with a density-based local-outlier detection algorithm to find suspicious outlier students and then analyzes their learning states. Another uses a cloud-platform-controlled, fast-zooming high-definition dome camera to recognize the faces of all students in a classroom, continuously scans their class-listening state, uploads the captured image and video data to a cloud server in real time, performs big-data processing on the structured data, outputs statistics with background visualization, and, bound to the students' WeChat or QQ, provides services such as real-time status reminders during class. A further technique combines video surveillance, Java programming, image processing and statistical analysis to detect learning state from the facial state, classifying students into four categories: listening attentively, dazed, sleeping, and playing with a mobile phone. Another proposes an eye-movement analysis algorithm based on RNN-EMA (RNN-Eye Movement Analysis), which predicts students' learning behavior from sequences of eye-movement vectors to detect the current learning state. A learning-fatigue recognition and intervention method based on facial expressions decomposes the user's expression features into different subspaces, recognizes within an expression subspace, and finally recognizes expressions in an intelligent learning space. Student learning-concentration analysis based on micro-expressions of the users' eyes and faces has also been carried out.
A user's reading attention is closely related to emotion, and expression is always accompanied by bodily action: joy makes one want to dance, while sadness droops the head. Researchers call such emotional postural movement "emotional body language"; studies of it show that, when recognizing bodily stimuli versus non-bodily, neutral stimuli, the brain activates the body-stimulation areas similarly to facial stimulation. Embodied-emotion theory places the body at the core of emotional information processing, holding that both emotional experience and emotional processing are inseparable from somatosensory resources. Accordingly, one study proposed a pixel-level mathematical-difference method on video images that identifies changes in body motion during reading to judge the user's basic emotion; another captures the coordinates of the user's posture joints from video images and scores them against a standard posture to measure the current postural emotion. The posture and changes of a user's body can therefore serve as a basis for recognizing the user's emotional state and hence reflect reading attention. From the above analysis, whether the detection targets faces or facial expressions, methods that only handle still pictures are difficult to apply to real-time detection of library users, and the same holds for emotion-detection systems. Moreover, using only the two-dimensional state of the user's body as the judgment basis introduces large errors.
Disclosure of Invention
To address the defects of the prior art, the invention provides a system and method for detecting the reading attention of a user group in a library shared space, which can accurately acquire the user's reading state based on recognition of body and limb changes.
The technical scheme adopted by the invention is as follows:
A real-time detection system for the reading attention of a user group based on a library shared space is disclosed. The detection system is arranged in the shared space and comprises a fringe projection amplification unit, an image acquisition unit and a computer processing system. The fringe projection amplification unit projects fringe beams toward the user group in the shared space; the image acquisition unit acquires fringe-modulated images of the user group containing the fringe beams and inputs them into the computer processing system. The computer processing system regulates the fringe projection amplification unit and processes the acquired fringe-modulated images to extract the number of motion regions $N$, the average motion amplitude $\bar{F}$ and the average limb-change speed $\bar{S}$; these three parameters together characterize the reading attention of the user group in the shared space.
Further, the fringe projection amplification unit comprises a laser light source, a grating generator and a light amplification projector; the three are arranged in sequence from top to bottom above the shared space and lie on the same optical axis.
Further, the laser light source is connected in sequence to the timing controller and the computer processing system by signal lines.
Further, the image acquisition unit comprises a camera located beside the light amplification projector and connected to the computer processing system by a signal line; the computer processing system extracts fringe images frame by frame in synchronization with the timing controller.
Further, the computer processing system processes the fringe-modulated images to obtain the number of motion regions $N$, the average motion amplitude $\bar{F}$ and the average limb-change speed $\bar{S}$, and thresholds for $N$, $\bar{F}$ and $\bar{S}$ are preset in the computer processing system.
A real-time detection method for reading attention of a user group based on a library shared space comprises the following steps:
S1, acquire fringe-modulated images of the user group containing the fringe beams in the shared space with the detection system, sequentially preprocess two adjacent frames, compute the mathematical difference of the two preprocessed frames, and high-pass filter the result to form a preliminary result image.
S2, calculate the area, number, pixel duty ratio and sampling frequency of the fringe regions in the preliminary result image to obtain the spatio-temporal distribution parameters of the users' limb-posture changes, namely the number of motion regions $N$, the average motion amplitude $\bar{F}$ and the average limb-change speed $\bar{S}$.
S3, set judgment thresholds for $N$, $\bar{F}$ and $\bar{S}$ in the computer processing system and compare each parameter with its threshold; the comparison results characterize the reading attention of the user group in the shared space.
Further, the preprocessing comprises monochrome filtering and fringe binarization; the distribution functions of the two binarized images are $H_1(x,y)$ and $H_2(x,y)$. Differencing the two images yields the difference function $\Delta H(x,y)=H_2(x,y)-H_1(x,y)$, where regions with difference value 1 represent limb-change regions and regions with difference value 0 represent unchanged regions.
Further, the number of motion regions $N$ is obtained by taking each connected region of $\Delta H(x,y)$ with value 1 as a contour and counting the contours.
Further, the average motion amplitude is defined as $\bar{F}=\frac{1}{N}\sum_{i=1}^{N}F_i$, where $F_i$ is the change amplitude of the $i$-th limb region and $M_i$ is the pixel count of the $i$-th region, to which $F_i$ is proportional.
Further, the average limb-change speed is $\bar{S}=\frac{1}{N}\sum_{i=1}^{N}S_i$, where $S_i$ is the limb-change speed of the $i$-th motion region.
The invention has the following beneficial effects:
1. Modulation by the grating illumination effectively highlights the three-dimensional changes of the users' bodies and limbs, providing effective and accurate optical information for reading-attention detection.
2. The light amplification projector enlarges the grating fringe image, enabling full-field state detection of the user group in the library shared space.
3. The timing controller enables time-shared fringe illumination, realizing online, real-time, frame-by-frame detection of the reading-attention state of the user group.
4. The fringe images are captured by the camera under timing control, and the illumination is monochromatic, so the fringe images can be extracted accurately without disturbing the users' reading.
5. The computer processing system calculates the area, number, pixel duty ratio and sampling frequency of the fringe regions and, from the relation between limb-posture change and reading-attention state, accurately obtains the three-level reading-attention state of the user group, providing a basis for intelligent control of the library shared space and greatly improving library service.
Drawings
FIG. 1 is a schematic view of a real-time reading attention detection system according to the present invention;
In the figure: 1. laser light source; 2. grating generator; 3. light amplification projector; 4. timing controller; 5. camera; 6. computer processing system.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
As shown in fig. 1, a real-time detection system for the reading attention of a user group in a library shared space is provided; the detection system is disposed in the shared space and comprises a laser light source 1, a grating generator 2, a light amplification projector 3, a timing controller 4, a camera 5 and a computer processing system 6. The laser light source 1, the grating generator 2 and the light amplification projector 3 are arranged in sequence from top to bottom above the shared space, lie on the same optical axis, and together form the fringe projection amplification unit: the collimated beam emitted by the laser light source 1 passes through the grating generator 2 and the light amplification projector 3 and then illuminates the user group for imaging, and the unit projects the amplified fringe beam toward the user group in the shared space. The laser light source 1 is connected in sequence to the timing controller 4 and the computer processing system 6 by signal lines; the fringe beam generated by the laser light source 1 is switched by the timing controller 4, through which the computer processing system 6 controls the laser's continuous light in a time-shared manner. More specifically, the timing controller 4 applies pulse-width control to the illumination passing through the grating generator 2, the width being adjustable according to the required detection frequency.
The camera 5 is located beside the light amplification projector 3 and captures fringe-modulated images of the user group containing the fringe beams in the shared space; the camera 5 is connected to the computer processing system 6 by a signal line, and the computer processing system 6 extracts the fringe images frame by frame in synchronization with the timing controller 4.
The computer processing system 6 controls frame-by-frame extraction of the grating fringe images and processes the resulting fringe-modulated images of the users' reading limb postures: two images from the attention period of interest are taken for mathematical difference, filtering, contour extraction, and calculation of parameters such as region areas and counts, and the reading attention of the user group is then recognized and classified according to the relation between reading attention and limb posture, thereby achieving the aim of the technology.
The application also provides a real-time reading attention detection method based on the library shared space user group, which comprises the following steps:
S1, the computer processing system sequentially applies monochrome filtering and fringe binarization to two adjacent frames captured by the camera 5, computes the mathematical difference of the two processed fringe-modulated images, and high-pass filters the result to form a preliminary result image. Taking two images as an example, the operation is as follows:
The monochrome filtering sets to 0 every pixel of the color image whose wavelength is not $\lambda_0$, where $\lambda_0$ is the laser wavelength. Conventional binarization is then performed, and the distribution functions of the two processed images are $H_1(x,y)$ and $H_2(x,y)$. Differencing the two images gives $\Delta H(x,y)=H_2(x,y)-H_1(x,y)$, where regions with difference value 1 represent limb-change regions and regions with difference value 0 represent unchanged regions.
S2, the computer processing system calculates the area, number, pixel duty ratio and sampling frequency of the fringe regions in the preliminary result image to obtain the main spatio-temporal parameters of the users' limb-posture changes. The operations are as follows:
for Δ H(x,y)The function value is 1, the continuous area is taken as the contour line, and the continuous area is counted to obtain the number N, the parameter is the number of the parts which represent the change of the limbs, and is called the movement area, and the number reflects the degree of the agitation of the user group.
Let $F_i$ denote the change amplitude of the $i$-th limb region. Since $F_i$ is proportional to $M_i$, the number of pixels with value 1 in the $i$-th motion region, the average motion amplitude is defined as
$$\bar{F}=\frac{1}{N}\sum_{i=1}^{N}F_i,$$
where $F_i$ is the change amplitude of the $i$-th limb region and $M_i$ its pixel count.
Let $S_i$ be the limb-change speed of the $i$-th motion region; then $S_i=F_i/\Delta t_i$, where $\Delta t_i$ is the frame sampling interval, from which the average limb-change speed is
$$\bar{S}=\frac{1}{N}\sum_{i=1}^{N}S_i.$$
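Given the per-region pixel counts $M_i$ and the frame interval, the two averages follow directly; the calibration constant `k` relating $F_i$ to $M_i$ and the uniform frame interval are assumptions, as the text only states the proportionality:

```python
import numpy as np

def motion_statistics(region_pixel_counts, frame_dt, k=1.0):
    """Compute (F_bar, S_bar) for one frame pair.
    F_i = k * M_i   (F_i is proportional to the region's pixel count)
    S_i = F_i / dt  (limb-change speed over the frame interval)
    Both are averaged over the N motion regions."""
    m = np.asarray(region_pixel_counts, dtype=float)
    f = k * m
    s = f / frame_dt
    return f.mean(), s.mean()
```

For example, two regions of 10 and 30 changed pixels at a 0.5 s frame interval give $\bar{F}=20$ and $\bar{S}=40$ (in the assumed pixel units).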
S3, judgment thresholds for the number of motion regions $N$, the average motion amplitude $\bar{F}$ and the average limb-change speed $\bar{S}$ are set in the computer processing system, and each parameter is compared with its thresholds; the comparison results characterize the reading attention of the user group in the shared space. Specifically:
Set a minimum threshold $N_1$ and a maximum threshold $N_2$ for $N$, with $N_1<N_2$. When $N<N_1$, the motion regions change little and attention is high; when $N_1<N<N_2$, the change is moderate and attention is general; when $N>N_2$, the change is large and attention is low.
Set a minimum threshold $F_1$ and a maximum threshold $F_2$ for $\bar{F}$, with $F_1<F_2$. When $\bar{F}<F_1$, the motion amplitude is small and attention is high; when $F_1<\bar{F}<F_2$, the amplitude is moderate and attention is general; when $\bar{F}>F_2$, the amplitude is large and attention is low.
Set a minimum threshold $S_1$ and a maximum threshold $S_2$ for $\bar{S}$, with $S_1<S_2$. When $\bar{S}<S_1$, the limb change is small and attention is high; when $S_1<\bar{S}<S_2$, the change is moderate and attention is general; when $\bar{S}>S_2$, the change is large and attention is low.
According to the relation between reading attention and the graded limb changes of the users (see the table below), the thresholds on the parameters $(N,\bar{F},\bar{S})$ thus mark negative, neutral and positive states, indicating poor, medium and good reading attention respectively; classification against these thresholds yields the final reading-attention result for the user group.
Relationship between reading state and reading attention based on fringe-image analysis

Limb change   Parameters                                            State      Reading attention
small         $N<N_1$, $\bar{F}<F_1$, $\bar{S}<S_1$                 positive   good (high)
moderate      $N_1<N<N_2$, $F_1<\bar{F}<F_2$, $S_1<\bar{S}<S_2$     neutral    medium (general)
large         $N>N_2$, $\bar{F}>F_2$, $\bar{S}>S_2$                 negative   poor (low)
The above embodiments are only used for illustrating the design idea and features of the present invention, and the purpose of the present invention is to enable those skilled in the art to understand the content of the present invention and implement the present invention accordingly, and the protection scope of the present invention is not limited to the above embodiments. Therefore, all equivalent changes and modifications made in accordance with the principles and concepts disclosed herein are intended to be included within the scope of the present invention.

Claims (10)

1. A real-time detection system for the reading attention of a user group based on a library shared space, characterized in that the detection system is arranged in the shared space and comprises a fringe projection amplification unit, an image acquisition unit and a computer processing system (6); the fringe projection amplification unit projects fringe beams toward the user group in the shared space; the image acquisition unit acquires fringe-modulated images of the user group containing the fringe beams in the shared space and inputs them into the computer processing system (6); the computer processing system (6) regulates the fringe projection amplification unit and processes the acquired fringe-modulated images to extract the number of motion regions $N$, the average motion amplitude $\bar{F}$ and the average limb-change speed $\bar{S}$, through which the reading attention of the user group in the shared space is characterized.
2. The real-time detection system for the reading attention of a user group based on a library shared space according to claim 1, characterized in that the fringe projection amplification unit comprises a laser light source (1), a grating generator (2) and a light amplification projector (3); the laser light source (1), the grating generator (2) and the light amplification projector (3) are arranged in sequence from top to bottom above the shared space and lie on the same optical axis.
3. The real-time detection system for the reading attention of a user group based on a library shared space according to claim 2, characterized in that the laser light source (1) is connected in sequence to the timing controller (4) and the computer processing system (6) by signal lines.
4. The real-time detection system for the reading attention of a user group based on a library shared space according to claim 2, characterized in that the image acquisition unit comprises a camera (5), the camera (5) is located beside the light amplification projector (3) and is connected to the computer processing system (6) by a signal line; the computer processing system (6) extracts fringe images frame by frame in synchronization with the timing controller (4).
5. The real-time detection system for the reading attention of a user group based on a library shared space according to any one of claims 1-4, characterized in that the computer processing system (6) processes the fringe-modulated images to obtain the number of motion regions $N$, the average motion amplitude $\bar{F}$ and the average limb-change speed $\bar{S}$, and thresholds for $N$, $\bar{F}$ and $\bar{S}$ are preset in the computer processing system (6).
6. A method for detecting reading attention of a user group based on a library shared space in real time is characterized by comprising the following steps:
S1, using the detection system, collect stripe-modulated images of the user group in the shared space containing the projected stripe beams; preprocess two adjacent frames in turn, compute the mathematical difference of the two preprocessed frames, and high-pass filter the result to form a preliminary result image;
S2, calculate the area, number, pixel duty cycle and sampling frequency of the stripe regions in the preliminary result image to obtain the spatio-temporal distribution parameters of the users' limb-posture changes, namely the number N of motion regions, the average motion amplitude F̄ and the average limb-change speed S̄;
S3, preset in the computer processing system the judgment thresholds for the number N of motion regions, the average motion amplitude F̄ and the average limb-change speed S̄, and compare the measured N, F̄ and S̄ with their respective thresholds; the comparison results characterize the reading attention of the user group in the shared space.
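Steps S1 and S3 above can be sketched in a few lines. This is a minimal illustration assuming grayscale numpy frames; the function names, the 3x3 box-blur high-pass filter, and the threshold values are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def preprocess(frame):
    """Normalize a fringe-modulated frame to the range [0, 1]."""
    f = np.asarray(frame, dtype=np.float64)
    rng = f.max() - f.min()
    return (f - f.min()) / rng if rng > 0 else np.zeros_like(f)

def high_pass(img):
    """Crude high-pass filter: subtract a 3x3 box blur from the image."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    blur = sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return img - blur

def preliminary_result(frame_a, frame_b):
    """S1: difference of two preprocessed adjacent frames, then high-pass."""
    return high_pass(preprocess(frame_b) - preprocess(frame_a))

def attention_ok(n_regions, f_bar, s_bar, n_max=5, f_max=0.2, s_max=0.1):
    """S3: judge the group attentive when every motion parameter stays
    below its preset threshold (the threshold values are placeholders)."""
    return n_regions <= n_max and f_bar <= f_max and s_bar <= s_max
```

A static scene yields a near-zero preliminary image, so all three parameters stay below their thresholds and the group is judged attentive.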
7. The method as claimed in claim 6, wherein the preprocessing comprises monochromatic filtering and stripe binarization; the distribution functions of the two binarized images are H_1(x, y) and H_2(x, y), and their difference gives ΔH(x, y) = H_2(x, y) − H_1(x, y); regions where the difference is 1 represent limb-change regions, and regions where it is 0 represent unchanged regions.
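The binarization and difference operation of claim 7 can be sketched as follows, assuming grayscale numpy images; the fixed binarization threshold of 0.5 is an assumption.

```python
import numpy as np

def binarize(img, threshold=0.5):
    """Binarized stripe distribution function H(x, y) with values in {0, 1}."""
    return (np.asarray(img) >= threshold).astype(np.int8)

def difference_map(img1, img2, threshold=0.5):
    """dH(x, y) = H2(x, y) - H1(x, y); entries equal to 1 mark
    limb-change pixels, entries equal to 0 mark unchanged pixels."""
    return binarize(img2, threshold) - binarize(img1, threshold)
```

Note that the signed difference can also take the value −1 where a stripe pixel disappears between frames; the claim only distinguishes the value-1 change regions from the value-0 unchanged regions.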
8. The method for detecting the reading attention of the user group in the shared space of the library in real time according to claim 7, wherein the number N of motion regions is obtained by taking each connected region where ΔH(x, y) = 1 as a contour and counting these connected regions to obtain N.
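Counting the connected regions of claim 8 can be sketched with a breadth-first-search labeling; this pure-numpy version with 4-connectivity is an illustrative stand-in for a library connected-components routine.

```python
import numpy as np
from collections import deque

def count_motion_regions(dh):
    """Number N of 4-connected regions whose dH value is 1."""
    mask = (np.asarray(dh) == 1)
    seen = np.zeros(mask.shape, dtype=bool)
    h, w = mask.shape
    n = 0
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                n += 1                      # new connected region found
                queue = deque([(y, x)])
                seen[y, x] = True
                while queue:
                    cy, cx = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            queue.append((ny, nx))
    return n
```

In practice a vectorized routine such as `scipy.ndimage.label` would serve the same purpose.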
9. The method as claimed in claim 8, wherein the average motion amplitude is defined as F̄ = (1/N) Σ_{i=1}^{N} F_i / M_i, where F_i denotes the variation amplitude of the i-th limb region and M_i its pixel count.
10. The method as claimed in claim 8, wherein the average limb-change speed is S̄ = (1/N) Σ_{i=1}^{N} S_i, where S_i is the limb-change speed of the i-th motion region.
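The two averages of claims 9 and 10 can be sketched directly. The exact formulas were lost in extraction, so the forms below are assumptions: F̄ averages each region's amplitude F_i normalized by its pixel count M_i, and S̄ averages the per-region speeds S_i.

```python
import numpy as np

def average_motion_amplitude(F, M):
    """F_bar = (1/N) * sum_i (F_i / M_i)  (assumed form)."""
    F = np.asarray(F, dtype=float)
    M = np.asarray(M, dtype=float)
    return float(np.mean(F / M))

def average_change_speed(S):
    """S_bar = (1/N) * sum_i S_i."""
    return float(np.mean(np.asarray(S, dtype=float)))
```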
CN202110033487.3A 2021-01-11 2021-01-11 Real-time detection system and method for reading attention of user group based on library shared space Pending CN112784714A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110033487.3A CN112784714A (en) 2021-01-11 2021-01-11 Real-time detection system and method for reading attention of user group based on library shared space


Publications (1)

Publication Number Publication Date
CN112784714A true CN112784714A (en) 2021-05-11

Family

ID=75756592


Country Status (1)

Country Link
CN (1) CN112784714A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102799263A (en) * 2012-06-19 2012-11-28 深圳大学 Posture recognition method and posture recognition control system
CN103034846A (en) * 2012-12-12 2013-04-10 紫光股份有限公司 Face indentification device for live person
CN111640084A (en) * 2020-03-04 2020-09-08 湖北大学 High-speed pixel matching method based on LK optical flow
CN111986530A (en) * 2019-05-23 2020-11-24 深圳市希科普股份有限公司 Interactive learning system based on learning state detection
CN112097686A (en) * 2020-08-10 2020-12-18 安徽农业大学 Camouflage object detection method based on binary fringe projection


Similar Documents

Publication Publication Date Title
CN110889672B (en) Student card punching and class taking state detection system based on deep learning
Zhang et al. A high-resolution spontaneous 3d dynamic facial expression database
Essa et al. Modeling, tracking and interactive animation of faces and heads using input from video
Wei et al. Real-time facial expression recognition for affective computing based on Kinect
CN106951870A (en) The notable event intelligent detecting prewarning method of monitor video that active vision notes
CN112597967A (en) Emotion recognition method and device for immersive virtual environment and multi-modal physiological signals
CN109528217A (en) A kind of mood detection and method for early warning based on physiological vibrations analysis
CN116825365B (en) Mental health analysis method based on multi-angle micro-expression
CN113486700A (en) Facial expression analysis method based on attention mechanism in teaching scene
CN112883867A (en) Student online learning evaluation method and system based on image emotion analysis
CN113064490B (en) Eye movement track-based virtual enhancement equipment identification method
CN113974627B (en) Emotion recognition method based on brain-computer generated confrontation
Murugappan et al. Virtual markers based facial emotion recognition using ELM and PNN classifiers
CN113239794A (en) Online learning oriented learning state automatic identification method
CN112784714A (en) Real-time detection system and method for reading attention of user group based on library shared space
Afriansyah et al. Facial expression classification for user experience testing using K-nearest neighbor
Cao et al. Facial Expression Study Based on 3D Facial Emotion Recognition
CN114067398A (en) Autistic child communication disorder assisting method based on facial expression recognition
Olawale et al. Individual Eye Gaze Prediction with the Effect of Image Enhancement Using Deep Neural Networks
CN114779925A (en) Sight line interaction method and device based on single target
Farrell et al. Real time detection and analysis of facial features to measure student engagement with learning objects
Kowalik et al. Broaference–a next generation multimedia terminal providing direct feedback on audience’s satisfaction level
CN109948445B (en) Action classification method and classification system under complex background
Nikolova et al. Artificial humans: An overview of photorealistic synthetic datasets and possible applications
Fischer Automatic facial expression analysis and emotional classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination