CN112084814B - Learning assisting method and intelligent device - Google Patents


Info

Publication number
CN112084814B
Authority
CN
China
Prior art keywords
learning
type
scene
emotion
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910508176.0A
Other languages
Chinese (zh)
Other versions
CN112084814A (en)
Inventor
杨昊民 (Yang Haomin)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority claimed from CN201910508176.0A
Publication of CN112084814A
Application granted
Publication of CN112084814B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Marketing (AREA)
  • Primary Health Care (AREA)
  • General Business, Economics & Management (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a learning assistance method and an intelligent device. The intelligent device comprises a camera, a stand and a light-emitting element; the stand has a base, and a sensor array is arranged at the base. The method comprises the following steps: acquiring point cloud data detected by the sensor array; triggering the camera to capture image data when the point cloud data is judged to meet a preset condition; analyzing the image data to obtain a scene type, a topic type and an emotion type; and searching for corresponding learning recommendation content according to an acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user. The invention recommends suitable learning content to the user, so that the user completes the corresponding learning in a suitable environment and with a suitable learning emotion, thereby improving learning efficiency and achieving a better learning effect.

Description

Learning assisting method and intelligent device
Technical Field
The invention relates to the technical field of intelligent teaching, and in particular to a learning assistance method and an intelligent device.
Background
With the development of computer networks, the education field has increasingly moved towards online and remote teaching, shifting instruction from offline classrooms onto the network.
However, conventional schemes for recommending learning content to a user usually recommend content according to the user's learning results alone. In daily study, different learning scenes and learning moods clearly affect each person's learning efficiency. For example, when the user is in a bad mood, learning content that demands intense concentration, such as complex mathematics or physics, may suffer; and a quiet library is unsuitable for practicing spoken language. Existing learning content recommendation methods adapt neither to the environment nor to the user's emotion.
Disclosure of Invention
The invention aims to provide a learning assistance method and an intelligent device that recommend suitable learning content to a user, so that the user completes the corresponding learning in a suitable environment and with a suitable learning emotion, thereby improving learning efficiency and achieving a better learning effect.
The technical solution provided by the invention is as follows:
The invention provides a learning assistance method applied to an intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base at which a sensor array is arranged. The method comprises the following steps:
acquiring point cloud data detected by the sensor array;
triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
analyzing the image data to obtain a scene type, a topic type and an emotion type;
searching for corresponding learning recommendation content according to an acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Further, the sensor array comprises a pressure sensor array, and acquiring the point cloud data detected by the sensor array specifically comprises the step of:
acquiring pressure point cloud data detected by the pressure sensor array;
and triggering the camera to capture image data when the point cloud data is judged to meet the preset condition specifically comprises the steps of:
calculating the contact area between the user's hand and the base according to the pressure point cloud data;
judging whether the contact area remains unchanged within a preset time interval;
and triggering the camera to capture the image data when the contact area remains unchanged within the preset time interval.
Further, the sensor array comprises a thermosensitive infrared sensor array, and acquiring the point cloud data detected by the sensor array specifically comprises the step of:
acquiring temperature point cloud data detected by the thermosensitive infrared sensor array;
and triggering the camera to capture image data when the point cloud data is judged to meet the preset condition specifically comprises the steps of:
calculating the contact area between the user's hand and the base according to the temperature point cloud data;
judging whether the contact area remains unchanged within a preset time interval;
and triggering the camera to capture the image data when the contact area remains unchanged within the preset time interval.
Further, analyzing the image data to obtain the scene type, the topic type and the emotion type specifically comprises the steps of:
performing image processing on the image data, and extracting scene features, the topic type and the user's facial features;
inputting the scene features into a scene classification model to obtain the corresponding scene type;
and inputting the facial features into an emotion classification model to obtain the corresponding emotion type.
Further, searching for corresponding learning recommendation content according to the acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user specifically comprises the steps of:
acquiring a learning content set and the user's historical learning score list; the learning content set comprises several learning contents and the preset scene type corresponding to each learning content, and the historical learning score list comprises the learning score corresponding to each topic type;
judging whether the topic type matches the emotion type;
when the topic type does not match the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set whose learning scores match the emotion type and which matches the scene type, and acquiring learning recommendation content matching the emotion type from the candidate learning content set;
when the topic type matches the emotion type, judging whether the topic type matches the scene type;
and searching for learning recommendation content matching the scene type when the topic type does not match the scene type.
The invention also provides an intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base at which a sensor array is arranged, and further comprising an acquisition module, a control module, an analysis module and a processing module;
the acquisition module is used for acquiring the point cloud data detected by the sensor array;
the control module is connected with the acquisition module and the camera and is used for triggering the camera to capture image data when the point cloud data is judged to meet the preset condition;
the analysis module is connected with the control module and is used for analyzing the image data to obtain the scene type, the topic type and the emotion type;
the processing module is connected with the analysis module and is used for searching for corresponding learning recommendation content according to the acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Further, the sensor array comprises a pressure sensor array; the acquisition module comprises a pressure point cloud acquisition unit; and the control module comprises a first calculation unit, a first judgment unit and a first control unit;
the pressure point cloud acquisition unit is used for acquiring the pressure point cloud data detected by the pressure sensor array;
the first calculation unit is connected with the pressure point cloud acquisition unit and is used for calculating the contact area between the user's hand and the base according to the pressure point cloud data;
the first judgment unit is connected with the first calculation unit and is used for judging whether the contact area remains unchanged within a preset time interval;
the first control unit is connected with the first judgment unit and is used for triggering the camera to capture the image data when the contact area remains unchanged within the preset time interval.
Further, the sensor array comprises a thermosensitive infrared sensor array; the acquisition module comprises a temperature point cloud acquisition unit; and the control module comprises a second calculation unit, a second judgment unit and a second control unit;
the temperature point cloud acquisition unit is used for acquiring the temperature point cloud data detected by the thermosensitive infrared sensor array;
the second calculation unit is connected with the temperature point cloud acquisition unit and is used for calculating the contact area between the user's hand and the base according to the temperature point cloud data;
the second judgment unit is connected with the second calculation unit and is used for judging whether the contact area remains unchanged within a preset time interval;
the second control unit is connected with the second judgment unit and is used for triggering the camera to capture the image data when the contact area remains unchanged within the preset time interval.
Further, the analysis module comprises an extraction unit, a scene type classification unit and an emotion type classification unit;
the extraction unit is used for performing image processing on the image data and extracting scene features, the topic type and the user's facial features;
the scene type classification unit is connected with the extraction unit and is used for inputting the scene features into a scene classification model to obtain the corresponding scene type;
the emotion type classification unit is connected with the extraction unit and is used for inputting the facial features into an emotion classification model to obtain the corresponding emotion type.
Further, the acquisition module is also used for acquiring a learning content set and the user's historical learning score list; the learning content set comprises several learning contents and the preset scene type corresponding to each learning content, and the historical learning score list comprises the learning score corresponding to each topic type;
the processing module is connected with the acquisition module and is also used for judging whether the topic type matches the emotion type; when the topic type does not match the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set whose learning scores match the emotion type and which matches the scene type, and acquiring learning recommendation content matching the emotion type from the candidate learning content set; when the topic type matches the emotion type, judging whether the topic type matches the scene type; and searching for learning recommendation content matching the scene type when the topic type does not match the scene type.
Through the learning assistance method and intelligent device of the invention, suitable learning recommendation content can be recommended to the user, so that the user completes the corresponding learning in a suitable environment and with a suitable learning emotion, improving learning efficiency and achieving a better learning effect.
Drawings
The above features, advantages and implementations of the learning assistance method and intelligent device are further described below in a clear and readily understandable way, through preferred embodiments and with reference to the accompanying drawings.
FIG. 1 is a flow chart of one embodiment of a learning support method of the present invention;
FIG. 2 is a flow chart of another embodiment of a learning support method of the present invention;
FIG. 3 is a flow chart of another embodiment of a learning support method of the present invention;
FIG. 4 is a flow chart of another embodiment of a learning support method of the present invention;
FIG. 5 is a flow chart of another embodiment of a learning support method of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of an intelligent device according to the present invention.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the following describes specific embodiments of the present invention with reference to the accompanying drawings. It is evident that the drawings in the following description are only examples of the invention, from which a person skilled in the art can obtain other drawings and other embodiments without inventive effort.
For simplicity, the figures schematically show only the parts relevant to the present invention and do not represent the actual structure of a product. In addition, to keep the drawings simple and easy to understand, only one of several components with the same structure or function is drawn or labelled in some figures. Herein, "a" covers not only "exactly one" but also "more than one".
In one embodiment of the present invention, as shown in FIG. 1, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base at which a sensor array is arranged. The method comprises the following steps:
S100, acquiring point cloud data detected by the sensor array;
S200, triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
S300, analyzing the image data to obtain a scene type, a topic type and an emotion type;
S400, searching for corresponding learning recommendation content according to an acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, the intelligent device is a device such as an intelligent desk lamp or an intelligent desk that is provided with a camera, a stand, a light-emitting element and a base, with a sensor array arranged at the base. The sensor array is a combination of several sensor elements, usually laid out in a special geometric pattern, so that the detection data of every sensor element can be collected comprehensively; this large volume of detection data forms the point cloud data. The writing object includes a book, the writable area of a tablet, and the like.
After obtaining the point cloud data, the intelligent device analyzes it and judges whether it meets a preset condition. For example, the preset condition may be that the corresponding detection data in the point cloud exceed a default value: once a detection datum exceeds the default value, the point cloud data is judged to meet the condition. Alternatively, the preset condition may be that the corresponding detection data in the point cloud change: once the detection data change, the point cloud data is judged to meet the condition. When the intelligent device judges that the point cloud data meets the preset condition, it sends an instruction to its camera; the camera starts working according to the instruction, adjusting its lens towards the user, the writing object and the user's environment to acquire image data covering the written content, the user's face and the environment. Preferably, to reduce the intelligent device's workload, first image data containing the written content, second image data containing the user's face, and third image data containing the environment are captured separately.
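As an illustration of the two example preset conditions just described, the following minimal sketch checks a point cloud reading against a default value and against its previous reading. The function names, the NumPy representation, the default value and the tolerance are assumptions for illustration, not part of the patent.

```python
import numpy as np

DEFAULT_VALUE = 0.0  # assumed default reading of an idle sensor element

def exceeds_default(point_cloud: np.ndarray) -> bool:
    """Example preset condition 1: some detection datum exceeds the default value."""
    return bool(np.any(point_cloud > DEFAULT_VALUE))

def has_changed(point_cloud: np.ndarray, previous: np.ndarray) -> bool:
    """Example preset condition 2: the detection data changed since the last reading."""
    return not np.allclose(point_cloud, previous, atol=1e-6)

# The capture instruction would be issued when the configured condition holds, e.g.:
# if exceeds_default(cloud) or has_changed(cloud, previous_cloud): trigger_camera()
```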
The intelligent device performs image processing on the image data and recognizes the information it contains to obtain the scene type, the topic type and the emotion type. The intelligent device then searches locally or on the network for learning recommendation content corresponding to those types and recommends the found content to the user, so that the user learns from it.
Preferably, the camera in the intelligent device comprises a lens, a main control chip and an artificial-intelligence chip, with the main control chip connected to the lens and the AI chip. The lens can rotate within a preset angle range, for example 120 degrees. The main control chip recognizes characterization information, that is, directional information characterizing object features such as faces, objects (a bookshelf in a library, a blackboard and desks and chairs in a classroom, a bed in a bedroom, and so on) and writing objects, and adjusts the lens angle accordingly before shooting to obtain the image data. Controlling the camera to capture image data intelligently in this way reduces interference from external factors, relieves much of the computing pressure on the back end of the intelligent device, and improves the processing efficiency of the whole flow.
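One plausible way the main control chip could turn recognized characterization information into a lens adjustment is sketched below using OpenCV face detection. The 120-degree range comes from the text; the field of view, the cascade detector and all names are illustrative assumptions rather than the patent's method.

```python
import cv2

PAN_RANGE_DEG = 120  # preset rotation range mentioned in the text

def pan_angle_towards_face(frame) -> float | None:
    """Detect a face (one kind of characterization information) and return the
    pan angle, in degrees, that would centre the lens on it; None if no face
    is found. The horizontal field of view is an assumed camera parameter."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    frame_w = frame.shape[1]
    horizontal_fov = 60.0                          # assumed lens field of view
    offset = (x + w / 2 - frame_w / 2) / frame_w   # -0.5 .. 0.5 of the frame
    angle = offset * horizontal_fov
    return max(-PAN_RANGE_DEG / 2, min(PAN_RANGE_DEG / 2, angle))
```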
Through this embodiment, the intelligent device can intelligently recommend to the user learning recommendation content matching the emotion type and the scene type according to the historical learning score list. This not only achieves intelligent tutoring but also recommends suitable learning content, ensuring that the user learns the corresponding knowledge point content in a suitable environment and with a suitable emotion, improving learning efficiency and achieving a better learning effect.
In one embodiment of the present invention, as shown in FIG. 2, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base, and the sensor array comprising a pressure sensor array. The method comprises the following steps:
S110, acquiring pressure point cloud data detected by the pressure sensor array;
S210, calculating the contact area between the user's hand and the base according to the pressure point cloud data;
S220, triggering the camera to capture image data when the contact area remains unchanged within a preset time interval;
S300, analyzing the image data to obtain a scene type, a topic type and an emotion type;
S400, searching for corresponding learning recommendation content according to an acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, when a user presses the base, every pressure sensor in the array arranged at the base is affected by the pressing force, so each sensor obtains its own detected pressure value; the many values detected by the sensors at the same moment form the pressure point cloud data. The intelligent device then computes the contact area between the user's hand and the base from the pressure point cloud data and judges whether that area remains unchanged within a preset time interval. If it does, the user intends to study under the intelligent device, so the device sends an instruction to its camera; the camera starts working according to the instruction, adjusting its lens towards the user's face, the writing object and the environment to acquire the corresponding image data.
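A minimal sketch of the contact-area computation and the stability check, assuming the pressure sensors form a regular grid; the cell area, the pressure threshold and the tolerance are illustrative assumptions.

```python
import numpy as np

CELL_AREA_MM2 = 4.0        # assumed area covered by one sensor element (2 mm pitch)
PRESSURE_THRESHOLD = 0.5   # assumed reading above which a cell counts as pressed

def contact_area_mm2(pressure_grid: np.ndarray) -> float:
    """Approximate the hand-to-base contact area as the summed area of all
    sensor cells whose detected pressure exceeds the threshold."""
    return float(np.count_nonzero(pressure_grid > PRESSURE_THRESHOLD)) * CELL_AREA_MM2

def area_stable(area_samples: list[float], tolerance_mm2: float = 2.0) -> bool:
    """True when the contact area stayed essentially unchanged over the
    samples collected during the preset time interval."""
    return max(area_samples) - min(area_samples) <= tolerance_mm2
```

When `area_stable` holds for the samples gathered over the preset interval, the device would issue the capture instruction; the same computation applies, with temperature readings, to the thermosensitive infrared embodiment that follows.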
Through this embodiment, the camera can be triggered automatically and does not need to stay in a shooting state constantly, which reduces power consumption and saves resources.
In one embodiment of the present invention, as shown in FIG. 3, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base, and the sensor array comprising a thermosensitive infrared sensor array. The method comprises the following steps:
S120, acquiring temperature point cloud data detected by the thermosensitive infrared sensor array;
S230, calculating the contact area between the user's hand and the base according to the temperature point cloud data;
S240, triggering the camera to capture image data when the contact area remains unchanged within a preset time interval;
S300, analyzing the image data to obtain a scene type, a topic type and an emotion type;
S400, searching for corresponding learning recommendation content according to an acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, a thermosensitive infrared sensor measures using the physical properties of infrared radiation: any substance above absolute zero radiates infrared. The sensor contains a thermistor whose temperature rises when irradiated by infrared rays; its resistance changes accordingly, and a conversion circuit turns the change into an electrical signal output, from which the corresponding temperature value is detected.
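The patent does not specify the conversion circuit, but resistance-to-temperature conversion for a thermistor is commonly done with the beta-parameter model, 1/T = 1/T0 + (1/B) ln(R/R0). The sketch below assumes a generic 10 kOhm, B = 3950 part purely for illustration.

```python
import math

def thermistor_temperature_c(resistance_ohm: float,
                             r0_ohm: float = 10_000.0,  # assumed resistance at 25 C
                             t0_k: float = 298.15,      # 25 C in kelvin
                             beta: float = 3950.0) -> float:
    """Convert a thermistor resistance reading to a temperature in degrees C
    using the beta-parameter model: 1/T = 1/T0 + (1/B) * ln(R/R0)."""
    inv_t = 1.0 / t0_k + math.log(resistance_ohm / r0_ohm) / beta
    return 1.0 / inv_t - 273.15
```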
When a user presses the base, every thermosensitive infrared sensor in the array arranged at the base is affected by the user's press, so each sensor obtains its own detected temperature value; the many values detected at the same moment form the temperature point cloud data. The intelligent device then computes the contact area between the user's hand and the base from the temperature point cloud data and judges whether that area remains unchanged within a preset time interval. If it does, the user intends to study under the intelligent device, so the device sends an instruction to its camera; the camera starts working according to the instruction, adjusting its lens towards the user's face, the writing object and the environment to acquire the corresponding image data.
Through this embodiment, the camera can be triggered automatically and does not need to stay in a shooting state constantly, which reduces power consumption and saves resources.
Preferably, on the basis of the above embodiments, after calculating the contact area between the user's hand and the base and before judging whether the contact area remains unchanged within the preset time interval, the method further comprises the steps of:
comparing the contact area with preset area ranges to match the user's identity type;
when the identity type is teenager, obtaining the current time; if the current time falls within a preset learning period, the intelligent device prohibits displaying entertainment content to the teenager and displays only learning content;
when the identity type is adult, displaying entertainment content or learning content to the user.
Illustratively, when the user is a child who presses the pressure sensor array with a finger (or palm), a child finger area range (or child palm area range) is predefined by measuring a large number of children's finger (or palm) areas and taking their average. When a child's finger (or palm) presses the base, the pressure sensor array at the base measures the pressure applied at each position, so the finger (or palm) area can be computed from the pressure point cloud data; when the finger (or palm) leaves the base, the values detected by the array return to the default. If the measured finger (or palm) area falls within the child finger area range (or child palm area range), the user can be roughly judged to be a child, and the child's usage rights on the intelligent device can be restricted accordingly. With the identity type confirmed as teenager, the teenager is made to study effectively during the learning period, improving the teenager's learning results.
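A hedged sketch of the identity matching and content gating just described; the area ranges, the learning period and all names are illustrative assumptions (the patent only says the ranges come from averaged measurements).

```python
from datetime import datetime

# Assumed, illustrative area ranges in square millimetres
CHILD_FINGER_RANGE = (30.0, 80.0)
CHILD_PALM_RANGE = (1500.0, 4000.0)
LEARNING_HOURS = range(18, 21)  # assumed preset learning period, 18:00-21:00

def identity_type(contact_area_mm2: float) -> str:
    """Match the measured contact area against the predefined area ranges."""
    lo_f, hi_f = CHILD_FINGER_RANGE
    lo_p, hi_p = CHILD_PALM_RANGE
    if lo_f <= contact_area_mm2 <= hi_f or lo_p <= contact_area_mm2 <= hi_p:
        return "teenager"
    return "adult"

def allowed_content(identity: str, now: datetime) -> set[str]:
    """Teenagers see only learning content inside the learning period;
    adults may see both entertainment and learning content."""
    if identity == "teenager" and now.hour in LEARNING_HOURS:
        return {"learning"}
    return {"learning", "entertainment"}
```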
In one embodiment of the present invention, as shown in FIG. 4, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base at which a sensor array is arranged. The method comprises the following steps:
S100, acquiring point cloud data detected by the sensor array;
S200, triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
S310, performing image processing on the image data, and extracting scene features, the topic type and the user's facial features;
S320, inputting the scene features into a scene classification model to obtain the corresponding scene type;
S330, inputting the facial features into an emotion classification model to obtain the corresponding emotion type;
S400, searching for corresponding learning recommendation content according to an acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, each time the camera captures an item of image data it sends the data to the processor; upon receipt, the processor grey-scales the image and equalizes the grey-level histogram, reducing the information content of the image to increase detection speed.
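This preprocessing step maps directly onto two standard OpenCV calls; a minimal sketch, with the function name as an assumption:

```python
import cv2

def preprocess(image_bgr):
    """Grey-scale the captured frame and equalize its grey-level histogram,
    reducing the information content so later detection steps run faster."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)
```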
When a single item of image data contains the written content, the user's face and the user's environment together, a recognition algorithm extracts the facial image, topic image and scene image from the processed (grey-scaled and histogram-equalized) image data. Recognition algorithms suitable for this extraction include geometric-feature-based recognition, local feature analysis, neural network extraction and the like. After the facial image is extracted, the user's facial features are identified from it and input into the emotion classification model to obtain the corresponding emotion type. The facial features comprise feature points of the eyes (including the eyeballs), eyebrows, nose, mouth and outer face contour. The topic type, either a difficult-topic type or a simple-topic type, is identified from the extracted topic image: the difficult-topic type covers subjects, question forms and knowledge points the user is not good at, and the simple-topic type covers those the user is good at. Scene features (desks and chairs, blackboards, beds, bookshelves and the like) are identified from the extracted scene image and input into the scene classification model to obtain the corresponding scene type (classroom, study room, library, playground and the like). Likewise, when any item of image data contains several kinds of content, for example both the user's face and the environment, the recognition algorithm extracts the facial image and the scene image from the processed image data.
When first image data contains the written content, second image data contains the user's face and third image data contains the environment, the same processing applies to each image: the user's facial features are identified from the extracted facial image and input into the emotion classification model to obtain the corresponding emotion type; the topic type (difficult or simple, as above) is identified from the extracted topic image; and scene features are identified from the extracted scene image and input into the scene classification model to obtain the corresponding scene type.
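Putting step S300 together, a hedged sketch of the analysis flow. The feature extractors below are stand-ins for the geometric-feature, local-feature-analysis or neural-network methods named above, and every function name is an assumption.

```python
import cv2
import numpy as np

def facial_feature_points(face_img: np.ndarray) -> np.ndarray:
    """Stand-in extractor: feature points of the eyes (incl. eyeballs),
    eyebrows, nose, mouth and face contour would be located here (e.g. by a
    landmark model); here reduced to a flat vector for the classifier."""
    resized = cv2.resize(face_img, (32, 32))
    return resized.astype(np.float32).ravel() / 255.0

def scene_feature_vector(scene_img: np.ndarray) -> np.ndarray:
    """Stand-in extractor for scene features (desks, blackboards, beds, ...)."""
    resized = cv2.resize(scene_img, (32, 32))
    return resized.astype(np.float32).ravel() / 255.0

def analyze(face_img, topic_img, scene_img,
            emotion_model, scene_model, topic_classifier):
    """Step S300: obtain the scene type, topic type and emotion type."""
    emotion_type = emotion_model.predict([facial_feature_points(face_img)])[0]
    scene_type = scene_model.predict([scene_feature_vector(scene_img)])[0]
    topic_type = topic_classifier(topic_img)   # 'difficult' or 'simple'
    return scene_type, topic_type, emotion_type
```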
The emotion classification model is obtained by training as follows: a sample library of X facial images is established, and u facial features are manually labelled in each facial image to obtain a facial-feature training set and a facial-feature verification set; emotion types are then manually labelled according to the facial features, yielding an emotion-type training set and an emotion-type verification set corresponding to the facial-feature verification set. The emotion types include, for example, sad-1, happy-2, depressed-3, anxious-4, disgusted-5, pleased-6, delighted-7 and the like. Taking the facial-feature training set as input and the emotion-type training set as output, an initial recognition model is trained repeatedly to obtain the emotion classification model.
The scene classification model is obtained by training in the same way: a sample library of X scene images is established, and u scene features are manually labelled in each scene image to obtain a scene-feature training set and a scene-feature verification set; scene types are then manually labelled according to the scene features, yielding a scene-type training set and a scene-type verification set corresponding to the scene-feature verification set, the scene types including classroom-1, study room-2, library-3, playground-4 and the like. Taking the scene-feature training set as input and the scene-type training set as output, an initial recognition model is trained repeatedly to obtain the scene classification model.
In this embodiment, the trained emotion classification models are tested with the facial-feature verification set and its corresponding emotion-type verification set, and the model with the best recognition effect, for example the highest classification accuracy or the fastest convergence, is selected from the several trained candidates as the final emotion classification model. A scene classification model with the best recognition effect is obtained in the same way. Both models keep being updated through continued training during later use, improving the accuracy of emotion and scene judgments.
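A minimal training-and-selection sketch matching the procedure above, using scikit-learn as an assumed stand-in for the unspecified "initial recognition model"; the candidate count and network size are illustrative.

```python
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def train_classifier(features, labels, n_candidates: int = 5):
    """Train several candidate models on the labelled feature set and keep
    the one scoring best on the held-out verification set, as described.
    Works for both the emotion and the scene classification model."""
    X_train, X_val, y_train, y_val = train_test_split(
        features, labels, test_size=0.2)
    best_model, best_score = None, -1.0
    for seed in range(n_candidates):
        model = MLPClassifier(hidden_layer_sizes=(64,), random_state=seed,
                              max_iter=500).fit(X_train, y_train)
        score = model.score(X_val, y_val)  # classification accuracy
        if score > best_score:
            best_model, best_score = model, score
    return best_model
```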
Through this embodiment, the emotion type corresponding to the user's current emotion, the scene type of the user's current surroundings and the topic type of the learning content the user is currently working on are recognized automatically, so that learning recommendation content matching the emotion type and the scene type can be intelligently recommended according to the historical learning score list. This achieves intelligent tutoring and recommends suitable learning content, ensuring that the user learns the corresponding knowledge point content in a suitable environment and with a suitable emotion, improving learning efficiency and achieving a better learning effect.
In one embodiment of the present invention, as shown in FIG. 5, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base at which a sensor array is arranged. The method comprises the following steps:
S100, acquiring point cloud data detected by the sensor array;
S200, triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
S300, analyzing the image data to obtain a scene type, a topic type and an emotion type;
S410, acquiring a learning content set and the user's historical learning score list; the learning content set comprises several learning contents and the preset scene type corresponding to each learning content, and the historical learning score list comprises the learning score corresponding to each topic type;
S420, judging whether the topic type matches the emotion type;
S430, when the topic type does not match the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set whose learning scores match the emotion type and which matches the scene type, and acquiring learning recommendation content matching the emotion type from the candidate learning content set;
S440, judging whether the topic type matches the scene type when the topic type matches the emotion type;
S450, searching for learning recommendation content matching the scene type when the topic type does not match the scene type.
Specifically, the learning content set is obtained in advance by manually identifying, for each learning content, the scene types suited to studying it. For example, the scene types corresponding to reading-aloud learning content are study room, playground or classroom, while the scene types corresponding to silent-reading learning content are study room, library, playground or classroom. The learning content set is established in this manner.
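An illustrative structure for such a set, each entry tagged with the scene types it suits (the dictionary layout and entries are assumptions, using the two examples above):

```python
# Illustrative learning content set; entries and keys are assumed for the sketch.
LEARNING_CONTENT_SET = [
    {"content": "reading-aloud practice",
     "topic_type": "spoken language",
     "scene_types": {"study room", "playground", "classroom"}},
    {"content": "silent reading comprehension",
     "topic_type": "reading",
     "scene_types": {"study room", "library", "playground", "classroom"}},
]

def candidates_for_scene(scene_type: str):
    """Learning contents whose preset scene types match the current scene."""
    return [c for c in LEARNING_CONTENT_SET if scene_type in c["scene_types"]]
```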
While the user studies under the intelligent device, the captured image data is analyzed to obtain the user's current learning emotion (the emotion type), the topic type corresponding to the learning content the user is currently working on, and the scene type of the user's current environment. Judging whether the topic type matches the emotion type means checking whether a difficult topic type (one the user is not good at) coincides with a positive emotion type (happy and the like) and a simple topic type (one the user is good at) coincides with a negative emotion type (sad, angry, depressed, anxious, disgusted and the like).
When the topic type does not match the emotion type, for example when a difficult topic type coincides with a negative emotion type, the user can be judged to hold a negative attitude towards studying. At this point a first candidate learning content set matching the scene type (that is, content the user is good at that also matches the scene type) is generated in combination with the historical score list, and first learning recommendation content matching the negative emotion type is then searched out of it. In this state the user has no interest in conquering or drilling content he or she is weak at, and the negative learning emotion would lower learning efficiency; by recommending the first learning recommendation content, the intelligent device keeps the user in a learning state, and studying well-mastered content also builds the user's learning confidence, so that the negative emotion type is adjusted towards a positive one while high-scoring content is consolidated, further raising the score rate and improving the user's learning effect and results.
When the topic type does not match the emotion type in the other direction, for example when a simple topic type coincides with a positive emotion type, the user can be judged to hold a positive attitude towards studying. A second candidate learning content set matching the scene type (that is, content the user is not good at that also matches the scene type) is generated in combination with the historical score list, and second learning recommendation content matching the positive emotion type is searched out of it. In a positive mood the user is interested in conquering and drilling weak content, and the positive learning emotion raises learning efficiency; by recommending the second learning recommendation content, the intelligent device lets the user reinforce low-scoring knowledge points and question forms while in a positive learning emotion, improving the user's learning effect and results.
When the topic type matches the emotion type, for example when a simple topic type coincides with a negative emotion type, or a difficult topic type with a positive one, whether the topic type matches the scene type is judged next. If the topic type matches the scene type, the user simply continues studying the learning content displayed on the writing object; when the topic type does not match the scene type, third learning recommendation content matching the scene type is searched for.
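A hedged sketch of the S420-S450 decision logic just walked through, reusing the content-set layout sketched earlier; the score threshold, the emotion labels and all helper names are illustrative assumptions.

```python
NEGATIVE_EMOTIONS = {"sad", "angry", "depressed", "anxious", "disgusted"}

def topic_matches_emotion(topic_type: str, emotion_type: str) -> bool:
    """Matching rule from the text: a difficult topic type pairs with a
    positive emotion, a simple topic type with a negative one."""
    positive = emotion_type not in NEGATIVE_EMOTIONS
    return (topic_type == "difficult") == positive

def score_band(score_list: dict, content_topic: str, threshold: float = 0.6) -> str:
    """Classify the user's historical score for a content topic as high/low."""
    return "high" if score_list.get(content_topic, 0.0) >= threshold else "low"

def recommend(current_content, topic_type, emotion_type, scene_type,
              score_list, content_set):
    """Sketch of steps S420-S450. current_content is the entry being studied;
    topic_type is its 'difficult'/'simple' classification."""
    positive = emotion_type not in NEGATIVE_EMOTIONS
    if not topic_matches_emotion(topic_type, emotion_type):
        # S430: negative mood -> recommend well-mastered content (rebuild
        # confidence); positive mood -> recommend weak content (drill it).
        wanted = "low" if positive else "high"
        candidates = [c for c in content_set
                      if scene_type in c["scene_types"]
                      and score_band(score_list, c["topic_type"]) == wanted]
        return candidates[0] if candidates else None
    # S440/S450: topic and emotion already match; re-check the scene and
    # switch content only when the current content does not suit the scene.
    if scene_type not in current_content["scene_types"]:
        return next((c for c in content_set
                     if scene_type in c["scene_types"]), None)
    return None  # keep the current learning content
```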
In this embodiment, the device judges whether the topic type on the user's current writing object matches the user's current emotion type, and whether these match the scene type of the user's environment, and generates corresponding learning recommendation content from the judgment results and the historical learning score list. On the one hand, this prevents the user from sinking into a negative attitude and a negative learning emotion, improving learning efficiency and thereby effectively lifting the user's learning results; on the other hand, it assists the user's study in a targeted way and raises the user's interest in learning. Finally, by combining the scene type of the user's current environment, the topic type, the emotion type and the historical learning score list, suitable learning recommendation content is recommended so that the user completes the corresponding learning in a suitable environment and with a suitable learning emotion, improving learning efficiency and achieving a better learning effect.
In one embodiment of the present invention, as shown in FIG. 6, an intelligent device 1 comprises a camera 15, a stand and a light-emitting element; the stand has a base, and a sensor array is arranged at the base. The device further comprises an acquisition module 11, a control module 12, an analysis module 13 and a processing module 14;
the acquisition module 11 is used for acquiring the point cloud data detected by the sensor array;
the control module 12 is connected with the acquisition module 11 and the camera 15 and is used for triggering the camera 15 to capture image data when the point cloud data is judged to meet the preset condition;
the analysis module 13 is connected with the control module 12 and is used for analyzing the image data to obtain the scene type, the topic type and the emotion type;
the processing module 14 is connected with the analysis module 13 and is used for searching for corresponding learning recommendation content according to the acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, the intelligent device 1 is a device such as an intelligent desk lamp or an intelligent desk that is provided with a camera, a stand, a light-emitting element and a base, with a sensor array arranged at the base. The sensor array is a combination of several sensor elements, usually laid out in a special geometric pattern, so that the detection data of every sensor element can be collected comprehensively; this large volume of detection data forms the point cloud data. The writing object includes a book, the writable area of a tablet, and the like.
After obtaining the point cloud data, the intelligent device 1 analyzes it and judges whether it meets a preset condition. For example, the preset condition may be that the corresponding detection data in the point cloud exceed a default value: once a detection datum exceeds the default value, the point cloud data is judged to meet the condition. Alternatively, the preset condition may be that the corresponding detection data in the point cloud change: once the detection data change, the point cloud data is judged to meet the condition. When the intelligent device 1 judges that the point cloud data meets the preset condition, it sends an instruction to the camera 15; the camera 15 starts working according to the instruction, adjusting its lens towards the user, the writing object and the user's environment to acquire image data covering the written content, the user's face and the environment. Preferably, to reduce the workload of the intelligent device 1, first image data containing the written content, second image data containing the user's face, and third image data containing the environment are captured separately.
The intelligent device 1 performs image processing on the image data and recognizes the information it contains to obtain the scene type, the topic type and the emotion type. The intelligent device 1 then searches locally or on the network for learning recommendation content corresponding to those types and recommends the found content to the user, so that the user learns from it.
Preferably, the camera 15 in the intelligent device 1 comprises a lens, a main control chip and an artificial-intelligence chip, with the main control chip connected to the lens and the AI chip. The lens can rotate within a preset angle range, for example 120 degrees. The main control chip recognizes characterization information, that is, directional information characterizing object features such as faces, objects (a bookshelf in a library, a blackboard and desks in a classroom, a bed in a bedroom, and so on) and writing objects, and adjusts the lens angle accordingly before shooting to obtain the image data. Controlling the camera 15 to capture image data intelligently in this way reduces interference from external factors, relieves much of the computing pressure on the back end of the intelligent device 1, and improves the processing efficiency of the whole flow.
Through this embodiment, the intelligent device 1 can intelligently recommend to the user learning recommendation content matching the emotion type and the scene type according to the historical learning score list, which not only achieves intelligent tutoring but also recommends suitable learning content, ensuring that the user learns the corresponding knowledge point content in a suitable environment and with a suitable emotion, improving learning efficiency and achieving a better learning effect.
On the basis of the above embodiments, the sensor array comprises a pressure sensor array; the acquisition module 11 comprises a pressure point cloud acquisition unit; and the control module 12 comprises a first calculation unit, a first judgment unit and a first control unit;
the pressure point cloud acquisition unit is used for acquiring the pressure point cloud data detected by the pressure sensor array;
the first calculation unit is connected with the pressure point cloud acquisition unit and is used for calculating the contact area between the user's hand and the base according to the pressure point cloud data;
the first judgment unit is connected with the first calculation unit and is used for judging whether the contact area remains unchanged within a preset time interval;
the first control unit is connected with the first judgment unit and is used for triggering the camera 15 to capture the image data when the contact area remains unchanged within the preset time interval.
Specifically, when a user presses the base, every pressure sensor in the array arranged at the base is affected by the pressing force, so each sensor obtains its own detected pressure value; the many values detected at the same moment form the pressure point cloud data. The intelligent device 1 then computes the contact area between the user's hand and the base from the pressure point cloud data and judges whether that area remains unchanged within a preset time interval. If it does, the user intends to study under the intelligent device 1, and an instruction is generated so that the camera 15 starts working according to it, adjusting its lens towards the user's face, the writing object and the environment to acquire the corresponding image data.
Through this embodiment, the camera 15 can be triggered automatically and does not need to stay in a shooting state constantly, which reduces power consumption and saves resources.
On the basis of the above embodiments, the sensor array comprises a thermosensitive infrared sensor array; the acquisition module 11 comprises a temperature point cloud acquisition unit; and the control module 12 comprises a second calculation unit, a second judgment unit and a second control unit;
the temperature point cloud acquisition unit is used for acquiring the temperature point cloud data detected by the thermosensitive infrared sensor array;
the second calculation unit is connected with the temperature point cloud acquisition unit and is used for calculating the contact area between the user's hand and the base according to the temperature point cloud data;
the second judgment unit is connected with the second calculation unit and is used for judging whether the contact area remains unchanged within a preset time interval;
the second control unit is connected with the second judgment unit and is used for triggering the camera 15 to capture the image data when the contact area remains unchanged within the preset time interval.
Specifically, a thermosensitive infrared sensor measures using the physical properties of infrared radiation: any substance above absolute zero radiates infrared. The sensor contains a thermistor whose temperature rises when irradiated by infrared rays; its resistance changes accordingly, and a conversion circuit turns the change into an electrical signal output, from which the corresponding temperature value is detected.
When a user presses the base, every thermosensitive infrared sensor in the array arranged at the base is affected by the user's press, so each sensor obtains its own detected temperature value; the many values detected at the same moment form the temperature point cloud data. The intelligent device 1 then computes the contact area between the user's hand and the base from the temperature point cloud data and judges whether that area remains unchanged within a preset time interval. If it does, the user intends to study under the intelligent device 1, and an instruction is generated so that the camera 15 starts working according to it, adjusting its lens towards the user's face, the writing object and the environment to acquire the corresponding image data.
Through this embodiment, the camera 15 can be triggered automatically and does not need to stay in a shooting state constantly, which reduces power consumption and saves resources.
On the basis of the above embodiments, the analysis module 13 comprises an extraction unit, a scene type classification unit and an emotion type classification unit;
the extraction unit is used for performing image processing on the image data and extracting scene features, the topic type and the user's facial features;
the scene type classification unit is connected with the extraction unit and is used for inputting the scene features into a scene classification model to obtain the corresponding scene type;
the emotion type classification unit is connected with the extraction unit and is used for inputting the facial features into an emotion classification model to obtain the corresponding emotion type.
Specifically, when the camera 15 captures image data, it sends the image data to the processor. After receiving the image data, the processor converts it to grayscale and equalizes the grayscale histogram, reducing the information content of the image so as to increase the detection speed.
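A minimal sketch of this preprocessing step, assuming OpenCV is available; the file name is a placeholder for a frame delivered by camera 15.

```python
import cv2

image = cv2.imread("frame_from_camera15.jpg")    # placeholder for the received image data
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # gray processing
equalized = cv2.equalizeHist(gray)               # grayscale histogram equalization
```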
When the image data includes the writing content, the user's face, and the user's environment at the same time, a recognition algorithm is used to extract the facial image, the topic image, and the scene image from the image data after image processing (i.e., grayscaling and histogram equalization). The recognition algorithms for extracting these images include extraction based on geometric features, local feature analysis, neural networks, and the like. After the facial image is extracted, the user's facial features are identified from it and input into the emotion classification model to obtain the corresponding emotion type. The facial features include feature points of the eyes, eyebrows, nose, mouth, and the outer contour of the face, where the eye feature points include eyeball feature points. The topic type is identified from the extracted topic image; topic types comprise a difficult topic type (subjects, questions, knowledge points, and the like that the user is not good at) and a simple topic type (subjects, questions, knowledge points, and the like that the user is good at). Scene features (tables and chairs, blackboards, beds, bookshelves, and the like) are identified from the extracted scene image and input into the scene classification model to obtain the corresponding scene type (classroom, study room, library, playground, and the like). Similarly, when any single piece of image data includes several of these contents, for example both the user's face and the user's environment, the recognition algorithm extracts the corresponding facial image and scene image from the preprocessed image data.
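As one hedged example of the extraction step, the sketch below uses an OpenCV Haar cascade to cut the facial image out of the preprocessed frame; this is just one stand-in for the recognition algorithms listed above (geometric features, local feature analysis, neural networks), and the topic and scene images would be extracted analogously.

```python
import cv2

def extract_face_image(equalized_gray):
    """Crop the facial image from a preprocessed (grayscale, equalized) frame."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(equalized_gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None                     # no face in this frame
    x, y, w, h = faces[0]               # take the first detection
    return equalized_gray[y:y + h, x:x + w]
```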
When the first image data comprises the writing content, the second image data comprises the user's face, and the third image data comprises the environment where the user is located, the processing is the same: the user's facial features (feature points of the eyes, eyebrows, nose, mouth, and the outer contour of the face, where the eye feature points include eyeball feature points) are identified from the extracted facial image and input into the emotion classification model to obtain the corresponding emotion type; the topic type (difficult or simple, as defined above) is identified from the extracted topic image; and scene features (tables and chairs, blackboards, beds, bookshelves, and the like) are identified from the extracted scene image and input into the scene classification model to obtain the corresponding scene type (classroom, study room, library, playground, and the like).
The emotion classification model is trained as follows: a sample library of X facial images is established; u facial features are manually labeled in each facial image to obtain a facial feature training set and a facial feature verification set; emotion types are then manually labeled from the facial features to obtain an emotion type training set and an emotion type verification set corresponding to the facial feature verification set, where the emotion type training set or verification set maps each emotion label to a numeric code (for example sadness-1, happiness-2, frustration-3, anxiety-4, aversion-5, anger-6, and the like). The facial feature training set is taken as input and the emotion type training set as output, and an initial recognition model is trained multiple times to obtain the emotion classification model.
The scene classification model is trained similarly: a sample library of X scene images is established; u scene features are manually labeled in each scene image to obtain a scene feature training set and a scene feature verification set; scene types are manually labeled from the scene features to obtain a scene type training set and a scene type verification set corresponding to the scene feature verification set, where the scene type training set or verification set includes classroom-1, study room-2, library-3, playground-4, and the like. The scene feature training set is taken as input and the scene type training set as output, and an initial recognition model is trained multiple times to obtain the scene classification model.
In this embodiment, the trained emotion classification models are tested using the facial feature verification set and its corresponding emotion type verification set, and the model with the best recognition effect (for example, the highest classification accuracy or the fastest convergence) is selected from the plurality of trained models as the final emotion classification model. Similarly, the recognition model with the best recognition effect is selected as the final scene classification model. Both models continue to be updated and retrained during subsequent use, improving the accuracy of emotion judgment and scene judgment.
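A sketch of this train-then-select procedure, assuming the facial features have already been flattened into numeric vectors; scikit-learn's MLPClassifier stands in for the unspecified initial recognition model, and the hyperparameters, label codes, and number of candidate runs are assumptions.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# assumed numeric codes, mirroring the label-code pairs in the text
EMOTION_CODES = {"sadness": 1, "happiness": 2, "frustration": 3,
                 "anxiety": 4, "aversion": 5, "anger": 6}

def train_emotion_classifier(X_train, y_train, X_val, y_val):
    """Train the initial model several times and keep the run with the
    highest accuracy on the verification set."""
    best_model, best_acc = None, -1.0
    for seed in range(5):                      # several training runs
        model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                              random_state=seed)
        model.fit(X_train, y_train)            # facial features in, emotion codes out
        acc = accuracy_score(y_val, model.predict(X_val))
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model                          # the final emotion classification model
```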
Through this embodiment, the emotion type corresponding to the user's current emotion, the scene type corresponding to the scene where the user is currently located, and the topic type corresponding to the learning content the user is currently working on are identified automatically, so that learning recommendation content matched with the identified emotion type and scene type can be recommended intelligently according to the historical learning score list. This achieves the purpose of intelligent coaching: suitable learning content is recommended so that the user learns the corresponding knowledge point content in a suitable environment and in a suitable mood, which improves learning efficiency and achieves a better learning effect.
Based on the above embodiment, the acquisition module 11 is further configured to acquire a learning content set and a historical learning score list of the user; the learning content set comprises a plurality of learning contents and a preset scene type corresponding to each learning content; the historical learning score list comprises the learning score corresponding to each topic type;
the processing module 14 is connected with the acquisition module 11 and is further used for judging whether the topic type matches the emotion type; when the topic type does not match the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set whose learning scores match the emotion type and which matches the scene type, and acquiring learning recommendation content matched with the emotion type from the candidate learning content set; when the topic type matches the emotion type, judging whether the topic type matches the scene type; and when the topic type does not match the scene type, searching for learning recommendation content matched with the scene type.
Specifically, the learning content set is built in advance by manually labeling, for each learning content, the scene types suited to learning it. For example, one reading-type learning content may correspond to a study room, a playground, or a classroom, while another may correspond to a study room, a library, a playground, or a classroom. The learning content set is established in this way; one possible data shape is sketched below.
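One possible in-memory shape for these structures, with illustrative entries only (the contents, topics, and scores are not from the patent):

```python
# learning content set: each content is pre-labeled with suitable scene types
learning_content_set = [
    {"topic": "recitation",     "scene_types": {"study room", "playground", "classroom"}},
    {"topic": "silent reading", "scene_types": {"study room", "library", "classroom"}},
]

# historical learning score list: learning score per topic type
history_score_list = {"recitation": 0.88, "silent reading": 0.45}
```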
The image data captured while the user learns under the intelligent device 1 is analyzed to obtain the user's current learning emotion (the emotion type), the topic type corresponding to the user's current learning content, and the scene type of the user's current environment. Whether the topic type matches the emotion type is then determined, i.e., whether a difficult topic type (topics the user is not good at) coincides with a positive emotion type (such as happy) and a simple topic type (topics the user is good at) coincides with a negative emotion type (such as sad, angry, frustrated, anxious, or averse), as sketched below.
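A minimal predicate for this match, under the assumption that emotion types are reduced to positive/negative label sets consistent with the examples above:

```python
NEGATIVE_EMOTIONS = {"sad", "angry", "frustrated", "anxious", "averse"}

def topic_matches_emotion(topic_type: str, emotion_type: str) -> bool:
    """Difficult topics should coincide with positive emotions,
    simple topics with negative emotions."""
    is_difficult = topic_type == "difficult"
    is_positive = emotion_type not in NEGATIVE_EMOTIONS
    return is_difficult == is_positive
```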
When the topic type does not match the emotion type, for example when a difficult topic type coincides with a negative emotion type, the user can be judged to hold a negative attitude toward learning. In this case a first candidate learning content set matching the scene type (i.e., content the user is good at that also matches the scene type) is generated in combination with the historical score list, and the first learning recommendation content matched with the negative emotion type is searched from this set. Because the user is in a negative emotion, the user has no interest in overcoming or practicing content he or she is not good at, and the negative learning emotion would reduce learning efficiency. By recommending the first learning recommendation content, the intelligent device 1 keeps the user in a learning state while success at familiar content rebuilds the user's learning confidence, adjusting the negative emotion type toward a positive one and consolidating content with a high score rate, thereby improving the user's learning effect and learning scores.
When the topic type does not match the emotion type, for example when a simple topic type coincides with a positive emotion type, the user can be judged to hold a positive attitude toward learning. A second candidate learning content set matching the scene type (i.e., content the user is not good at that also matches the scene type) is generated in combination with the historical score list, and the second learning recommendation content matched with the positive emotion type is searched from this set. Because the user is in a positive emotion, the user is willing to overcome and practice content he or she is not good at, and the positive learning emotion raises learning efficiency; recommending the second learning recommendation content therefore lets the user reinforce knowledge points and questions with a low score rate while in a positive learning mood, further improving the user's learning effect and learning scores.
When the topic type matches the emotion type, for example when a simple topic type coincides with a negative emotion type, or a difficult topic type coincides with a positive emotion type, it is further judged whether the topic type matches the scene type. If the topic type matches the scene type, the user simply continues learning with the content displayed on the writing object; when the topic type does not match the scene type, a third learning recommendation content matched with the scene type is searched for. The complete branch logic is sketched below.
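A sketch of the complete branch logic; the 0.8 good-at threshold, the data shapes (reused from the sketches above), and the helper names are assumptions rather than the patent's own API.

```python
NEGATIVE_EMOTIONS = {"sad", "angry", "frustrated", "anxious", "averse"}
GOOD_AT_THRESHOLD = 0.8   # assumed score rate above which the user is "good at" a topic

def is_difficult(topic, scores):
    # a topic the user is not good at counts as a difficult topic type
    return scores.get(topic, 0.0) < GOOD_AT_THRESHOLD

def matches(topic, emotion_type, scores):
    # difficult topics should pair with positive emotions, simple with negative
    return is_difficult(topic, scores) == (emotion_type not in NEGATIVE_EMOTIONS)

def recommend(topic, emotion_type, scene_type, contents, scores):
    """contents: [{'topic': str, 'scene_types': set}]; scores: {topic: score rate}."""
    if not matches(topic, emotion_type, scores):
        # negative emotion -> consolidate strengths; positive -> train weaknesses
        want_good_at = emotion_type in NEGATIVE_EMOTIONS
        pool = [c for c in contents
                if scene_type in c["scene_types"]
                and (not is_difficult(c["topic"], scores)) == want_good_at]
        return pool[0] if pool else None
    # topic and emotion match: only intervene if the topic does not suit the scene
    current = next((c for c in contents if c["topic"] == topic), None)
    if current is not None and scene_type in current["scene_types"]:
        return None                    # keep learning the current content
    return next((c for c in contents if scene_type in c["scene_types"]), None)
```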
Through this embodiment, it is judged whether the topic type of the user's current writing object matches the user's current emotion type and whether the topic type matches the scene type of the environment where the user is located, and corresponding learning recommendation content is generated from the judgment result and the historical learning score list. On the one hand, this prevents the user from sinking into a negative attitude toward learning and improves learning efficiency, effectively raising the user's learning scores. On the other hand, it assists the user's learning in a targeted way and increases the user's interest in learning. Finally, by combining the scene type of the user's current environment, the topic type, the emotion type, and the historical learning score list, suitable learning recommendation content is recommended so that the user completes the corresponding learning content in a suitable environment and in a suitable learning mood, improving learning efficiency and achieving a better learning effect.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is merely a preferred embodiment of the present invention; those skilled in the art may make modifications and adaptations without departing from the principles of the present invention, and such modifications and adaptations are also intended to fall within the scope of the present invention.

Claims (8)

1. A learning assisting method, characterized in that it is applied to an intelligent device, wherein the intelligent device comprises a camera, a bracket and a luminous piece; the bracket is provided with a base, and a sensor array is arranged at the base; the method comprises the following steps:
acquiring point cloud data detected by the sensor array;
triggering a camera to shoot and acquire image data when the point cloud data is judged to meet a preset condition;
analyzing the image data to obtain scene types, topic types and emotion types;
searching corresponding learning recommendation contents according to the acquired historical learning score list, the topic type, the emotion type and the scene type, and recommending the learning recommendation contents to a user;
the sensor array comprises a pressure sensor array; the step of acquiring the point cloud data detected by the sensor array specifically comprises the following steps:
acquiring pressure point cloud data obtained by detection of the pressure sensor array;
the step of triggering the camera to shoot and acquire the image data when the point cloud data is judged to meet the preset condition specifically comprises the following steps:
according to the pressure point cloud data, calculating to obtain the contact area between the hand of the user and the base;
judging whether the contact area is kept unchanged within a preset time interval;
and triggering the camera to shoot and acquire the image data when the contact area is kept unchanged within a preset time interval.
2. The learning assisting method of claim 1, wherein the sensor array comprises a thermosensitive infrared sensor array; the step of acquiring the point cloud data detected by the sensor array specifically comprises the following steps:
acquiring temperature point cloud data obtained by detection of the thermosensitive infrared sensor array;
and according to the temperature point cloud data, calculating to obtain the contact area between the hand of the user and the base.
3. The learning assisting method according to claim 1, wherein the analyzing the image data to obtain a scene type, a topic type, and an emotion type specifically includes the steps of:
performing image processing on the image data, and extracting scene characteristics, topic types and facial characteristics of a user;
inputting the scene characteristics into a scene classification model to obtain corresponding scene types;
and inputting the facial features into an emotion classification model to obtain corresponding emotion types.
4. The learning assisting method according to any one of claims 1 to 3, wherein searching for corresponding learning recommendation contents according to the acquired historical learning score list, the topic type, the emotion type, and the scene type, and recommending the learning recommendation contents to the user specifically includes the steps of:
acquiring a learning content set and a historical learning score list of a user; the learning content set comprises a plurality of learning contents and a preset scene type corresponding to each learning content; the historical learning score list comprises a learning score corresponding to each topic type;
judging whether the topic type is matched with the emotion type or not;
when the topic type is not matched with the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set whose learning scores match the emotion type and which matches the scene type, and acquiring learning recommendation content matched with the emotion type from the candidate learning content set;
when the topic type is matched with the emotion type, judging whether the topic type is matched with the scene type or not;
and searching for learning recommendation content matched with the scene type when the topic type is not matched with the scene type.
5. An intelligent device, characterized by comprising a camera, a bracket and a luminous piece; the bracket is provided with a base, and a sensor array is arranged at the base; further comprising: an acquisition module, a control module, an analysis module and a processing module;
The acquisition module is used for acquiring the point cloud data detected by the sensor array;
the control module is connected with the acquisition module and the camera and is used for triggering the camera to shoot and acquire image data when the point cloud data are judged to meet the preset conditions;
the analysis module is connected with the control module and used for analyzing the image data to obtain scene types, topic types and emotion types;
the processing module is connected with the analysis module and used for searching corresponding learning recommendation content according to the acquired historical learning score list, the topic type, the emotion type and the scene type and recommending the learning recommendation content to a user;
the sensor array comprises a pressure sensor array; the acquisition module comprises: a pressure point cloud acquisition unit; the control module includes: the device comprises a first calculation unit, a first judgment unit and a first control unit;
the pressure point cloud acquisition unit is used for acquiring pressure point cloud data detected by the pressure sensor array;
the first calculation unit is connected with the pressure point cloud acquisition unit and is used for calculating and obtaining the contact area between the hand of the user and the base according to the pressure point cloud data;
the first judging unit is connected with the first calculating unit and is used for judging whether the contact area is kept unchanged within a preset time interval;
the first control unit is connected with the first judging unit and is used for triggering the camera to shoot and acquire the image data when the contact area is kept unchanged within a preset time interval.
6. The intelligent device of claim 5, wherein the sensor array comprises a thermosensitive infrared sensor array; the acquisition module comprises: a temperature point cloud acquisition unit; the control module includes: the second computing unit, the second judging unit and the second control unit;
the temperature point cloud acquisition unit is used for acquiring temperature point cloud data detected by the thermosensitive infrared sensor array;
the second calculation unit is connected with the temperature point cloud acquisition unit and is used for calculating and obtaining the contact area between the hand of the user and the base according to the temperature point cloud data;
the second judging unit is connected with the second calculating unit and is used for judging whether the contact area is kept unchanged within a preset time interval;
the second control unit is connected with the second judging unit and is used for triggering the camera to shoot and acquire the image data when the contact area is kept unchanged within a preset time interval.
7. The intelligent device of claim 5, wherein the analysis module comprises: the system comprises an extraction unit, a scene type classification unit and an emotion type classification unit;
the extraction unit is used for carrying out image processing on the image data and extracting scene characteristics, topic types and facial characteristics of a user;
the scene type classification unit is connected with the extraction unit and is used for inputting the scene characteristics into a scene classification model to obtain corresponding scene types;
the emotion type classification unit is connected with the extraction unit and is used for inputting the facial features into the emotion classification model to obtain corresponding emotion types.
8. The intelligent device of any one of claims 5-7, wherein:
the acquisition module is also used for acquiring a learning content set and a historical learning score list of the user; the learning content set comprises a plurality of learning contents and a preset scene type corresponding to each learning content; the historical learning score list comprises a learning score corresponding to each topic type;
the processing module is connected with the acquisition module and is further used for judging whether the topic type matches the emotion type; when the topic type does not match the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set whose learning scores match the emotion type and which matches the scene type, and acquiring learning recommendation content matched with the emotion type from the candidate learning content set; when the topic type matches the emotion type, judging whether the topic type matches the scene type; and when the topic type does not match the scene type, searching for learning recommendation content matched with the scene type.
CN201910508176.0A 2019-06-12 2019-06-12 Learning assisting method and intelligent device Active CN112084814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910508176.0A CN112084814B (en) 2019-06-12 2019-06-12 Learning assisting method and intelligent device

Publications (2)

Publication Number Publication Date
CN112084814A CN112084814A (en) 2020-12-15
CN112084814B true CN112084814B (en) 2024-02-23

Family

ID=73733399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910508176.0A Active CN112084814B (en) 2019-06-12 2019-06-12 Learning assisting method and intelligent device

Country Status (1)

Country Link
CN (1) CN112084814B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010725B (en) * 2021-03-17 2023-12-26 平安科技(深圳)有限公司 Musical instrument selection method, device, equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160143422A (en) * 2015-06-05 2016-12-14 (주)인클라우드 Smart education system based on learner emotion
KR20170002100A (en) * 2015-06-29 2017-01-06 김영자 Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same
CN106682090A (en) * 2016-11-29 2017-05-17 上海智臻智能网络科技股份有限公司 Active interaction implementing device, active interaction implementing method and intelligent voice interaction equipment
CN206672401U (en) * 2017-03-30 2017-11-24 广州贝远信息技术有限公司 A kind of student individuality learning service system
CN109597943A (en) * 2018-12-17 2019-04-09 广东小天才科技有限公司 Learning content recommendation method based on scene and learning equipment

Also Published As

Publication number Publication date
CN112084814A (en) 2020-12-15

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant