CN112084814A - Learning auxiliary method and intelligent device

Info

Publication number
CN112084814A
Authority
CN
China
Prior art keywords: type, learning, scene, emotion, user
Status: Granted
Application number: CN201910508176.0A
Other languages: Chinese (zh)
Other versions: CN112084814B (en)
Inventor: 杨昊民
Current Assignee: Guangdong Genius Technology Co Ltd
Original Assignee: Guangdong Genius Technology Co Ltd
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201910508176.0A
Publication of CN112084814A
Application granted
Publication of CN112084814B
Current legal status: Active

Classifications

    • G06V40/171 — Image or video recognition: human faces; feature extraction and face representation; local features and components (facial parts, occluding parts such as glasses, geometrical relationships)
    • G06Q50/205 — ICT specially adapted for specific business sectors: education; education administration or guidance
    • G06V20/10 — Image or video recognition: scenes and scene-specific elements; terrestrial scenes
    • G06V40/165 — Image or video recognition: human faces; detection, localisation and normalisation using facial parts and geometric relationships


Abstract

The invention provides a learning assistance method and an intelligent device. The intelligent device comprises a camera, a stand and a light-emitting element; the stand has a base, and a sensor array is arranged in the base. The method comprises the following steps: acquiring point cloud data detected by the sensor array; triggering the camera to capture image data when the point cloud data is judged to meet a preset condition; analyzing the image data to obtain a scene type, a question type and an emotion type; and searching for corresponding learning recommendation content according to an acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user. The method and the device recommend suitable learning recommendation content to the user, ensuring that the user studies the corresponding learning content in a suitable environment and in a suitable mood, which improves learning efficiency and achieves a better learning effect.

Description

Learning auxiliary method and intelligent device
Technical Field
The invention relates to the technical field of intelligent teaching, and in particular to a learning assistance method and an intelligent device.
Background
With the development of computer networks, the field of education has been increasingly shaped by them: online and distance teaching have grown, and instruction has moved from offline classrooms to online classes.
However, conventional schemes for recommending learning content to a user usually recommend content according to the user's academic results alone. Daily learning experience shows, however, that the learning scene and the learner's mood each have a definite influence on learning efficiency. For example, when the user is in a bad mood, studying content that demands high concentration, such as complex mathematics or physics, is adversely affected; and when the user is in a quiet library, spoken-language exercises should not be practiced. Existing learning content recommendation methods therefore fail to adapt to the user's environment and emotion.
Disclosure of Invention
The invention aims to provide a learning assistance method and an intelligent device that can recommend suitable learning recommendation content to the user, ensure that the user studies the corresponding learning content in a suitable environment and in a suitable mood, improve learning efficiency, and achieve a better learning effect.
The technical solution provided by the invention is as follows:
The invention provides a learning assistance method applied to an intelligent device, the intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base in which a sensor array is arranged. The method comprises the following steps:
acquiring point cloud data detected by the sensor array;
triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
analyzing the image data to obtain a scene type, a question type and an emotion type;
searching for corresponding learning recommendation content according to an acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Further, the sensor array comprises a pressure sensor array, and acquiring the point cloud data detected by the sensor array specifically comprises:
acquiring pressure point cloud data detected by the pressure sensor array.
Triggering the camera to capture image data when the point cloud data is judged to meet the preset condition specifically comprises:
calculating the contact area between the user's hand and the base from the pressure point cloud data;
judging whether the contact area remains unchanged over a preset time interval;
triggering the camera to capture the image data when the contact area remains unchanged over the preset time interval.
Further, the sensor array comprises a thermosensitive infrared sensor array, and acquiring the point cloud data detected by the sensor array specifically comprises:
acquiring temperature point cloud data detected by the thermosensitive infrared sensor array.
Triggering the camera to capture image data when the point cloud data is judged to meet the preset condition specifically comprises:
calculating the contact area between the user's hand and the base from the temperature point cloud data;
judging whether the contact area remains unchanged over a preset time interval;
triggering the camera to capture the image data when the contact area remains unchanged over the preset time interval.
Further, analyzing the image data to obtain the scene type, the question type and the emotion type specifically comprises:
performing image processing on the image data, and extracting scene features, the question type and the user's facial features;
inputting the scene features into a scene classification model to obtain the corresponding scene type;
inputting the facial features into an emotion classification model to obtain the corresponding emotion type.
Further, searching for corresponding learning recommendation content according to the acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user specifically comprises:
acquiring a learning content set and the user's historical learning score list, the learning content set comprising a plurality of learning contents and a preset scene type corresponding to each learning content, and the historical learning score list comprising the learning score corresponding to each question type;
judging whether the question type matches the emotion type;
when the question type does not match the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set matching the scene type and the learning scores, and acquiring from the candidate learning content set learning recommendation content matching the emotion type;
when the question type matches the emotion type, judging whether the question type matches the scene type;
when the question type does not match the scene type, searching for learning recommendation content matching the scene type.
The invention also provides an intelligent device comprising a camera, a stand and a light-emitting element, the stand having a base in which a sensor array is arranged. The intelligent device further comprises an acquisition module, a control module, an analysis module and a processing module:
the acquisition module is used for acquiring point cloud data detected by the sensor array;
the control module, connected to the acquisition module and the camera, is used for triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
the analysis module, connected to the control module, is used for analyzing the image data to obtain a scene type, a question type and an emotion type;
the processing module, connected to the analysis module, is used for searching for corresponding learning recommendation content according to an acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Further, the sensor array comprises a pressure sensor array; the acquisition module comprises a pressure point cloud acquisition unit; and the control module comprises a first calculation unit, a first judgment unit and a first control unit:
the pressure point cloud acquisition unit is used for acquiring pressure point cloud data detected by the pressure sensor array;
the first calculation unit, connected to the pressure point cloud acquisition unit, is used for calculating the contact area between the user's hand and the base from the pressure point cloud data;
the first judgment unit, connected to the first calculation unit, is used for judging whether the contact area remains unchanged over a preset time interval;
the first control unit, connected to the first judgment unit, is used for triggering the camera to capture the image data when the contact area remains unchanged over the preset time interval.
Further, the sensor array comprises a thermosensitive infrared sensor array; the acquisition module comprises a temperature point cloud acquisition unit; and the control module comprises a second calculation unit, a second judgment unit and a second control unit:
the temperature point cloud acquisition unit is used for acquiring temperature point cloud data detected by the thermosensitive infrared sensor array;
the second calculation unit, connected to the temperature point cloud acquisition unit, is used for calculating the contact area between the user's hand and the base from the temperature point cloud data;
the second judgment unit, connected to the second calculation unit, is used for judging whether the contact area remains unchanged over a preset time interval;
the second control unit, connected to the second judgment unit, is used for triggering the camera to capture the image data when the contact area remains unchanged over the preset time interval.
Further, the analysis module comprises an extraction unit, a scene type classification unit and an emotion type classification unit:
the extraction unit is used for performing image processing on the image data and extracting scene features, the question type and the user's facial features;
the scene type classification unit, connected to the extraction unit, is used for inputting the scene features into a scene classification model to obtain the corresponding scene type;
the emotion type classification unit, connected to the extraction unit, is used for inputting the facial features into an emotion classification model to obtain the corresponding emotion type.
Further, the acquisition module is also used for acquiring a learning content set and the user's historical learning score list, the learning content set comprising a plurality of learning contents and a preset scene type corresponding to each learning content, and the historical learning score list comprising the learning score corresponding to each question type;
the processing module, connected to the acquisition module, is further used for judging whether the question type matches the emotion type; when the question type does not match the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set matching the scene type and the learning scores, and acquiring from the candidate learning content set learning recommendation content matching the emotion type; when the question type matches the emotion type, judging whether the question type matches the scene type; and when the question type does not match the scene type, searching for learning recommendation content matching the scene type.
With the learning assistance method and intelligent device provided by the invention, suitable learning recommendation content can be recommended to the user, ensuring that the user studies the corresponding learning content in a suitable environment and in a suitable mood, improving learning efficiency and achieving a better learning effect.
Drawings
The above features, technical features, advantages and implementations of the learning assistance method and intelligent device are further described below, in a clearly understandable manner, through preferred embodiments with reference to the accompanying drawings.
FIG. 1 is a flowchart of one embodiment of the learning assistance method of the present invention;
FIG. 2 is a flowchart of another embodiment of the learning assistance method of the present invention;
FIG. 3 is a flowchart of another embodiment of the learning assistance method of the present invention;
FIG. 4 is a flowchart of another embodiment of the learning assistance method of the present invention;
FIG. 5 is a flowchart of another embodiment of the learning assistance method of the present invention;
FIG. 6 is a schematic structural diagram of an embodiment of the intelligent device of the present invention.
Detailed Description
To illustrate the embodiments of the present invention and the technical solutions of the prior art more clearly, the following description refers to the accompanying drawings. Obviously, the drawings described below are only some examples of the invention, and a person skilled in the art can derive other drawings and embodiments from them without inventive effort.
For simplicity, the drawings schematically show only the parts relevant to the present invention; they do not represent the actual structure of a product. In addition, to keep the drawings concise and easy to understand, components with the same structure or function are illustrated or labeled only once in some drawings. Herein, "one" means not only "only one" but also "more than one".
In one embodiment of the present invention, as shown in FIG. 1, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element; the stand has a base, and a sensor array is arranged in the base. The method comprises the following steps:
S100, acquiring point cloud data detected by the sensor array;
S200, triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
S300, analyzing the image data to obtain a scene type, a question type and an emotion type;
S400, searching for corresponding learning recommendation content according to an acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, the intelligent device is a device such as an intelligent desk lamp or an intelligent desk equipped with a camera, a stand, a light-emitting element and a base, with a sensor array arranged in the base. The sensor array is a combination of a plurality of sensor elements, generally laid out in a particular geometric arrangement so that the detection data of every sensor element can be captured comprehensively; the large volume of detection data forms the point cloud data. The writing object may be a book, a writable area of a tablet, or the like.
After the intelligent device obtains the point cloud data, it analyzes the data to judge whether a preset condition is met. For example, if the preset condition is whether each detection value in the point cloud data exceeds a default value, the condition is considered met once a detection value exceeds the default value. Alternatively, if the preset condition is whether the detection values in the point cloud data change, the condition is considered met once a value changes. When the intelligent device judges that the point cloud data meets the preset condition, it sends an instruction to its camera; the camera starts working according to the instruction, adjusting its lens toward the user, the writing object and the surrounding environment to acquire image data of the written content, the user's face and the environment. Preferably, to reduce the workload of the intelligent device, first image data containing the written content, second image data containing the user's face, and third image data containing the environment are captured separately.
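As a minimal sketch, the two example preset conditions above could be checked as follows (Python; the array layout and the threshold value are illustrative assumptions, not values from the patent):

```python
import numpy as np

DEFAULT_VALUE = 0.05  # hypothetical per-sensor baseline reading


def meets_preset_condition(point_cloud, previous=None):
    """Check the two example preset conditions described above.

    point_cloud: 2-D array of readings from the sensor array in the base.
    previous:    the prior frame, used by the change-detection variant.
    """
    # Variant 1: some detection value exceeds the default value.
    if np.any(point_cloud > DEFAULT_VALUE):
        return True
    # Variant 2: some detection value changed since the last frame.
    if previous is not None and not np.array_equal(point_cloud, previous):
        return True
    return False
```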
The intelligent device performs image processing on the image data and identifies the information in it to obtain the scene type, the question type and the emotion type. It then searches locally or on the network for learning recommendation content corresponding to these types and recommends the retrieved content to the user, so that the user can study accordingly.
Preferably, the camera of the intelligent device comprises a lens, a main control chip and an artificial intelligence chip, with the main control chip connected to the lens and the AI chip. The lens can rotate within a preset angle range, for example 120 degrees. The main control chip identifies characterization information, that is, directional information characterizing features of objects such as human faces, articles (e.g., bookshelves in a library, blackboards, desks and chairs in a classroom, a bed in a bedroom at home) and writing objects, and then adjusts the lens angle to capture the image data. The camera is thus controlled to capture image data intelligently, reducing interference from external factors, which greatly relieves the computing load on the back end of the intelligent device and improves the processing efficiency of the whole pipeline.
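A hedged sketch of this scan-then-shoot behaviour, using an OpenCV face detector as one example of characterization information; rotate_lens_to stands in for the device's motor-control interface, which the patent does not specify:

```python
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")


def capture_with_scan(camera, rotate_lens_to, angles=range(0, 121, 15)):
    for angle in angles:               # sweep within the 120-degree range
        rotate_lens_to(angle)          # hypothetical motor-control call
        ok, frame = camera.read()      # camera behaves like cv2.VideoCapture
        if not ok:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.1, 5)
        if len(faces) > 0:             # characterization info found
            return frame               # keep this shot as image data
    return None
```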
Through this embodiment, the intelligent device can intelligently recommend, according to the historical learning score list, learning recommendation content matching the emotion type and the corresponding scene type, achieving the goal of intelligent tutoring: suitable learning content is recommended, the user is ensured to study the corresponding knowledge points in a suitable environment and in a suitable mood, learning efficiency is improved, and a better learning effect is achieved.
In one embodiment of the present invention, as shown in FIG. 2, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element; the stand has a base, and the sensor array comprises a pressure sensor array. The method comprises the following steps:
S110, acquiring pressure point cloud data detected by the pressure sensor array;
S210, calculating the contact area between the user's hand and the base from the pressure point cloud data;
S220, triggering the camera to capture image data when the contact area remains unchanged over a preset time interval;
S300, analyzing the image data to obtain a scene type, a question type and an emotion type;
S400, searching for corresponding learning recommendation content according to an acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, when the user presses the base, each pressure sensor in the pressure sensor array arranged in the base responds according to how hard the user presses, so that each sensor detects its own pressure value; the many pressure values detected by the sensors at the same moment form the pressure point cloud data. The intelligent device then calculates the contact area between the user's hand and the base from the pressure point cloud data and judges whether that area remains unchanged over a preset time interval. If it does, the user evidently intends to study at the intelligent device, so the device sends an instruction to its camera; the camera starts working accordingly, adjusting its lens toward the user's face, the writing object and the environment to acquire the corresponding image data.
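A hedged sketch of this area computation and stability trigger (per-cell area, pressure threshold and tolerance are assumptions, since the patent gives no numeric values):

```python
import numpy as np

CELL_AREA_MM2 = 4.0        # assumed area covered by one sensor element
PRESSURE_THRESHOLD = 0.1   # assumed reading that counts as "touched"


def contact_area(pressure_cloud):
    """Approximate the hand-base contact area by counting pressed cells."""
    pressed_cells = np.count_nonzero(pressure_cloud > PRESSURE_THRESHOLD)
    return pressed_cells * CELL_AREA_MM2


def should_trigger_camera(area_samples, tolerance_mm2=2.0):
    """True when the area stayed (approximately) unchanged over the interval."""
    return max(area_samples) - min(area_samples) <= tolerance_mm2
```

Sampling contact_area once per tick over the preset interval and passing the samples to should_trigger_camera reproduces the trigger described above; the same sketch applies to the temperature point cloud of the following embodiment.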
Through this embodiment, the camera can be triggered automatically and does not need to stay in a constant shooting state, which reduces power consumption and saves resources.
In one embodiment of the present invention, as shown in FIG. 3, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element; the stand has a base, and the sensor array comprises a thermosensitive infrared sensor array. The method comprises the following steps:
S120, acquiring temperature point cloud data detected by the thermosensitive infrared sensor array;
S230, calculating the contact area between the user's hand and the base from the temperature point cloud data;
S240, triggering the camera to capture image data when the contact area remains unchanged over a preset time interval;
S300, analyzing the image data to obtain a scene type, a question type and an emotion type;
S400, searching for corresponding learning recommendation content according to an acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, a thermosensitive infrared sensor measures by exploiting the physical properties of infrared radiation: any substance above absolute zero radiates infrared rays. The sensor contains a thermistor; when irradiated by infrared rays, the thermistor's temperature rises and its resistance changes, and a conversion circuit turns the resistance change into an electrical signal, yielding the detected temperature value.
When the user presses the base, each thermosensitive infrared sensor in the array arranged in the base is affected by the user's touch, so that each sensor detects its own temperature value; the many temperature values detected by the sensors at the same moment form the temperature point cloud data. The intelligent device then calculates the contact area between the user's hand and the base from the temperature point cloud data and judges whether that area remains unchanged over a preset time interval. If it does, the user evidently intends to study at the intelligent device, so the device sends an instruction to its camera; the camera starts working accordingly, adjusting its lens toward the user's face, the writing object and the environment to acquire the corresponding image data.
Through this embodiment, the camera can be triggered automatically and does not need to stay in a constant shooting state, which reduces power consumption and saves resources.
Preferably, on the basis of the above embodiments, after the contact area between the user's hand and the base is calculated and before judging whether the contact area remains unchanged over the preset time interval, the method further comprises:
comparing the contact area with preset area ranges to match the user's identity type;
when the identity type is minor, acquiring the current time; if the current time falls within a preset study period, the intelligent device forbids displaying entertainment content to the minor and displays learning content only;
when the identity type is adult, either entertainment content or learning content may be displayed.
Illustratively, when the user is a child pressing the pressure sensor array with a finger (or palm), a child finger area range (or child palm area range) is predefined by averaging finger (or palm) area measurements from many children. When a child's finger (or palm) presses the base, the pressure sensor array arranged in the base measures the pressure at each point, so the finger (or palm) area can be derived from the pressure point cloud data; when the finger (or palm) leaves the base, the values detected by the array return to the default. Once the measured area is found to fall within the child finger (or palm) area range, the user can be roughly judged to be a child, so the child's usage rights on the intelligent device can be restricted. This ensures that, for the minor identity type, the user studies effectively within the study period, improving the minor's academic results.
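A minimal sketch of this identity check, assuming illustrative area ranges (the patent only specifies that the ranges are predefined from averaged finger or palm measurements):

```python
CHILD_FINGER_RANGE_MM2 = (20.0, 80.0)    # assumed child finger contact range
ADULT_FINGER_RANGE_MM2 = (80.0, 200.0)   # assumed adult finger contact range


def identity_type(area_mm2):
    lo, hi = CHILD_FINGER_RANGE_MM2
    if lo <= area_mm2 <= hi:
        return "minor"
    lo, hi = ADULT_FINGER_RANGE_MM2
    if lo <= area_mm2 <= hi:
        return "adult"
    return "unknown"


def allowed_content(identity, hour, study_hours=range(18, 21)):
    """During the preset study period, minors see learning content only."""
    if identity == "minor" and hour in study_hours:
        return {"learning"}
    return {"learning", "entertainment"}
```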
In one embodiment of the present invention, as shown in FIG. 4, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element; the stand has a base, and a sensor array is arranged in the base. The method comprises the following steps:
S100, acquiring point cloud data detected by the sensor array;
S200, triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
S310, performing image processing on the image data, and extracting scene features, the question type and the user's facial features;
S320, inputting the scene features into a scene classification model to obtain the corresponding scene type;
S330, inputting the facial features into an emotion classification model to obtain the corresponding emotion type;
S400, searching for corresponding learning recommendation content according to an acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, whenever the camera captures a piece of image data, it sends the data to the processor. On receiving it, the processor converts the image to grayscale and equalizes the histogram of the grayscale image, which reduces the amount of information in the image and speeds up subsequent detection.
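This preprocessing step maps directly onto standard OpenCV calls; a minimal sketch:

```python
import cv2

def preprocess(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)  # grayscale conversion
    return cv2.equalizeHist(gray)                        # histogram equalization
```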
When a single piece of image data contains the written content, the user's face and the environment at the same time, a face image, a question image and a scene image are extracted from the processed (grayscaled and histogram-equalized) image data with a recognition algorithm. Suitable extraction algorithms include geometric-feature-based recognition, local feature analysis, neural networks, and the like. After the face image is extracted, the user's facial features are identified from it and fed into the emotion classification model to obtain the corresponding emotion type; the facial features include feature points of the eyes (including the eyeballs), eyebrows, nose, mouth and the outer contour of the face. The question type is identified from the extracted question image; question types are divided into difficult types and simple types, where a difficult type covers subjects, question formats and knowledge points the user is weak in, and a simple type covers those the user is good at. Scene features (desks and chairs, a blackboard, a bed, a bookshelf, and the like) are identified from the extracted scene image and fed into the scene classification model to obtain the corresponding scene type (classroom, study, library, playground, and the like). Similarly, when any piece of image data contains several kinds of content, for example both the user's face and the environment, a face image and a scene image are extracted from the processed image data with a recognition algorithm.
When the first image data contains the written content, the second image data the user's face, and the third image data the environment, each piece is processed in the same manner: the user's facial features are identified from the extracted face image and fed into the emotion classification model to obtain the corresponding emotion type; the question type is identified from the extracted question image; and scene features are identified from the extracted scene image and fed into the scene classification model to obtain the corresponding scene type.
The emotion classification model is trained as follows. A sample library of X face images is established; u facial features are manually annotated in each face image to obtain a facial-feature training set and a facial-feature verification set, and the facial features are manually labelled to obtain the emotion-type training set and the emotion-type verification set corresponding to them. The emotion types in the training and verification sets include, for example, sad-1, angry-2, depressed-3, anxious-4, averse-5, happy-6, joyful-7, and so on. Taking the facial-feature training set as input and the emotion-type training set as output, an initial recognition model is trained repeatedly to obtain the emotion classification model.
The scene classification model is trained in the same way. A sample library of X scene images is established; u scene features are manually annotated in each scene image to obtain a scene-feature training set and a scene-feature verification set, and the scene features are manually labelled to obtain the scene-type training set and the scene-type verification set corresponding to them, the scene types including classroom-1, study-2, library-3, playground-4, and so on. Taking the scene-feature training set as input and the scene-type training set as output, an initial recognition model is trained repeatedly to obtain the scene classification model.
In this embodiment, the trained emotion classification models are tested with the facial-feature verification set and its corresponding emotion-type verification set, and the model with the best recognition performance, e.g., the highest classification accuracy or the fastest convergence, is selected as the final emotion classification model. The final scene classification model is selected in the same way. Both models continue to be updated and retrained during subsequent use, improving the accuracy of the emotion and scene judgments.
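As a concrete illustration, the selection step might look like the following sketch, which substitutes scikit-learn classifiers for the unspecified "initial recognition model"; the candidate models, feature vectors and label codes are all assumptions:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def train_and_select(candidates, X_train, y_train, X_val, y_val):
    """Train each candidate and keep the one with the best verification-set
    accuracy, mirroring the model-selection step described above."""
    best_model, best_acc = None, -1.0
    for model in candidates:
        model.fit(X_train, y_train)
        acc = accuracy_score(y_val, model.predict(X_val))
        if acc > best_acc:
            best_model, best_acc = model, acc
    return best_model, best_acc

# Example: two candidate scene classifiers with different regularization;
# labels would be the manually assigned codes (classroom-1, study-2, ...).
candidates = [LogisticRegression(C=0.1, max_iter=1000),
              LogisticRegression(C=1.0, max_iter=1000)]
```

Selecting on a held-out verification set, as described above, guards against keeping a model that merely memorized the training annotations.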
With this embodiment, the emotion type corresponding to the user's current mood, the scene type of the user's surroundings, and the question type of the content the user is currently studying can be identified automatically, so that learning recommendation content matching the emotion type and the scene type is recommended intelligently according to the historical learning score list. This achieves the goal of intelligent tutoring: suitable learning content is recommended, the user is ensured to study the corresponding knowledge points in a suitable environment and in a suitable mood, learning efficiency is improved, and a better learning effect is achieved.
In one embodiment of the present invention, as shown in FIG. 5, a learning assistance method is applied to an intelligent device comprising a camera, a stand and a light-emitting element; the stand has a base, and a sensor array is arranged in the base. The method comprises the following steps:
S100, acquiring point cloud data detected by the sensor array;
S200, triggering the camera to capture image data when the point cloud data is judged to meet a preset condition;
S300, analyzing the image data to obtain a scene type, a question type and an emotion type;
S410, acquiring a learning content set and the user's historical learning score list, the learning content set comprising a plurality of learning contents and a preset scene type corresponding to each learning content, and the historical learning score list comprising the learning score corresponding to each question type;
S420, judging whether the question type matches the emotion type;
S430, when the question type does not match the emotion type, searching, according to the historical learning score list and the learning content set, for a candidate learning content set matching the scene type, and acquiring from it learning recommendation content matching the emotion type;
S440, when the question type matches the emotion type, judging whether the question type matches the scene type;
S450, when the question type does not match the scene type, searching for learning recommendation content matching the scene type.
Specifically, the learning content set is obtained in advance by manually identifying, for each learning content, the scene types in which it is suited to being studied. For example, the scene types suited to reading content aloud are the study, the playground or the classroom, while the scene types suited to reading content silently are the study, the library, the playground or the classroom. The learning content set is established in this way.
The user studies under the intelligent device, image data is captured, and analyzing the image data yields the user's current learning mood (the emotion type), the question type of the content currently being studied, and the scene type of the current environment. Whether the question type matches the emotion type is then judged; that is, a difficult question type (a type the user is weak in) matches a positive emotion type (happy, joyful, and the like), and a simple question type (a type the user is good at) matches a negative emotion type (sad, angry, depressed, anxious, averse, and the like).
When the question type does not match the emotion type, for example when a difficult question type coincides with a negative emotion type, the user can be judged to be studying with a negative attitude. A first candidate learning content set matching the scene type (content the user is good at that also suits the scene) is generated in combination with the historical learning score list, and first learning recommendation content matching the negative emotion type is then selected from it. A user in a negative mood has no appetite for tackling or drilling content they are weak in, and a negative learning mood lowers learning efficiency. By recommending the first learning recommendation content, the intelligent device keeps the user in a studying state; working on content the user is good at restores confidence, shifts the negative emotion type toward a positive one, and consolidates high-scoring content so that scores can rise further, improving the user's learning effect and results.
When the question type does not match the emotion type in the opposite direction, for example when a simple question type coincides with a positive emotion type, the user can be judged to be studying with a positive attitude. A second candidate learning content set matching the scene type (content the user is weak in that also suits the scene) is generated in combination with the historical learning score list, and second learning recommendation content matching the positive emotion type is then selected from it. A user in a positive mood is interested in tackling and drilling content they are weak in, and a positive learning mood raises learning efficiency; recommending the second learning recommendation content lets the user reinforce low-scoring knowledge points and question formats while in a positive state, improving the learning effect and results.
When the question type matches the emotion type, for example when a simple question type coincides with a negative emotion type, or when a difficult question type coincides with a positive emotion type, whether the question type matches the scene type is judged next. If the question type matches the scene type, the user simply continues studying the learning content shown on the writing object; when the question type does not match the scene type, third learning recommendation content matching the scene type is searched for.
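Putting S420-S450 together, the matching logic can be sketched as follows (the record layout, the score threshold separating "good at" from "weak in", and the pick-first selection are illustrative assumptions):

```python
POSITIVE = {"happy", "joyful"}
NEGATIVE = {"sad", "angry", "depressed", "anxious", "averse"}
GOOD_AT_THRESHOLD = 80  # assumed score cut-off between "good at" and "weak in"


def emotion_matches(question_type, emotion):
    """Difficult questions match a positive mood; simple ones a negative mood."""
    return ((question_type == "difficult" and emotion in POSITIVE) or
            (question_type == "simple" and emotion in NEGATIVE))


def recommend(question_type, emotion, scene, content_set, score_list,
              current_content_scenes):
    # Candidate set: contents whose preset scene types fit the current scene.
    candidates = [c for c in content_set if scene in c["scenes"]]
    if not emotion_matches(question_type, emotion):
        if emotion in NEGATIVE:
            # S430, negative mood: fall back to content the user is good at.
            pool = [c for c in candidates
                    if score_list.get(c["question_type"], 0) >= GOOD_AT_THRESHOLD]
        else:
            # S430, positive mood: push content the user is weak in.
            pool = [c for c in candidates
                    if score_list.get(c["question_type"], 0) < GOOD_AT_THRESHOLD]
        return pool[0] if pool else None
    # S440/S450: types match, so only intervene when the scene does not fit.
    if scene in current_content_scenes:
        return None  # keep studying the current content
    return candidates[0] if candidates else None
```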
With this embodiment, whether the question type on the user's current writing object and the user's current emotion type match the scene type of the environment is judged, and corresponding learning recommendation content is generated from the judgment result and the historical learning score list. On the one hand, this prevents the user from remaining stuck in a negative attitude and growing bored, which raises learning efficiency and thus effectively improves the user's results; on the other hand, it helps the user study in a targeted way and increases interest in learning. Finally, combining the scene type of the current environment, the question type, the emotion type and the historical learning score list yields suitable learning recommendation content, ensuring the user completes the corresponding learning content in a suitable environment and in a suitable mood, improving learning efficiency and achieving a better learning effect.
In one embodiment of the present invention, as shown in FIG. 6, an intelligent device 1 comprises a camera 15, a stand and a light-emitting element; the stand has a base, and a sensor array is arranged in the base. The device further comprises an acquisition module 11, a control module 12, an analysis module 13 and a processing module 14:
the acquisition module 11 is used for acquiring point cloud data detected by the sensor array;
the control module 12, connected to the acquisition module 11 and the camera 15, is used for triggering the camera 15 to capture image data when the point cloud data is judged to meet a preset condition;
the analysis module 13, connected to the control module 12, is used for analyzing the image data to obtain a scene type, a question type and an emotion type;
the processing module 14, connected to the analysis module 13, is used for searching for corresponding learning recommendation content according to an acquired historical learning score list, the question type, the emotion type and the scene type, and recommending the learning recommendation content to the user.
Specifically, the intelligent device 1 is a device such as an intelligent desk lamp or an intelligent desk equipped with a camera, a stand, a light-emitting element and a base, with a sensor array arranged in the base. The sensor array is a combination of a plurality of sensor elements, generally laid out in a particular geometric arrangement so that the detection data of every sensor element can be captured comprehensively; the large volume of detection data forms the point cloud data. The writing object may be a book, a writable area of a tablet, or the like.
After the intelligent device 1 obtains the point cloud data, it analyzes the data to judge whether a preset condition is met. For example, if the preset condition is whether each detection value in the point cloud data exceeds a default value, the condition is considered met once a detection value exceeds the default value. Alternatively, if the preset condition is whether the detection values change, the condition is considered met once a value changes. When the intelligent device 1 judges that the point cloud data meets the preset condition, it sends an instruction to the camera 15, which starts working accordingly, adjusting its lens toward the user, the writing object and the surrounding environment to acquire image data of the written content, the user's face and the environment. Preferably, to reduce the workload of the intelligent device 1, first image data containing the written content, second image data containing the user's face, and third image data containing the environment are captured separately.
The intelligent device 1 performs image processing on the image data and identifies the information in it to obtain the scene type, the question type and the emotion type. It then searches locally or on the network for learning recommendation content corresponding to these types and recommends the retrieved content to the user, so that the user can study accordingly.
Preferably, the camera 15 of the intelligent device 1 comprises a lens, a main control chip and an artificial intelligence chip, with the main control chip connected to the lens and the AI chip. The lens can rotate within a preset angle range, for example 120 degrees. The main control chip identifies characterization information, that is, directional information characterizing features of objects such as human faces and articles (e.g., bookshelves in a library, blackboards, desks and chairs in a classroom, a bed in a bedroom at home), and then adjusts the lens angle to capture the image data. The camera 15 is thus controlled to capture image data intelligently, reducing interference from external factors, which greatly relieves the computing load on the back end of the intelligent device 1 and improves the processing efficiency of the whole pipeline.
Through this embodiment, the intelligent device 1 can intelligently recommend, according to the historical learning score list, learning recommendation content matching the emotion type and the corresponding scene type, achieving the goal of intelligent tutoring: suitable learning content is recommended, the user is ensured to study the corresponding knowledge points in a suitable environment and in a suitable mood, learning efficiency is improved, and a better learning effect is achieved.
Based on the above embodiment, the sensor array comprises a pressure sensor array; the acquisition module 11 comprises a pressure point cloud acquisition unit; and the control module 12 comprises a first calculation unit, a first judgment unit and a first control unit:
the pressure point cloud acquisition unit is used for acquiring pressure point cloud data detected by the pressure sensor array;
the first calculation unit, connected to the pressure point cloud acquisition unit, is used for calculating the contact area between the user's hand and the base from the pressure point cloud data;
the first judgment unit, connected to the first calculation unit, is used for judging whether the contact area remains unchanged over a preset time interval;
the first control unit, connected to the first judgment unit, is used for triggering the camera 15 to capture the image data when the contact area remains unchanged over the preset time interval.
Specifically, when the user presses the base, each pressure sensor in the pressure sensor array arranged in the base responds according to how hard the user presses, so that each sensor detects its own pressure value; the many pressure values detected by the sensors at the same moment form the pressure point cloud data. The intelligent device 1 then calculates the contact area between the user's hand and the base from the pressure point cloud data and judges whether that area remains unchanged over a preset time interval. If it does, the user evidently intends to study at the intelligent device 1, so the camera 15 starts working accordingly, adjusting its lens toward the user's face, the writing object and the environment to acquire the corresponding image data.
Through this embodiment, the camera 15 can be triggered automatically and does not need to stay in a constant shooting state, which reduces power consumption and saves resources.
Based on the above embodiment, the sensor array comprises a thermosensitive infrared sensor array; the acquisition module 11 comprises a temperature point cloud acquisition unit; and the control module 12 comprises a second calculation unit, a second judgment unit and a second control unit:
the temperature point cloud acquisition unit is used for acquiring temperature point cloud data detected by the thermosensitive infrared sensor array;
the second calculation unit, connected to the temperature point cloud acquisition unit, is used for calculating the contact area between the user's hand and the base from the temperature point cloud data;
the second judgment unit, connected to the second calculation unit, is used for judging whether the contact area remains unchanged over a preset time interval;
the second control unit, connected to the second judgment unit, is used for triggering the camera 15 to capture the image data when the contact area remains unchanged over the preset time interval.
Specifically, a thermosensitive infrared sensor measures by exploiting the physical properties of infrared radiation: any substance above absolute zero radiates infrared rays. The sensor contains a thermistor; when irradiated by infrared rays, the thermistor's temperature rises and its resistance changes, and a conversion circuit turns the resistance change into an electrical signal, yielding the detected temperature value.
When the user presses the base, each thermosensitive infrared sensor in the array arranged in the base is affected by the user's touch, so that each sensor detects its own temperature value; the many temperature values detected by the sensors at the same moment form the temperature point cloud data. The intelligent device 1 then calculates the contact area between the user's hand and the base from the temperature point cloud data and judges whether that area remains unchanged over a preset time interval. If it does, the user evidently intends to study at the intelligent device 1, so the camera 15 starts working accordingly, adjusting its lens toward the user's face, the writing object and the environment to acquire the corresponding image data.
Through this embodiment, the camera 15 can be triggered automatically and does not need to stay in a constant shooting state, which reduces power consumption and saves resources.
Based on the above embodiment, the analysis module 13 includes an extraction unit, a scene type classification unit and an emotion type classification unit;
the extraction unit is used for carrying out image processing on the image data and extracting scene features, topic types and facial features of the user;
the scene type classification unit is connected with the extraction unit and is used for inputting the scene features into the scene classification model to obtain the corresponding scene type;
and the emotion type classification unit is connected with the extraction unit and is used for inputting the facial features into the emotion classification model to obtain corresponding emotion types.
Specifically, when the camera 15 captures image data, it sends the image data to the processor; after receiving the image data, the processor performs gray processing on it and equalizes the histogram of the gray image, which reduces the amount of information in the image and speeds up detection.
When one piece of image data simultaneously includes the written content, the user's face and the environment where the user is located, a face image, a topic image and a scene image are extracted from the image data after image processing (i.e., graying and histogram equalization) by using a recognition algorithm. Specifically, the recognition algorithms for extracting the face image, the topic image and the scene image from the image data include extraction based on geometric features, local feature analysis, neural networks and the like. After the face image is extracted, the user's facial features are identified from it and then input into the emotion classification model to obtain the corresponding emotion type. The facial features include feature points of the eyes, eyebrows, nose, mouth and outer facial contour, where the eye feature points include eyeball feature points. The topic type is identified from the extracted topic image; topic types comprise the problem topic type and the simple topic type, where the problem topic type covers the subjects, question patterns and knowledge points the user is not good at, and the simple topic type covers those the user is good at. Scene features (such as desks and chairs, a blackboard, a bed or a bookshelf) are identified from the extracted scene image and then input into the scene classification model to obtain the corresponding scene type (such as classroom, study room, library or playground). Similarly, when a piece of image data includes several of these contents, for example both the user's face and the environment where the user is located, a face image and a scene image are extracted from it after the same image processing.
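A minimal sketch of the preprocessing and face-image extraction steps, using OpenCV as one possible toolkit (the patent names no specific library): graying and histogram equalization are applied first, and a stock Haar cascade stands in for whichever of the geometric-feature, local-feature or neural-network extraction methods is chosen. The file name in the usage comment is hypothetical.

import cv2

def preprocess(image_bgr):
    # Gray processing plus histogram equalization, as described above:
    # reduces the amount of information in the image and speeds up detection.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)

def extract_face_images(image_bgr):
    # Locate face regions in the preprocessed image and crop them out.
    gray = preprocess(image_bgr)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [gray[y:y + h, x:x + w] for (x, y, w, h) in faces]

# Usage sketch: each crop would then go to facial-feature extraction and on
# to the emotion classification model described below.
# face_crops = extract_face_images(cv2.imread("frame.jpg"))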
When the first image data includes the written content, the second image data includes the user's face, and the third image data includes the environment where the user is located, the processing is the same: the user's facial features (the feature points listed above) are identified from the extracted face image and input into the emotion classification model to obtain the corresponding emotion type; the topic type (the problem topic type or the simple topic type, as defined above) is identified from the extracted topic image; and scene features (desks and chairs, a blackboard, a bed, a bookshelf and the like) are identified from the extracted scene image and input into the scene classification model to obtain the corresponding scene type (classroom, study room, library, playground and the like).
The emotion classification model is trained as follows: a sample library of X face images is established, and u facial features in each face image are manually labeled to obtain a facial feature training set and a facial feature verification set; emotion types are then manually labeled according to the facial features to obtain an emotion type training set corresponding to the facial feature training set and an emotion type verification set corresponding to the facial feature verification set, where the emotion type training set and verification set each use labels such as sad-1, angry-2, depressed-3, anxious-4, averse-5, happy-6 and joyful-7. Taking the facial feature training set as input and the emotion type training set as output, the initial recognition model is trained multiple times to obtain the emotion classification model.
The scene classification model is trained in the same way: a sample library of X scene images is established, and u scene features in each scene image are manually labeled to obtain a scene feature training set and a scene feature verification set; scene types are then manually labeled according to the scene features to obtain a scene type training set corresponding to the scene feature training set and a scene type verification set corresponding to the scene feature verification set, where the scene type training set and verification set each use labels such as classroom-1, study room-2, library-3 and playground-4. Taking the scene feature training set as input and the scene type training set as output, the initial recognition model is trained multiple times to obtain the scene classification model.
In this embodiment, the trained emotion classification models are tested with the facial feature verification set and its corresponding emotion type verification set, and the recognition model with the best recognition effect among them is selected as the final emotion classification model, where the recognition effect may be measured by the classification accuracy, the convergence speed and the like. The recognition model with the best recognition effect is obtained in the same way as the final scene classification model. Both models are continuously updated and retrained during subsequent use, so that the accuracy of emotion judgment and scene judgment keeps improving.
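The train-then-select procedure described for both classifiers can be sketched as follows, with scikit-learn as an illustrative toolkit. The feature vectors are assumed to be fixed-length numeric encodings of the manually labeled facial (or scene) features, the random arrays merely stand in for the sample library, and the candidate models stand in for the "initial recognition model" trained multiple times; none of this is mandated by the patent.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Stand-in data: rows of u numeric facial features, emotion labels 1..7
# (sad-1 ... joyful-7) as produced by the manual annotation step.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(800, 20)), rng.integers(1, 8, size=800)
X_val, y_val = rng.normal(size=(200, 20)), rng.integers(1, 8, size=200)

# Train several candidates ("training the initial recognition model
# multiple times") ...
candidates = [
    RandomForestClassifier(n_estimators=100, random_state=0),
    RandomForestClassifier(n_estimators=300, random_state=0),
    LogisticRegression(max_iter=1000),
]
for model in candidates:
    model.fit(X_train, y_train)

# ... then keep the one with the best recognition effect on the
# verification set, here measured as classification accuracy.
best = max(candidates, key=lambda m: accuracy_score(y_val, m.predict(X_val)))
print(type(best).__name__)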
According to this embodiment, the emotion type corresponding to the user's current emotion, the scene type corresponding to the scene where the user is located, and the topic type corresponding to the content the user is currently learning can be identified automatically, so that learning recommendation content matching the emotion type and the scene type is intelligently recommended to the user according to the historical learning achievement list. This achieves the purpose of intelligent tutoring: suitable learning content is recommended to the user, ensuring that the user completes the learning of the corresponding knowledge points in a suitable environment and with a suitable emotion, which improves learning efficiency and achieves a better learning effect.
Based on the above embodiment, the acquisition module 11 is further configured to acquire a learning content set and a historical learning achievement list of the user; the learning content set comprises a plurality of learning contents and a preset scene type corresponding to each learning content; the historical learning achievement list comprises the learning achievement corresponding to each topic type;
the processing module 14 is connected with the acquisition module 11 and is further used for judging whether the topic type matches the emotion type; when the topic type does not match the emotion type, searching, according to the historical learning achievement list and the learning content set, for a candidate learning content set that matches the scene type and whose learning achievements match the emotion type, and acquiring learning recommendation content matching the emotion type from the candidate learning content set; when the topic type matches the emotion type, judging whether the topic type matches the scene type; and when the topic type does not match the scene type, searching for the learning recommendation content matching the scene type.
Specifically, the learning content set is obtained in advance by manually identifying, for each piece of learning content, the scene types in which it is suited to being learned. For example, one piece of learning content may correspond to the scene types study room, playground and classroom, while another may correspond to study room, library, playground and classroom. The learning content set is established according to this method.
Image data is obtained by shooting while the user learns under the intelligent device 1, and the image data is analyzed to obtain the user's current learning emotion (the emotion type), the topic type corresponding to the content the user is currently learning, and the scene type of the environment where the user is currently located. Judging whether the topic type matches the emotion type means judging whether the problem topic type (a topic type the user is not good at) corresponds to a positive emotion type (such as happy or joyful) and whether the simple topic type (a topic type the user is good at) corresponds to a negative emotion type (such as sad, angry, depressed, anxious or averse).
When the topic type does not match the emotion type, for example when the problem topic type corresponds to a negative emotion type, it can be determined that the user holds a negative attitude towards learning. A first candidate learning content set matching the scene type (i.e., a candidate set of content the user is good at that also matches the scene type) is then generated in combination with the historical learning achievement list, and first learning recommendation content matching the negative emotion type is searched for in the generated first candidate learning content set. In this state the user has no interest in tackling or practising content he or she is not good at, and the negative learning emotion would lower learning efficiency; by having the intelligent device 1 recommend the first learning recommendation content at this moment, the user stays in a learning state, learning content the user is adept at builds learning confidence and helps adjust the negative emotion type to a positive one, and consolidating high-scoring content can raise scores further, improving the user's learning effect and results.
When the topic type does not match the emotion type in the other direction, for example when the simple topic type corresponds to a positive emotion type, it can be determined that the user holds a positive attitude towards learning. A second candidate learning content set matching the scene type (i.e., a candidate set of content the user is not good at that also matches the scene type) is then generated in combination with the historical learning achievement list, and second learning recommendation content matching the positive emotion type is searched for in the generated second candidate learning content set. In the positive emotion type the user is interested in tackling and practising the content he or she is weak in, and the positive learning emotion raises learning efficiency; by having the intelligent device 1 recommend the second learning recommendation content at this moment, the user can reinforce low-scoring knowledge points and question patterns while in a positive learning state, improving the user's learning effect and results.
When the topic type matches the emotion type, for example when the simple topic type corresponds to a negative emotion type or the problem topic type corresponds to a positive emotion type, it is further judged whether the topic type matches the scene type. When the topic type matches the scene type, the user continues learning with the content displayed by the writing object; when the topic type does not match the scene type, third learning recommendation content matching the scene type is searched for.
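The branching just described can be condensed into a short decision sketch. The data structures, the 80-point cutoff between "adept" and "weak" content, and every helper name below are assumptions made for illustration, not interfaces from the patent.

POSITIVE = {"happy", "joyful"}   # active emotion types
NEGATIVE = {"sad", "angry", "depressed", "anxious", "averse"}

def recommend(topic_type, emotion, scene, scores, content_set, current):
    # topic_type: "problem" (weak topics) or "simple" (strong topics);
    # scores: the historical learning achievement list, topic -> score;
    # content_set: list of (content, topics, scene_types) triples.
    matched = ((topic_type == "problem" and emotion in POSITIVE) or
               (topic_type == "simple" and emotion in NEGATIVE))
    if not matched:
        if emotion in NEGATIVE:
            # Negative attitude: recommend adept, high-scoring content that
            # fits the scene, to keep the user learning and build confidence.
            pool = [c for c, topics, scenes in content_set
                    if scene in scenes and min(scores[t] for t in topics) >= 80]
        else:
            # Positive attitude: recommend weak, low-scoring content that
            # fits the scene, to attack weak knowledge points and question patterns.
            pool = [c for c, topics, scenes in content_set
                    if scene in scenes and min(scores[t] for t in topics) < 80]
        return pool[0] if pool else current
    # Topic type matches emotion type: check the scene instead.
    current_scenes = next((scenes for c, _, scenes in content_set if c == current), ())
    if scene in current_scenes:
        return current                       # keep learning as-is
    pool = [c for c, _, scenes in content_set if scene in scenes]
    return pool[0] if pool else current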
According to this embodiment, it is judged whether the topic type of the user's current writing object and the user's current emotion type match the scene type of the environment where the user is located, and the corresponding learning recommendation content is generated according to the judgment result and the historical learning achievement list. On the one hand, this prevents the user from being left in a negative attitude towards learning and developing an aversion to it, which improves learning efficiency and thus effectively raises the user's results. On the other hand, it helps the user learn in a targeted way, increasing the user's interest in learning. Finally, by combining the scene type of the user's current environment, the topic type, the emotion type and the historical learning achievement list, suitable learning recommendation content is recommended to the user, ensuring that the user completes the learning of the corresponding content in a suitable environment and with a suitable learning emotion, which improves learning efficiency and achieves a better learning effect.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the present invention; for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. A learning assistance method, characterized in that it is applied to an intelligent device, the intelligent device comprising a camera, a bracket and a light-emitting piece, wherein the bracket is provided with a base and a sensor array is arranged at the base; the method comprises the following steps:
acquiring point cloud data detected by the sensor array;
triggering the camera to shoot and acquire image data when the point cloud data are judged to meet preset conditions;
analyzing the image data to obtain a scene type, a topic type and an emotion type;
and searching for corresponding learning recommendation content according to the acquired historical learning achievement list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to a user.
2. The learning assistance method according to claim 1, wherein the sensor array comprises a pressure sensor array, and the step of acquiring the point cloud data detected by the sensor array specifically comprises the steps of:
acquiring pressure point cloud data detected by the pressure sensor array;
and the step of triggering the camera to shoot and acquire the image data when the point cloud data are judged to meet the preset conditions specifically comprises the steps of:
calculating to obtain the contact area between the hand of the user and the base according to the pressure point cloud data;
judging whether the contact area is kept unchanged within a preset time interval;
and when the contact area is kept unchanged within a preset time interval, triggering the camera to shoot and acquire the image data.
3. The learning assistance method according to claim 1, wherein the sensor array comprises a thermosensitive infrared sensor array, and the step of acquiring the point cloud data detected by the sensor array specifically comprises the steps of:
acquiring temperature point cloud data detected by the thermosensitive infrared sensor array;
and the step of triggering the camera to shoot and acquire the image data when the point cloud data are judged to meet the preset conditions specifically comprises the steps of:
calculating to obtain the contact area between the hand of the user and the base according to the temperature point cloud data;
judging whether the contact area is kept unchanged within a preset time interval;
and when the contact area is kept unchanged within a preset time interval, triggering the camera to shoot and acquire the image data.
4. The learning assistance method according to claim 1, wherein the step of analyzing the image data to obtain a scene type, a topic type and an emotion type specifically comprises the steps of:
performing image processing on the image data, and extracting scene features, topic types and facial features of the user;
inputting the scene features into a scene classification model to obtain corresponding scene types;
and inputting the facial features into an emotion classification model to obtain a corresponding emotion type.
5. The learning assistance method according to any one of claims 1 to 4, wherein the step of searching for corresponding learning recommendation content according to the acquired historical learning achievement list, the topic type, the emotion type, and the scene type, and recommending the learning recommendation content to the user specifically includes the steps of:
acquiring a learning content set and a historical learning score list of a user; the learning content set comprises a plurality of learning contents and a preset scene type corresponding to each learning content; the historical learning achievement list comprises learning achievement corresponding to each topic type;
judging whether the topic type matches the emotion type;
when the topic type does not match the emotion type, searching, according to the historical learning achievement list and the learning content set, for a candidate learning content set that matches the scene type and whose learning achievements match the emotion type, and acquiring the learning recommendation content matching the emotion type from the candidate learning content set;
when the topic type matches the emotion type, judging whether the topic type matches the scene type;
and when the topic type does not match the scene type, searching for the learning recommendation content matching the scene type.
6. An intelligent device, characterized by comprising a camera, a bracket and a light-emitting piece, wherein the bracket is provided with a base and a sensor array is arranged at the base, the intelligent device further comprising: an acquisition module, a control module, an analysis module and a processing module;
the acquisition module is used for acquiring point cloud data detected by the sensor array;
the control module is connected with the acquisition module and the camera and used for triggering the camera to shoot and acquire image data when the point cloud data is judged to meet the preset conditions;
the analysis module is connected with the control module and used for analyzing the image data to obtain a scene type, a topic type and an emotion type;
and the processing module is connected with the analysis module and used for searching for corresponding learning recommendation content according to the acquired historical learning achievement list, the topic type, the emotion type and the scene type, and recommending the learning recommendation content to a user.
7. The intelligent device of claim 6, wherein the sensor array comprises a pressure sensor array; the acquisition module includes a pressure point cloud acquisition unit; and the control module includes a first calculating unit, a first judging unit and a first control unit;
the pressure point cloud acquisition unit is used for acquiring pressure point cloud data detected by the pressure sensor array;
the first calculation unit is connected with the pressure point cloud acquisition unit and used for calculating the contact area between the hand of the user and the base according to the pressure point cloud data;
the first judging unit is connected with the first calculating unit and is used for judging whether the contact area is kept unchanged within a preset time interval;
the first control unit is connected with the first judging unit and used for triggering the camera to shoot and acquire the image data when the contact area is kept unchanged within a preset time interval.
8. The intelligent device of claim 6, wherein the sensor array comprises a thermosensitive infrared sensor array; the acquisition module includes a temperature point cloud acquisition unit; and the control module includes a second calculating unit, a second judging unit and a second control unit;
the temperature point cloud acquisition unit is used for acquiring temperature point cloud data detected by the thermosensitive infrared sensor array;
the second calculation unit is connected with the temperature point cloud acquisition unit and used for calculating the contact area between the hand of the user and the base according to the temperature point cloud data;
the second judging unit is connected with the second calculating unit and is used for judging whether the contact area is kept unchanged within a preset time interval;
the second control unit is connected with the second judging unit and used for triggering the camera to shoot and acquire the image data when the contact area is kept unchanged within a preset time interval.
9. The intelligent device of claim 6, wherein the analysis module comprises an extraction unit, a scene type classification unit and an emotion type classification unit;
the extraction unit is used for carrying out image processing on the image data and extracting scene features, topic types and facial features of the user;
the scene type classification unit is connected with the extraction unit and is used for inputting the scene features into a scene classification model to obtain corresponding scene types;
and the emotion type classification unit is connected with the extraction unit and is used for inputting the facial features into an emotion classification model to obtain corresponding emotion types.
10. The intelligent device of any one of claims 6 to 9, wherein:
the acquisition module is further used for acquiring a learning content set and a historical learning achievement list of the user; the learning content set comprises a plurality of learning contents and a preset scene type corresponding to each learning content; and the historical learning achievement list comprises the learning achievement corresponding to each topic type;
the processing module is connected with the acquisition module and is further used for judging whether the topic type matches the emotion type; when the topic type does not match the emotion type, searching, according to the historical learning achievement list and the learning content set, for a candidate learning content set that matches the scene type and whose learning achievements match the emotion type, and acquiring the learning recommendation content matching the emotion type from the candidate learning content set; when the topic type matches the emotion type, judging whether the topic type matches the scene type; and when the topic type does not match the scene type, searching for the learning recommendation content matching the scene type.
CN201910508176.0A 2019-06-12 2019-06-12 Learning assisting method and intelligent device Active CN112084814B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910508176.0A CN112084814B (en) 2019-06-12 2019-06-12 Learning assisting method and intelligent device

Publications (2)

Publication Number Publication Date
CN112084814A true CN112084814A (en) 2020-12-15
CN112084814B CN112084814B (en) 2024-02-23

Family

ID=73733399

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910508176.0A Active CN112084814B (en) 2019-06-12 2019-06-12 Learning assisting method and intelligent device

Country Status (1)

Country Link
CN (1) CN112084814B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160143422A (en) * 2015-06-05 2016-12-14 (주)인클라우드 Smart education system based on learner emotion
KR20170002100A (en) * 2015-06-29 2017-01-06 김영자 Method for providng smart learning education based on sensitivity avatar emoticon, and smart learning education device for the same
CN106682090A (en) * 2016-11-29 2017-05-17 上海智臻智能网络科技股份有限公司 Active interaction implementing device, active interaction implementing method and intelligent voice interaction equipment
CN206672401U (en) * 2017-03-30 2017-11-24 广州贝远信息技术有限公司 A kind of student individuality learning service system
CN109597943A (en) * 2018-12-17 2019-04-09 广东小天才科技有限公司 A kind of learning Content recommended method and facility for study based on scene

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113010725A (en) * 2021-03-17 2021-06-22 平安科技(深圳)有限公司 Method, device, equipment and storage medium for selecting musical instrument
CN113010725B (en) * 2021-03-17 2023-12-26 平安科技(深圳)有限公司 Musical instrument selection method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN112084814B (en) 2024-02-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant