CN114419956B - Physical programming method based on student portrait and related equipment - Google Patents

Physical programming method based on student portrait and related equipment

Info

Publication number
CN114419956B
CN114419956B
Authority
CN
China
Prior art keywords
student
target
virtual model
portrait
model
Prior art date
Legal status
Active
Application number
CN202111673969.1A
Other languages
Chinese (zh)
Other versions
CN114419956A (en)
Inventor
王志芳
邹博
罗泽漩
Current Assignee
Shenzhen Intellifusion Technologies Co Ltd
Original Assignee
Shenzhen Intellifusion Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Intellifusion Technologies Co Ltd filed Critical Shenzhen Intellifusion Technologies Co Ltd
Priority to CN202111673969.1A
Publication of CN114419956A
Application granted
Publication of CN114419956B
Active legal status (current)
Anticipated expiration legal status


Classifications

    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B19/00: Teaching not covered by other main groups of this subclass
    • G09B19/0053: Computers, e.g. programming
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

An embodiment of the invention provides a physical programming method based on student portraits, including: acquiring a first student portrait of a target student; generating a preset number of virtual models correspondingly according to the first student portrait; selecting a target virtual model according to the eye attention degree of the target student to the virtual models; predicting the program block corresponding to the target virtual model according to the target virtual model and a second student portrait; and performing a simulation programming demonstration on the virtual model according to the program block. By generating, from the first portrait of the target student, several virtual models that the student is likely to find interesting, determining the target virtual model from the student's eye attention, predicting the corresponding program block from the target virtual model and the second student portrait, and demonstrating simulated programming with the program block and the target virtual model, the method deepens a new student's familiarity with the physical components and program blocks used in physical programming and thereby increases the student's interest in physical programming.

Description

Physical programming method based on student portrait and related equipment
Technical Field
The invention relates to the field of artificial intelligence and intelligent education, and in particular to a physical programming method based on student portraits and related equipment.
Background
Physical programming is a teaching activity intended to cultivate students' thinking ability. Electronic components are fixed onto plastic blocks to form standalone, connectable parts; these parts are assembled on a mounting baseplate, much like building blocks, and combined with computer programming so that the assembled physical model produces the corresponding motion. However, because the physical model must be assembled from physical components into the required shape, new students who are unfamiliar with the components often find it difficult to complete the corresponding physical programming.
Disclosure of Invention
Embodiments of the invention provide a physical programming method based on student portraits and related equipment. A number of virtual models that a target student may find interesting are generated from a first portrait of the target student, a target virtual model is determined from the student's eye attention, the program block corresponding to the target virtual model is predicted from the target virtual model and a second student portrait, and a simulation programming demonstration is performed with the program block and the target virtual model. This deepens a new student's familiarity with the physical components and program blocks used in physical programming and thereby increases the student's interest in physical programming.
In a first aspect, an embodiment of the present invention provides a physical programming method based on student portraits, where the method includes:
acquiring a first student portrait of a target student, wherein the first student portrait is obtained according to basic information of the target student;
generating a preset number of virtual models according to the first student portraits, wherein the virtual models are constructed according to a physical model;
selecting a target virtual model according to the eye attention degree of the target student to the virtual model;
predicting a program block corresponding to the target virtual model according to the target virtual model and a second student portrait, wherein the second student portrait is obtained according to the behavior attribute information of the target student;
and carrying out simulation programming demonstration on the virtual model according to the program blocks.
Optionally, the correspondingly generating a preset number of virtual models according to the first student portrait includes:
inputting the first student portrait into a preset first prediction model for processing, and outputting a preset number of virtual model identifiers through the first prediction model;
and generating a corresponding number of virtual models according to the virtual model identification.
Optionally, before the selecting of the target virtual model according to the eye attention degree of the target student to the virtual model, the method includes:
acquiring a facial image of the target student facing the virtual model;
and performing gaze detection on the facial image to obtain the eye attention degree of the target student on the virtual model.
Optionally, before the predicting of the program block corresponding to the target virtual model according to the target virtual model and the second student portrait, where the second student portrait is obtained according to the behavior attribute information of the target student, the method further includes:
acquiring an image sequence of the target student in a preset time;
extracting behavior attribute information of the target student according to the image sequence;
and generating a second student portrait of the target student according to the behavior attribute information of the target student.
Optionally, the predicting the program block corresponding to the target virtual model according to the target virtual model and the second student portrait includes:
and inputting the virtual model identifier of the target virtual model and the second student portrait into a preset second prediction model for processing, and outputting the program block corresponding to the target virtual model through the second prediction model.
Optionally, the performing, according to the program block, a simulation programming demonstration on the virtual model includes:
connecting the program blocks through a preset logic connection relation to obtain an operation program, and demonstrating the logic connection process;
and performing simulation operation on the virtual model through the operation program, and demonstrating an operation result.
Optionally, after the performing the simulation programming demonstration on the virtual model according to the program block, the method further includes:
and if a first trigger instruction of the target student for the simulation programming demonstration is received, pushing the physical component position of the virtual model to the target student.
In a second aspect, an embodiment of the present invention provides a physical programming device based on student portrait, the device including:
the first acquisition module is used for acquiring a first student portrait of a target student, and the first student portrait is obtained according to basic information of the target student;
the construction module is used for correspondingly generating a preset number of virtual models according to the first student portrait, and the virtual models are constructed according to the physical model;
the selecting module is used for selecting a target virtual model according to the eye attention degree of the target student on the virtual model;
the corresponding module is used for predicting a program block corresponding to the target virtual model according to the target virtual model and a second student portrait, and the second student portrait is obtained according to the behavior attribute information of the target student;
and the demonstration module is used for carrying out simulation programming demonstration on the virtual model according to the program blocks.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the physical programming method based on student portraits provided by the embodiment of the invention.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the physical programming method based on student portraits provided by the embodiment of the present invention.
In the embodiment of the invention, a first student portrait of a target student is acquired, the first student portrait being obtained according to basic information of the target student; a preset number of virtual models are generated correspondingly according to the first student portrait, the virtual models being constructed according to physical models; a target virtual model is selected according to the eye attention degree of the target student to the virtual models; the program block corresponding to the target virtual model is predicted according to the target virtual model and a second student portrait, the second student portrait being obtained according to the behavior attribute information of the target student; and a simulation programming demonstration is performed on the virtual model according to the program block. By generating, from the first portrait of the target student, several virtual models that the student is likely to find interesting, determining the target virtual model from the student's eye attention, predicting the corresponding program block from the target virtual model and the second student portrait, and demonstrating simulated programming with the program block and the target virtual model, the method deepens a new student's familiarity with the physical components and program blocks used in physical programming and thereby increases the student's interest in physical programming.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the invention, and that other drawings can be derived from them by a person skilled in the art without inventive effort.
FIG. 1 is a flow chart of a physical programming method based on student portraits provided by an embodiment of the invention;
FIG. 2 is a schematic structural diagram of a physical programming device based on student portrait according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of a physical programming method based on student portraits according to an embodiment of the present invention. As shown in fig. 1, the physical programming method based on student portraits includes the following steps:
101. a first student representation of a target student is acquired.
In the embodiment of the invention, the first student portrait is obtained according to the basic information of the target student. The basic information can be filled in by the student or a parent according to a template. The target student is a new student.
In the embodiment of the invention, the physical programming scene is a programming classroom. The programming classroom includes a shelf holding the physical components, a computer for programming, and a display screen for demonstration. The display screen is provided with a camera that can photograph students in front of the screen to obtain their images. The programming classroom is also provided with an imaging device that captures images of the students' activities in the classroom.
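For illustration only (this sketch is not part of the patent text), a first student portrait assembled from template-filled basic information could be represented as a simple record; every field name and the toy feature encoding below are assumptions, shown here in Python.

```python
# Illustrative sketch: a "first student portrait" as a record built from
# template-filled basic information (all field names hypothetical).
from dataclasses import dataclass, field
from typing import List


@dataclass
class FirstStudentPortrait:
    student_id: str
    age: int
    grade: str
    hobbies: List[str] = field(default_factory=list)  # e.g. filled in by a parent

    def to_feature_vector(self) -> List[float]:
        # A toy encoding; a real system would define its own feature schema.
        hobby_vocab = ["cars", "animals", "robots", "music"]
        hobby_flags = [1.0 if h in self.hobbies else 0.0 for h in hobby_vocab]
        return [float(self.age)] + hobby_flags


portrait = FirstStudentPortrait("s001", age=8, grade="grade-2", hobbies=["robots"])
print(portrait.to_feature_vector())  # [8.0, 0.0, 0.0, 1.0, 0.0]
```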
102. And correspondingly generating a preset number of virtual models according to the first student portrait.
In the embodiment of the invention, the virtual model is constructed according to a physical model; specifically, the physical model can be scanned to obtain its virtual model.
The virtual models can be displayed on the display screen, and the target student can observe them there.
Optionally, in the step of correspondingly generating the preset number of virtual models according to the first student portrait, the first student portrait may be input into a preset first prediction model for processing, a preset number of virtual model identifiers are output by the first prediction model, and a corresponding number of virtual models are generated according to the virtual model identifiers.
In the embodiment of the present invention, the first prediction model may be a prediction model constructed based on a convolutional neural network, where the number of output classes of the first prediction model is the number of all virtual models, and each virtual model corresponds to one output class.
Further, each virtual model corresponds to a virtual model identifier, and the virtual model identifier is used as the output class of the first prediction model.
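As a hedged sketch of the first prediction model described above: the text names a CNN-based classifier whose output classes are the virtual model identifiers, and the top predictions give the preset number of candidates. The stand-in below uses a small fully connected network for brevity; the portrait dimension, layer sizes, class count, and preset number are all assumptions, not the patented design.

```python
# Sketch of a first prediction model: portrait features in, one score per
# virtual model identifier out; the top-k classes are the candidate models.
import torch
import torch.nn as nn

NUM_VIRTUAL_MODELS = 50          # one output class per known virtual model (assumed)
PORTRAIT_DIM = 16                # assumed length of the encoded first portrait
PRESET_NUMBER = 3                # how many candidate models to show the student


class FirstPredictionModel(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(PORTRAIT_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_VIRTUAL_MODELS),
        )

    def forward(self, portrait: torch.Tensor) -> torch.Tensor:
        return self.net(portrait)    # raw class scores, one per identifier


model = FirstPredictionModel()
portrait_features = torch.randn(1, PORTRAIT_DIM)   # encoded first portrait
scores = model(portrait_features)
# The "preset number" of virtual model identifiers are the top-k classes.
top_ids = torch.topk(scores, k=PRESET_NUMBER, dim=1).indices.squeeze(0).tolist()
print("candidate virtual model identifiers:", top_ids)
```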
103. And selecting a target virtual model according to the eye attention degree of the target student on the virtual model.
In the embodiment of the invention, the eye attention degree of the target student for each virtual model is obtained by eye tracking of the target student.
Optionally, before the step of selecting the target virtual model according to the eye attention degree of the target student for the virtual model, a facial image of the target student facing the virtual model may be acquired, and gaze detection may be performed on the facial image to obtain the eye attention degree of the target student on the virtual model.
In the embodiment of the invention, the sequence of facial images of the target student facing the virtual models can be captured by the camera arranged on the display screen. The gaze detection can extract the motion trajectory of the pupil feature points through a recurrent convolutional neural network, and the virtual model on which the pupil feature points dwell for the longest time is taken as the target virtual model.
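The dwell-time rule above can be illustrated with a minimal sketch: given a per-frame estimate of which displayed virtual model the pupil is fixated on (the patent attributes those estimates to the network just mentioned; producing them is not shown here), the model with the longest total fixation time is selected. The frame rate and the identifiers are assumptions.

```python
# Dwell-time selection of the target virtual model from per-frame gaze labels.
from collections import Counter
from typing import List, Optional

FRAME_INTERVAL_S = 1 / 25.0   # assumed camera frame rate


def select_target_model(fixated_model_per_frame: List[Optional[int]]) -> int:
    """Each entry is the identifier of the fixated virtual model for one frame,
    or None when the gaze is off-screen."""
    dwell = Counter(m for m in fixated_model_per_frame if m is not None)
    target_id, frames = dwell.most_common(1)[0]
    print(f"model {target_id} watched for {frames * FRAME_INTERVAL_S:.2f} s")
    return target_id


# e.g. three candidate models with identifiers 7, 12 and 31
print(select_target_model([7, 7, 12, 7, None, 7, 31, 7]))   # -> 7
```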
104. And predicting the program block corresponding to the target virtual model according to the target virtual model and the second student portrait.
In the embodiment of the invention, the second student portrait is obtained according to the behavior attribute information of the target student.
Specifically, images of the target student can be acquired by the cameras arranged in the programming classroom to obtain an image sequence of the target student, and the behavior attribute information of the target student is extracted from this image sequence.
Optionally, before the step of predicting the program block corresponding to the target virtual model according to the target virtual model and the second student portrait, an image sequence of the target student within a preset time may be obtained, the behavior attribute information of the target student may be extracted from the image sequence, and the second student portrait of the target student may be generated according to the behavior attribute information.
Specifically, behavior detection can be performed on the image sequence of the target student within the preset time through a behavior recognition model or a temporal action detection model, so as to obtain the behavior attribute information of the target student.
The predetermined time may be a period of time, such as 5 minutes, after the student enters the programming classroom. The behavior attribute information may include walking, alarming, opening, running, squatting, etc.
Different behavior attribute information corresponds to different second student portraits; for example, a lively student may prefer physical programming that involves movement.
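A minimal sketch of turning detected behavior labels into a second student portrait follows; the label set, the single "liveliness" trait, and the counting rule are illustrative assumptions, since the text only states that behavior attribute information extracted from the image sequence feeds the second portrait.

```python
# Toy mapping from detected behaviour labels to a second student portrait.
from collections import Counter
from typing import Dict, List


def build_second_portrait(behavior_labels: List[str]) -> Dict[str, float]:
    counts = Counter(behavior_labels)
    total = max(sum(counts.values()), 1)
    # One example trait: the share of high-motion labels in the observed window.
    lively_share = (counts["running"] + counts["walking"]) / total
    return {"liveliness": lively_share, "stillness": 1.0 - lively_share}


labels = ["walking", "running", "squatting", "walking", "running"]
print(build_second_portrait(labels))   # liveliness 0.8: 4 of 5 labels are high-motion
```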
Optionally, in the step of predicting the program block corresponding to the target virtual model according to the target virtual model and the second student portrait, the virtual model identifier of the target virtual model and the second student portrait may be input into a preset second prediction model for processing, and the program block corresponding to the target virtual model may be output by the second prediction model.
In the embodiment of the present invention, the second prediction model may be a prediction model constructed based on a convolutional neural network, where the number of output classes of the second prediction model is the number of program blocks supported by the target virtual model, and each program block corresponds to one output class.
Further, each program block corresponds to a program block identifier, and the program block identifier is used as the output class of the second prediction model.
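A hedged sketch of the second prediction model follows: it conditions on the target virtual model identifier and the second student portrait and scores the program blocks supported by that model. The embedding size, portrait dimension, block count, and the fully connected head standing in for the CNN named in the text are all assumptions.

```python
# Sketch of a second prediction model: (model identifier, second portrait) in,
# one score per supported program block out.
import torch
import torch.nn as nn

NUM_VIRTUAL_MODELS = 50
NUM_PROGRAM_BLOCKS = 20          # output classes: blocks the model supports (assumed)
PORTRAIT2_DIM = 8                # assumed length of the encoded second portrait


class SecondPredictionModel(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.model_embedding = nn.Embedding(NUM_VIRTUAL_MODELS, 16)
        self.head = nn.Sequential(
            nn.Linear(16 + PORTRAIT2_DIM, 64),
            nn.ReLU(),
            nn.Linear(64, NUM_PROGRAM_BLOCKS),
        )

    def forward(self, model_id: torch.Tensor, portrait: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.model_embedding(model_id), portrait], dim=1)
        return self.head(features)


net = SecondPredictionModel()
target_model_id = torch.tensor([7])                 # identifier chosen via gaze
second_portrait = torch.randn(1, PORTRAIT2_DIM)
block_scores = net(target_model_id, second_portrait)
predicted_block = block_scores.argmax(dim=1).item()
print("predicted program block identifier:", predicted_block)
```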
105. And performing simulation programming demonstration on the virtual model according to the program blocks.
In the embodiment of the present invention, the program blocks may be modularized instructions; for example, a "go straight" program instruction can be packaged as a "go straight" module to form the corresponding program block.
Optionally, the program blocks are connected through a preset logical connection relationship to obtain an operation program, and the logical connection process is demonstrated; the virtual model is then simulated by running the operation program, and the operation result is demonstrated.
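A toy sketch of this step is shown below: program blocks are treated as modular instructions that are chained in a preset logical order into an operation program, which is then run against a virtual model state to demonstrate each step and the final result. The block names and state fields are illustrative only.

```python
# Chaining program blocks into an operation program and simulating it.
from typing import Callable, Dict, List

State = Dict[str, float]
ProgramBlock = Callable[[State], State]


def go_straight(state: State) -> State:        # a "go straight" style block
    return {**state, "x": state["x"] + 1.0}


def turn_left(state: State) -> State:
    return {**state, "heading": (state["heading"] + 90.0) % 360.0}


def connect_blocks(blocks: List[ProgramBlock]) -> Callable[[State], State]:
    """Chain the blocks in their preset logical order into one program."""
    def operation_program(state: State) -> State:
        for block in blocks:
            state = block(state)
            print(f"after {block.__name__}: {state}")   # demonstrate each step
        return state
    return operation_program


program = connect_blocks([go_straight, turn_left, go_straight])
final_state = program({"x": 0.0, "heading": 0.0})
print("simulation result:", final_state)
```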
Optionally, after the simulation programming demonstration is performed on the virtual model according to the program block, if a first trigger instruction of the target student for the simulation programming demonstration is received, the physical component positions of the virtual model are pushed to the target student.
After the simulation programming demonstration is carried out on the virtual model according to the program blocks, a query asking whether to carry out physical programming can be shown on the display screen. If the target student chooses physical programming, a first trigger instruction is generated, and the physical component positions of the target virtual model are pushed to the target student; the physical component position refers to the position of the physical component in the classroom. If the target student chooses not to, a second trigger instruction is generated and the demonstration ends.
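The follow-up interaction can be sketched as a simple branch on the trigger instruction; the instruction codes and the shelf-location lookup below are assumptions made for illustration.

```python
# Pushing physical component positions after the on-screen demonstration.
from typing import Dict, List

COMPONENT_SHELF: Dict[int, List[str]] = {            # hypothetical classroom layout
    7: ["motor: shelf A2", "wheel set: shelf B1", "controller: shelf C3"],
}


def handle_trigger(instruction: str, target_model_id: int) -> None:
    if instruction == "FIRST_TRIGGER":        # student chose physical programming
        for location in COMPONENT_SHELF.get(target_model_id, []):
            print("push to student:", location)
    else:                                     # second trigger: end the demonstration
        print("demonstration finished")


handle_trigger("FIRST_TRIGGER", 7)
```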
In the embodiment of the invention, a first student portrait of a target student is acquired, the first student portrait being obtained according to basic information of the target student; a preset number of virtual models are generated correspondingly according to the first student portrait, the virtual models being constructed according to physical models; a target virtual model is selected according to the eye attention degree of the target student to the virtual models; the program block corresponding to the target virtual model is predicted according to the target virtual model and a second student portrait, the second student portrait being obtained according to the behavior attribute information of the target student; and a simulation programming demonstration is performed on the virtual model according to the program block. By generating, from the first portrait of the target student, several virtual models that the student is likely to find interesting, determining the target virtual model from the student's eye attention, predicting the corresponding program block from the target virtual model and the second student portrait, and demonstrating simulated programming with the program block and the target virtual model, the method deepens a new student's familiarity with the physical components and program blocks used in physical programming and thereby increases the student's interest in physical programming.
It should be noted that the physical programming method based on student portraits provided by the embodiment of the invention can be applied to devices such as smart phones, computers, and servers that can perform physical programming based on student portraits.
Optionally, referring to fig. 2, fig. 2 is a schematic structural diagram of a physical programming device based on student portrait according to an embodiment of the present invention, as shown in fig. 2, the device includes:
a first acquisition module 201, configured to acquire a first student portrait of a target student, where the first student portrait is obtained according to basic information of the target student;
a construction module 202, configured to correspondingly generate a preset number of virtual models according to the first student portrait, where the virtual models are constructed according to a physical model;
the selecting module 203 is configured to select a target virtual model according to the eye attention degree of the target student for the virtual model;
a corresponding module 204, configured to predict a program block corresponding to the target virtual model according to the target virtual model and a second student portrait, where the second student portrait is obtained according to behavior attribute information of the target student;
and the demonstration module 205 is used for carrying out simulation programming demonstration on the virtual model according to the program blocks.
Optionally, the building module 202 includes:
the first processing submodule is used for inputting the first student portrait into a preset first prediction model for processing, and outputting a preset number of virtual model identifiers through the first prediction model;
and the generation sub-module is used for generating a corresponding number of virtual models according to the virtual model identification.
Optionally, before the selecting module 203, the device further includes:
an acquisition sub-module, configured to acquire a facial image of the target student facing the virtual model;
and a detection sub-module, configured to perform gaze detection on the facial image to obtain the eye attention degree of the target student on the virtual model.
Optionally, before the corresponding module 204, the apparatus further includes:
the second acquisition module is used for acquiring an image sequence of the target student in a preset time;
the extraction module is used for extracting the behavior attribute information of the target student according to the image sequence;
and the generation module is used for generating a second student portrait of the target student according to the behavior attribute information of the target student.
Optionally, the corresponding module 204 includes:
and the second processing sub-module is used for inputting the virtual model identifier of the target virtual model and the second student portrait into a preset second prediction model for processing, and outputting the program block corresponding to the target virtual model through the second prediction model.
Optionally, the presentation module 205 includes:
the connection sub-module is used for connecting the program blocks through a preset logic connection relation to obtain an operation program and demonstrating the logic connection process;
and the demonstration sub-module is used for carrying out simulation operation on the virtual model through the operation program and demonstrating the operation result.
Optionally, after the demonstration module 205, the apparatus further includes:
and the pushing module is used for pushing the physical component position of the virtual model to the target student if the first trigger instruction of the target student for the simulation programming demonstration is received.
The embodiment of the invention provides a physical programming device based on student portraits, which can be applied to devices such as smart phones, computers, and servers that can perform physical programming based on student portraits.
In the embodiment of the invention, a first student portrait of a target student is acquired, the first student portrait being obtained according to basic information of the target student; a preset number of virtual models are generated correspondingly according to the first student portrait, the virtual models being constructed according to physical models; a target virtual model is selected according to the eye attention degree of the target student to the virtual models; the program block corresponding to the target virtual model is predicted according to the target virtual model and a second student portrait, the second student portrait being obtained according to the behavior attribute information of the target student; and a simulation programming demonstration is performed on the virtual model according to the program block. By generating, from the first portrait of the target student, several virtual models that the student is likely to find interesting, determining the target virtual model from the student's eye attention, predicting the corresponding program block from the target virtual model and the second student portrait, and demonstrating simulated programming with the program block and the target virtual model, the method deepens a new student's familiarity with the physical components and program blocks used in physical programming and thereby increases the student's interest in physical programming.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. As shown in fig. 3, the electronic device includes: a memory 302, a processor 301, and a computer program for the physical programming method based on student portraits, stored on the memory 302 and executable on the processor 301, wherein:
the processor 301 is configured to call a computer program stored in the memory 302, and perform the following steps:
acquiring a first student portrait of a target student, wherein the first student portrait is obtained according to basic information of the target student;
generating a preset number of virtual models according to the first student portraits, wherein the virtual models are constructed according to a physical model;
selecting a target virtual model according to the eye attention degree of the target student to the virtual model;
predicting a program block corresponding to the target virtual model according to the target virtual model and a second student portrait, wherein the second student portrait is obtained according to the behavior attribute information of the target student;
and carrying out simulation programming demonstration on the virtual model according to the program blocks.
Optionally, the correspondingly generating, by the processor 301, a preset number of virtual models according to the first student portrait includes:
inputting the first student portrait into a preset first prediction model for processing, and outputting a preset number of virtual model identifiers through the first prediction model;
and generating a corresponding number of virtual models according to the virtual model identification.
Optionally, before the selecting of the target virtual model according to the eye attention degree of the target student for the virtual model, the processor 301 further performs:
acquiring a facial image of the target student facing the virtual model;
and performing gaze detection on the facial image to obtain the eye attention degree of the target student on the virtual model.
Optionally, before the predicting, by the processor 301, of the program block corresponding to the target virtual model according to the target virtual model and a second student portrait, where the second student portrait is obtained according to the behavior attribute information of the target student, the method further includes:
acquiring an image sequence of the target student in a preset time;
extracting behavior attribute information of the target student according to the image sequence;
and generating a second student portrait of the target student according to the behavior attribute information of the target student.
Optionally, the predicting, by the processor 301, of the program block corresponding to the target virtual model according to the target virtual model and the second student portrait includes:
and inputting the virtual model identifier of the target virtual model and the second student portrait into a preset second prediction model for processing, and outputting the program block corresponding to the target virtual model through the second prediction model.
Optionally, the performing, by the processor 301, the simulation programming demonstration on the virtual model according to the program block includes:
connecting the program blocks through a preset logic connection relation to obtain an operation program, and demonstrating the logic connection process;
and performing simulation operation on the virtual model through the operation program, and demonstrating an operation result.
Optionally, after the performing, by the processor 301, of the simulation programming demonstration on the virtual model according to the program block, the method further includes:
and if a first trigger instruction of the target student for the simulation programming demonstration is received, pushing the physical component position of the virtual model to the target student.
In the embodiment of the invention, a first student portrait of a target student is acquired, the first student portrait being obtained according to basic information of the target student; a preset number of virtual models are generated correspondingly according to the first student portrait, the virtual models being constructed according to physical models; a target virtual model is selected according to the eye attention degree of the target student to the virtual models; the program block corresponding to the target virtual model is predicted according to the target virtual model and a second student portrait, the second student portrait being obtained according to the behavior attribute information of the target student; and a simulation programming demonstration is performed on the virtual model according to the program block. By generating, from the first portrait of the target student, several virtual models that the student is likely to find interesting, determining the target virtual model from the student's eye attention, predicting the corresponding program block from the target virtual model and the second student portrait, and demonstrating simulated programming with the program block and the target virtual model, the method deepens a new student's familiarity with the physical components and program blocks used in physical programming and thereby increases the student's interest in physical programming.
The embodiment of the invention also provides a computer-readable storage medium on which a computer program is stored. When the computer program is executed by a processor, it implements the physical programming method based on student portraits, or the physical programming method based on student portraits applied to an application terminal, provided by the embodiment of the invention, achieving the same technical effects, which are not repeated here.
Those skilled in the art will appreciate that implementing all or part of the above-described embodiment methods may be accomplished by a computer program stored on a computer-readable storage medium, and the program, when executed, may include the steps of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM) or the like.
The foregoing disclosure is illustrative of the present invention and is not to be construed as limiting the scope of the invention, which is defined by the appended claims.

Claims (10)

1. A physical programming method based on student portrait is characterized by comprising the following steps:
acquiring a first student portrait of a target student, wherein the first student portrait is obtained according to basic information of the target student; the target student is a new student;
generating a preset number of virtual models according to the first student portraits, wherein the virtual models are constructed according to a physical model;
selecting a target virtual model according to the eye attention degree of the target student to the virtual model;
predicting a program block corresponding to the target virtual model according to the target virtual model and a second student portrait, wherein the second student portrait is obtained according to the behavior attribute information of the target student;
and carrying out simulation programming demonstration on the virtual model according to the program blocks.
2. The method of claim 1, wherein the correspondingly generating a preset number of virtual models according to the first student portrait comprises:
inputting the first student portrait into a preset first prediction model for processing, and outputting a preset number of virtual model identifiers through the first prediction model;
and generating a corresponding number of virtual models according to the virtual model identification.
3. The method of claim 1, comprising, prior to said selecting a target virtual model according to the eye attention degree of said target student to the virtual model:
acquiring a face image of the target student facing the virtual model;
and performing gaze detection on the face image to obtain the eye attention degree of the target student on the virtual model.
4. The method of claim 1, wherein prior to the predicting a program block corresponding to the target virtual model according to the target virtual model and a second student portrait, the second student portrait being obtained according to the behavior attribute information of the target student, the method further comprises:
acquiring an image sequence of the target student in a preset time;
extracting behavior attribute information of the target student according to the image sequence;
and generating a second student portrait of the target student according to the behavior attribute information of the target student.
5. The method of claim 2, wherein the predicting a program block corresponding to the target virtual model according to the target virtual model and a second student portrait comprises:
and inputting the virtual model identification of the target virtual model and the second student portrait into a preset second prediction model for processing, and outputting a program block corresponding to the target virtual model through the second prediction model.
6. The method of claim 1, wherein said performing a simulation programming demonstration on said virtual model according to said program block comprises:
connecting the program blocks through a preset logic connection relation to obtain an operation program, and demonstrating the logic connection process;
and performing simulation operation on the virtual model through the operation program, and demonstrating an operation result.
7. The method of claim 1, wherein after said performing a simulation programming demonstration of said virtual model in accordance with said program block, said method further comprises:
and if a first trigger instruction of the target student for the simulation programming demonstration is received, pushing the physical component position of the virtual model to the target student.
8. A physical programming device based on student portraits, the device comprising:
the first acquisition module is used for acquiring a first student portrait of a target student, and the first student portrait is obtained according to basic information of the target student; the target student is a new student;
the construction module is used for correspondingly generating a preset number of virtual models according to the first student portrait, and the virtual models are constructed according to the physical model;
the selecting module is used for selecting a target virtual model according to the eye attention degree of the target student on the virtual model;
the corresponding module is used for predicting a program block corresponding to the target virtual model according to the target virtual model and a second student portrait, and the second student portrait is obtained according to the behavior attribute information of the target student;
and the demonstration module is used for carrying out simulation programming demonstration on the virtual model according to the program blocks.
9. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps in the student representation-based physical programming method of any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, which when executed by a processor, implements the steps of the student representation-based physical programming method of any one of claims 1 to 7.
CN202111673969.1A 2021-12-31 2021-12-31 Physical programming method based on student portrait and related equipment Active CN114419956B (en)

Priority Applications (1)

Application Number: CN202111673969.1A (CN114419956B) · Priority Date: 2021-12-31 · Filing Date: 2021-12-31 · Title: Physical programming method based on student portrait and related equipment

Applications Claiming Priority (1)

Application Number: CN202111673969.1A (CN114419956B) · Priority Date: 2021-12-31 · Filing Date: 2021-12-31 · Title: Physical programming method based on student portrait and related equipment

Publications (2)

Publication Number Publication Date
CN114419956A CN114419956A (en) 2022-04-29
CN114419956B (en) 2024-01-16

Family

ID=81270974

Family Applications (1)

Application Number: CN202111673969.1A (Active; granted as CN114419956B) · Title: Physical programming method based on student portrait and related equipment

Country Status (1)

Country Link
CN (1) CN114419956B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108279878A (en) * 2017-12-20 2018-07-13 中国科学院软件研究所 A kind of material object programming method and system based on augmented reality
CN110189567A (en) * 2019-05-08 2019-08-30 上海飒智智能科技有限公司 A kind of the industrial robot training system and Training Methodology of actual situation combination
US10600335B1 (en) * 2017-09-18 2020-03-24 Architecture Technology Corporation Adaptive team training evaluation system and method
CN110969682A (en) * 2019-11-27 2020-04-07 深圳追一科技有限公司 Virtual image switching method and device, electronic equipment and storage medium
CN111028597A (en) * 2019-12-12 2020-04-17 塔普翊海(上海)智能科技有限公司 Mixed reality foreign language scene, environment and teaching aid teaching system and method thereof
CN112017488A (en) * 2020-08-28 2020-12-01 济南浪潮高新科技投资发展有限公司 AR-based education robot system and learning method
CN112131965A (en) * 2020-08-31 2020-12-25 深圳云天励飞技术股份有限公司 Human body posture estimation method and device, electronic equipment and storage medium
CN112700523A (en) * 2020-12-31 2021-04-23 魔珐(上海)信息科技有限公司 Virtual object face animation generation method and device, storage medium and terminal
JP2021124520A (en) * 2020-01-31 2021-08-30 株式会社ジョリーグッド Image display device, program for image display, and image display method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030174147A1 (en) * 2001-08-13 2003-09-18 David Jaffe Device, system and method for simulating a physical system
US7650589B2 (en) * 2003-08-15 2010-01-19 National Instruments Corporation Signal analysis function blocks and method of use
US8655635B2 (en) * 2011-09-09 2014-02-18 National Instruments Corporation Creating and controlling a model of a sensor device for a computer simulation
US11514507B2 (en) * 2020-03-03 2022-11-29 International Business Machines Corporation Virtual image prediction and generation


Also Published As

Publication number Publication date
CN114419956A (en) 2022-04-29

Similar Documents

Publication Publication Date Title
Oufqir et al. ARKit and ARCore in serve to augmented reality
Sannikov et al. Interactive educational content based on augmented reality and 3D visualization
KR20210110620A (en) Interaction methods, devices, electronic devices and storage media
US11853895B2 (en) Mirror loss neural networks
CN111414506B (en) Emotion processing method and device based on artificial intelligence, electronic equipment and storage medium
CN110992222A (en) Teaching interaction method and device, terminal equipment and storage medium
CN111667420B (en) Image processing method and device
CN113792871A (en) Neural network training method, target identification method, device and electronic equipment
CN112528768A (en) Action processing method and device in video, electronic equipment and storage medium
CN111954087B (en) Method and device for intercepting images in video, storage medium and electronic equipment
CN113127682A (en) Topic presentation method, system, electronic device, and computer-readable storage medium
CN114064974A (en) Information processing method, information processing apparatus, electronic device, storage medium, and program product
EP4171045A1 (en) Production method and device for multimedia works, and computer-readable storage medium
CN114419956B (en) Physical programming method based on student portrait and related equipment
CN112070901A (en) AR scene construction method and device for garden, storage medium and terminal
CN109816744B (en) Neural network-based two-dimensional special effect picture generation method and device
CN116309997A (en) Digital human action generation method, device and equipment
CN112511853B (en) Video processing method and device, electronic equipment and storage medium
CN111860206B (en) Image acquisition method and device, storage medium and intelligent equipment
CN114550545A (en) Course generation method, course display method and device
CN114177621B (en) Data processing method and device
KR102575820B1 (en) Digital actor management system for exercise trainer
CN115100581B (en) Video reconstruction model training method and device based on text assistance
CN112699263B (en) AI-based two-dimensional art image dynamic display method and device
CN115223424A (en) Programming method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant