CN112102436A - Game learning scene making method and system - Google Patents
- Publication number
- CN112102436A (application number CN202011018195.4A)
- Authority
- CN
- China
- Prior art keywords
- line segment
- target
- contour
- node
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/20—Drawing from basic elements, e.g. lines or circles
- G06T11/203—Drawing of straight lines or curves
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
Abstract
The invention provides a game-based learning scene making method and system. The method comprises: displaying an electronic drawing board; receiving lines drawn on the electronic drawing board by a user to form a picture; analyzing the lines in the picture and determining the contour of a target object formed by the lines, wherein the target object comprises any one or more of a target person, a target article and a target creature; outlining the contour of the target object in the picture; generating a preset expression form of the target object according to the contour of the target object; and outputting the preset expression form of the target object.
Description
Technical Field
The invention relates to the technical field of intelligent technology, and in particular to a method and a system for making a game-based learning scene.
Background
At present, applying intelligent technology to the teaching process improves teaching quality, and doing so has become a technology-adoption trend among schools and training institutions of all kinds.
The development of creativity has always been a focus of children's education. One example is associative thinking in drawing: finding, among unordered shapes, the parts that can be joined into a picture. A cloud shaped like a rabbit, for instance, may call a rabbit to mind, and the rabbit can then be drawn by tracing over the cloud. At present, however, such associations depend on a teacher being on site to guide the students; without a teacher, nothing points the students toward them.
Disclosure of Invention
The embodiments of the invention provide a method and a system for making a game-based learning scene, which intelligently generate a game-based learning scene for associative drawing without a teacher teaching on site in person, thereby improving the efficiency of learning associative drawing.
The embodiment of the invention provides a method for making a game-based learning scene, which comprises the following steps:
displaying an electronic drawing board;
receiving lines drawn on the electronic drawing board by a user to form a picture;
analyzing the lines in the picture, and determining the contour of a target object formed by the lines, wherein the target object comprises any one or more of a target person, a target article and a target creature;
outlining the contour of the target object in the picture;
generating a preset expression form of the target object according to the contour of the target object;
and outputting a preset expression form of the target object.
In one embodiment, the analyzing the lines in the drawing to determine the outline of the target object formed by the lines comprises:
determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit, and the target line segment units in the preset direction comprise a first part of target line segment units forming a first closed contour;
determining a first target contour matched with the first closed contour in a contour library according to the contour library in which respective contours of a plurality of objects are prestored and the first closed contour;
when a first target contour matching the first closed contour exists in the contour library, the first closed contour is regarded as the contour of the target object.
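The contour-matching step above is not specified algorithmically in this summary. As one hedged illustration in Python, a drawn closed contour could be normalized and compared against the pre-stored contour library with a simple symmetric nearest-point distance; the function names, the metric, and the threshold below are all assumptions, not the patent's implementation:

```python
# Sketch of matching a closed contour against a pre-stored contour library.
# Contours are point lists; the metric and threshold are illustrative choices.

def normalize(points):
    """Translate to the centroid and scale to unit size."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    scale = max(max(abs(x), abs(y)) for x, y in pts) or 1.0
    return [(x / scale, y / scale) for x, y in pts]

def distance(a, b):
    """Symmetric mean nearest-point distance between two contours."""
    def one_way(p, q):
        return sum(min(((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
                       for x2, y2 in q) for x1, y1 in p) / len(p)
    return max(one_way(a, b), one_way(b, a))

def match_contour(closed, library, threshold=0.3):
    """Return the name of the best-matching library contour, or None."""
    closed = normalize(closed)
    name, shape = min(library.items(),
                      key=lambda kv: distance(closed, normalize(kv[1])))
    return name if distance(closed, normalize(shape)) <= threshold else None

library = {"bow": [(0, 0), (2, 0), (2, 1), (0, 1)],     # toy stand-ins for
           "bird": [(0, 0), (1, 2), (2, 0)]}            # pre-stored contours
drawn = [(10, 10), (14, 10), (14, 12), (10, 12)]        # rectangle-ish sketch
print(match_contour(drawn, library))  # bow
```

In practice a shape descriptor such as Hu moments (for example OpenCV's `cv2.matchShapes`) would be a more robust choice for this lookup; the pure-Python metric above merely keeps the sketch self-contained.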
In one embodiment, the determining a target contour in the contour library matching the closed contour according to a contour library pre-storing respective contours of a plurality of objects and the closed contour includes:
determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit; the target line segment units in the preset direction comprise a first part of target line segment units forming a closed contour, and a second part of target line segment units, each of which is a single-node line segment unit sharing a node with at least one target line segment unit of the first part;
regarding the end point or starting point of the line segment in each single-node line segment unit of the second part as a free node;
connecting, along the preset direction, the free node of each single-node line segment unit of the second part with the head node of that unit's next target line segment unit by a line of a preset shape, so that all the target line segment units in the preset direction form a second closed contour;
determining a second target contour matched with the second closed contour in the contour library according to a contour library pre-stored with respective contours of a plurality of objects and the second closed contour;
the second target contour is considered as a contour of the target object.
In one embodiment, the line of the preset shape includes a curved line or a straight line.
In one embodiment, the generating a preset expression form of the target object according to the contour of the target object includes:
acquiring a pre-stored target object picture corresponding to the first target contour;
the outputting of the preset expression form of the target object comprises:
overlaying the target object picture on the lines corresponding to the contour of the target object in the picture to form a covered picture;
and outputting the covered picture.
In one embodiment, the generating a preset expression form of the target object according to the contour of the target object includes:
acquiring a pre-stored target object picture corresponding to the second target contour;
the outputting of the preset expression form of the target object comprises:
overlaying the target object picture on the lines corresponding to the contour of the target object in the picture to form a covered picture;
and outputting the covered picture.
The embodiment of the invention provides a game learning scene making system, which comprises:
the display module is used for displaying an electronic drawing board;
the receiving module is used for receiving lines drawn on the electronic drawing board by a user to form a picture;
the analysis module is used for analyzing the lines in the picture and determining the contour of a target object formed by the lines, wherein the target object comprises any one or more of a target person, a target article and a target creature;
a delineation module for delineating the contour of the target object in the drawing;
the production module is used for generating a preset expression form of the target object according to the contour of the target object;
and the output module is used for outputting the preset expression form of the target object.
In one embodiment, the analysis module is further configured to:
determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit, and the target line segment units in the preset direction comprise a first part of target line segment units forming a first closed contour;
determining a first target contour matched with the first closed contour in a contour library according to the contour library in which respective contours of a plurality of objects are prestored and the first closed contour;
the first target contour is considered as a contour of the target object.
In one embodiment, the determining a target contour in the contour library matching the closed contour according to a contour library pre-storing respective contours of a plurality of objects and the closed contour includes:
determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit; the target line segment units in the preset direction comprise a first part of target line segment units forming a closed contour, and a second part of target line segment units, each of which is a single-node line segment unit sharing a node with at least one target line segment unit of the first part;
regarding the end point or starting point of the line segment in each single-node line segment unit of the second part as a free node;
connecting, along the preset direction, the free node of each single-node line segment unit of the second part with the head node of that unit's next target line segment unit by a line of a preset shape, so that all the target line segment units in the preset direction form a second closed contour;
determining a second target contour matched with the second closed contour in the contour library according to a contour library pre-stored with respective contours of a plurality of objects and the second closed contour;
the second target contour is considered as a contour of the target object.
In one embodiment, the generating a preset expression form of the target object according to the contour of the target object includes:
acquiring a pre-stored target object picture corresponding to the first target contour;
the outputting of the preset expression form of the target object comprises:
overlaying the target object picture on the lines corresponding to the contour of the target object in the picture to form a covered picture;
and outputting the covered picture.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
FIG. 1 is a flow chart of a method for creating a game-based learning scenario according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a picture in a method for making a game-based learning scene according to an embodiment of the present invention;
FIG. 3 is another schematic diagram of a picture in a method for making a game-based learning scene according to an embodiment of the present invention;
FIG. 4 is a further schematic diagram of a picture in a method for making a game-based learning scene according to an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
The embodiment of the invention provides a method for making a game-based learning scene, which comprises the following steps of S1-S6:
and step S1, displaying an electronic drawing board.
Step S2, receiving lines drawn on the electronic drawing board by a user (e.g., a student), and forming a drawing.
Step S3, analyzing the lines in the picture, and determining the contour of a target object formed by the lines, wherein the target object comprises any one or more of a target person, a target article and a target creature.
Step S4, outlining the contour of the target object in the picture.
And step S5, generating a preset expression form of the target object according to the contour of the target object.
And step S6, outputting the preset expression form of the target object.
The beneficial effects of the above technical scheme are as follows: the scheme analyzes the lines drawn by the user, intelligently identifies the contours of the target objects contained in those lines, and finally generates and outputs the target objects in a preset expression form. A game-based drawing-learning scene is thus provided to the user by intelligent means: no teacher needs to be present in person to show the user how to associate and draw, and both the efficiency of associative teaching and the efficiency of learning associative drawing are improved.
In one embodiment, the step S3 "analyzing the lines in the drawing to determine the outline of the target object formed by the lines" includes steps S31-S35:
step S31, determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
step S32, determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment (for example, in fig. 3 a black circle represents a node, i.e., an intersection point of lines; A and B are two nodes, and a line segment runs between them); the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line (for example, in fig. 3 the point F is a free end point and E-F is a single-node line segment unit);
step S33, identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit, and the target line segment units in the preset direction comprise a first part of target line segment units forming a first closed contour;
step S34, determining a first target contour matched with the first closed contour in the contour library according to a contour library pre-stored with respective contours of a plurality of objects and the first closed contour;
and step S35, when a first target contour matched with the first closed contour exists in the contour library, regarding the first closed contour as the contour of the target object.
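Step S31's node detection can be sketched as follows, under the assumption that each drawn line is approximated by straight strokes so that a node is any crossing point of two strokes; all names here are illustrative, not the patent's:

```python
# Sketch of step S31: nodes are pairwise crossing points of drawn strokes.

def intersect(p1, p2, p3, p4):
    """Return the intersection point of segments p1-p2 and p3-p4, or None."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if d == 0:
        return None  # parallel strokes never form a node
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if 0 <= t <= 1 and 0 <= u <= 1:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None  # the infinite lines cross outside the strokes

def find_nodes(strokes):
    """All pairwise crossing points of the strokes (the nodes of step S31)."""
    nodes = []
    for i in range(len(strokes)):
        for j in range(i + 1, len(strokes)):
            p = intersect(*strokes[i], *strokes[j])
            if p is not None:
                nodes.append(p)
    return nodes

strokes = [((0, 0), (4, 4)), ((0, 4), (4, 0))]  # an X shape: one crossing
print(find_nodes(strokes))  # [(2.0, 2.0)]
```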
For example, as shown in fig. 2, the preset direction is counterclockwise and the nodes are L, M, N, O, P, Q, R and S. The segments L-M, M-N, N-O, O-P, P-Q, Q-R and R-S are all dual-node line segment units; connected head to tail along the counterclockwise direction, these dual-node line segment units form a closed contour. A contour matching this closed contour, namely the contour of a bow, can be found in the contour library, so the closed contour is regarded as the contour of the target object.
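The head-to-tail traversal of steps S32-S33 can be sketched as follows, under the simplifying assumption that the dual-node line segment units are already given as (head, tail) node pairs and that the preset direction fixes their chaining order; the names are illustrative, not the patent's:

```python
# Sketch of steps S32-S33: chain dual-node line segment units head to tail
# and report the closed contour, if the chain returns to its starting node.

def find_closed_contour(units):
    """Walk dual-node line segment units head-to-tail; return the ordered
    node cycle if the tail of the last unit meets the head of the first."""
    if not units:
        return None
    # Index units by their head node so each tail can find its successor.
    by_head = {head: (head, tail) for head, tail in units}
    start = units[0][0]
    path, node = [start], units[0][1]
    while node != start:
        if node in path or node not in by_head:
            return None  # open chain or self-crossing: no closed contour
        path.append(node)
        node = by_head[node][1]
    return path  # closed: tail of the last unit is the head of the first

# Counterclockwise chain L -> M -> N -> O -> L (cf. the nodes of Fig. 2).
units = [("L", "M"), ("M", "N"), ("N", "O"), ("O", "L")]
print(find_closed_contour(units))  # ['L', 'M', 'N', 'O']
```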
Accordingly, the aforementioned step S5 "generating the preset expression form of the target object according to the contour of the target object" may be implemented as follows:
acquiring a pre-stored target object picture corresponding to the first target contour;
the aforementioned step S6 "outputting the preset expression form of the target object" may be implemented as:
overlaying the target object picture on the lines corresponding to the contour of the target object in the picture to form a covered picture; and outputting the covered picture.
In one embodiment, determining a target contour in a contour library matching a closed contour according to a contour library and the closed contour, wherein the contour library is pre-stored with respective contours of a plurality of objects, includes:
step A1, determining all nodes in the picture, wherein a node is an intersection point of at least two lines;
step A2, determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
step A3, identifying all target line segment units in the picture along the preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit; the target line segment units in the preset direction comprise a first part of target line segment units forming a closed contour, and a second part of target line segment units, each of which is a single-node line segment unit sharing a node with at least one target line segment unit of the first part;
step A4, regarding the end point or starting point of the line segment in each single-node line segment unit of the second part as a free node;
step A5, connecting, along the preset direction, the free node of each single-node line segment unit of the second part with the head node of that unit's next target line segment unit by a line of a preset shape, so that all the target line segment units in the preset direction form a second closed contour; the line of the preset shape comprises a curved line or a straight line;
step A6, determining a second target contour matched with a second closed contour in a contour library according to the contour library and the second closed contour, wherein the contour library is pre-stored with respective contours of a plurality of objects;
step A7, when a second target contour matching the second closed contour exists in the contour library, regarding the second closed contour as the contour of the target object.
For example, as shown in FIG. 3, E-F and E-G are both single-node line segment units. According to steps A4 and A5, F is the free node of E-F, G is the free node of E-G, and E is the head node of E-G, the next line segment unit after E-F, so a line is used to connect F to E; similarly, a line connects G to E, finally forming the structure shown in FIG. 4. In FIG. 4 the preset direction is counterclockwise and the nodes are, in order, A, B, C, D, E, F, G, H, I, J and K; the contour formed is a closed contour, and the target object corresponding to it is a bird.
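Steps A4-A5 can be sketched in the same spirit: the chain is closed by inserting straight segments (one of the preset shapes) from each free node to the head node of the following unit. The helper below is an assumption-laden illustration, not the patent's implementation; the point names follow the E/F/G example of Fig. 3:

```python
# Sketch of steps A4-A5: bridge each free node to the head of the next unit
# with a straight closing segment so the chain becomes head-to-tail closed.

def close_contour(units, free_ends):
    """units: ordered (head, tail) node pairs in the preset direction.
    free_ends: nodes that are dangling end points of single-node units.
    Returns the unit list with straight closing segments inserted."""
    closed = []
    n = len(units)
    for i, (head, tail) in enumerate(units):
        closed.append((head, tail))
        next_head = units[(i + 1) % n][0]
        if tail != next_head and tail in free_ends:
            closed.append((tail, next_head))  # straight bridging segment
    return closed

# E-F and E-G are single-node units: F and G are free nodes (step A4).
units = [("E", "F"), ("E", "G")]
print(close_contour(units, free_ends={"F", "G"}))  # F-E and G-E inserted
```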
Accordingly, the aforementioned step S5 "generating the preset expression form of the target object according to the contour of the target object" may be implemented as follows:
acquiring a pre-stored target object picture corresponding to the second target contour;
the aforementioned step S6 "outputting the preset expression form of the target object" may be implemented as:
overlaying the target object picture on the lines corresponding to the contour of the target object in the picture to form a covered picture; and outputting the covered picture.
The beneficial effects of the above technical scheme are as follows: the target object picture is overlaid on the lines corresponding to the contour of the target object, forming a covered picture that is then output. The user can thus see the target object formed from the lines he or she drew, lines drawn at random finally become a finished drawing, and a game-based learning scene is provided to the user intelligently.
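The covering operation of steps S5-S6 can be illustrated with a tiny pure-Python raster: every pixel inside the matched contour is replaced by pixels of the pre-stored object picture (here a single fill value stands in for the picture). The even-odd point-in-polygon test and all names are assumptions for illustration only:

```python
# Illustrative sketch of steps S5-S6: cover the pixels inside the matched
# contour with the stored object picture; a 6x6 list-of-lists raster stands
# in for real images.

def inside(x, y, poly):
    """Even-odd rule point-in-polygon test."""
    hit = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                hit = not hit
    return hit

def cover(picture, contour, object_pixel):
    """Return a copy of `picture` with every pixel whose center lies inside
    `contour` replaced by `object_pixel` (the pre-stored object picture)."""
    return [[object_pixel if inside(x + 0.5, y + 0.5, contour) else p
             for x, p in enumerate(row)]
            for y, row in enumerate(picture)]

blank = [["." for _ in range(6)] for _ in range(6)]
square = [(1, 1), (5, 1), (5, 5), (1, 5)]   # closed contour of the target
covered = cover(blank, square, "#")
print("".join(covered[2]))  # prints ".####."
```

With real images, the same effect would typically be achieved by rasterizing the contour into a mask and compositing the object picture through it (for example Pillow's `ImageDraw.polygon` followed by `Image.paste` with a mask).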
Corresponding to the game-based learning scene making method provided by the embodiments of the invention, an embodiment of the invention provides a game-based learning scene making system, which comprises:
the display module is used for displaying an electronic drawing board;
the receiving module is used for receiving lines drawn on the electronic drawing board by a user to form a picture;
the analysis module is used for analyzing the lines in the picture and determining the contour of a target object formed by the lines, wherein the target object comprises any one or more of a target person, a target article and a target creature;
a delineation module for delineating the contour of the target object in the drawing;
the production module is used for generating a preset expression form of the target object according to the contour of the target object;
and the output module is used for outputting the preset expression form of the target object.
In one embodiment, the analysis module is further configured to:
determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit, and the target line segment units in the preset direction comprise a first part of target line segment units forming a first closed contour;
determining a first target contour matched with the first closed contour in a contour library according to the contour library in which respective contours of a plurality of objects are prestored and the first closed contour;
the first target contour is considered as a contour of the target object.
In one embodiment, the determining a target contour in the contour library matching the closed contour according to a contour library pre-storing respective contours of a plurality of objects and the closed contour includes:
determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit; the target line segment units in the preset direction comprise a first part of target line segment units forming a closed contour, and a second part of target line segment units, each of which is a single-node line segment unit sharing a node with at least one target line segment unit of the first part;
regarding the end point or starting point of the line segment in each single-node line segment unit of the second part as a free node;
connecting, along the preset direction, the free node of each single-node line segment unit of the second part with the head node of that unit's next target line segment unit by a line of a preset shape, so that all the target line segment units in the preset direction form a second closed contour;
determining a second target contour matched with the second closed contour in the contour library according to a contour library pre-stored with respective contours of a plurality of objects and the second closed contour;
the second target contour is considered as a contour of the target object.
In one embodiment, the generating a preset expression form of the target object according to the contour of the target object includes:
acquiring a pre-stored target object picture corresponding to the first target contour;
the outputting of the preset expression form of the target object comprises:
overlaying the target object picture on the lines corresponding to the contour of the target object in the picture to form a covered picture;
and outputting the covered picture.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. A game learning scene making method is characterized by comprising the following steps:
displaying an electronic drawing board;
receiving lines drawn on the electronic drawing board by a user to form a picture;
analyzing the lines in the picture, and determining the contour of a target object formed by the lines, wherein the target object comprises any one or more of a target person, a target article and a target creature;
outlining the contour of the target object in the picture;
generating a preset expression form of the target object according to the contour of the target object;
and outputting a preset expression form of the target object.
2. The method of claim 1,
the analyzing the lines in the picture to determine the outline of the target object formed by the lines comprises the following steps:
determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit, and the target line segment units in the preset direction comprise a first part of target line segment units forming a first closed contour;
determining a first target contour matched with the first closed contour in a contour library according to the contour library in which respective contours of a plurality of objects are prestored and the first closed contour;
when a first target contour matching the first closed contour exists in the contour library, the first closed contour is regarded as the contour of the target object.
3. The method of claim 1,
determining a target contour matched with the closed contour in the contour library according to a contour library pre-stored with respective contours of a plurality of objects and the closed contour, including:
determining all nodes in the picture, wherein the nodes are the intersection points of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise dual-node line segment units and/or single-node line segment units; a dual-node line segment unit consists of two nodes and the line segment connecting them, with no other node on that segment; the line segment is a portion of one of the drawn lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture along a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit; the target line segment units in the preset direction comprise a first part of target line segment units forming a closed contour, and a second part of target line segment units, each of which is a single-node line segment unit sharing a node with at least one target line segment unit of the first part;
regarding the end point or starting point of the line segment in each single-node line segment unit of the second part as a free node;
connecting, along the preset direction, the free node of each single-node line segment unit of the second part with the head node of that unit's next target line segment unit by a line of a preset shape, so that all the target line segment units in the preset direction form a second closed contour;
determining, from the contour library pre-storing respective contours of the plurality of objects and from the second closed contour, a second target contour in the contour library that matches the second closed contour;
taking the second target contour as the contour of the target object.
4. The method of claim 3, wherein the line of the preset shape comprises a curved line or a straight line.
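Claims 3 and 4 describe completing an open chain: the free node left dangling by a single-node line segment unit is joined back to the head node of the next unit with a line of a preset shape (here, a straight segment, the simpler of the two shapes claim 4 allows). A minimal sketch, with each unit represented as a `(head, tail)` node pair; the helper name is illustrative, not from the patent.

```python
def close_with_straight_line(chain):
    """chain: list of (head, tail) node pairs that are connected
    head-to-tail but may not close. Returns a new chain with one
    straight closing segment appended from the final free node back
    to the first head node, yielding a "second closed contour"."""
    free_node = chain[-1][1]   # dangling tail of the last unit
    first_head = chain[0][0]   # head node the contour must return to
    if free_node == first_head:
        return list(chain)     # already closed; nothing to add
    return list(chain) + [(free_node, first_head)]
```

Applied to two segments of a triangle, this appends the third, closing edge; applied to an already-closed chain, it leaves the chain unchanged.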
5. The method of claim 2, wherein
generating the preset expression form of the target object according to the contour of the target object comprises:
acquiring a pre-stored target object picture corresponding to the first target contour;
and outputting the preset expression form of the target object comprises:
covering the target object picture over the lines corresponding to the contour of the target object in the picture to form a covered picture;
and outputting the covered picture.
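The covering step in claims 5 and 6 amounts to replacing the pixels enclosed by the matched contour with the corresponding pixels of the pre-stored target object picture. A toy sketch under stated assumptions: pictures are same-sized 2-D character grids, and an even-odd ray-casting test stands in for real rasterisation; `point_in_polygon` and `cover_contour` are illustrative names, not from the patent.

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule: a point is inside when a ray cast to the right
    crosses the polygon's edges an odd number of times."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def cover_contour(picture, polygon, target):
    """Replace every cell of `picture` whose centre falls inside
    `polygon` with the matching cell of `target` (same size),
    leaving the rest of the drawing untouched."""
    covered = [row[:] for row in picture]
    for y, row in enumerate(covered):
        for x in range(len(row)):
            if point_in_polygon(x + 0.5, y + 0.5, polygon):
                row[x] = target[y][x]
    return covered
```

Sampling each cell at its centre (`x + 0.5, y + 0.5`) keeps cells on the polygon's boundary from being counted twice by adjacent edges.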
6. The method of claim 3, wherein
generating the preset expression form of the target object according to the contour of the target object comprises:
acquiring a pre-stored target object picture corresponding to the second target contour;
and outputting the preset expression form of the target object comprises:
covering the target object picture over the lines corresponding to the contour of the target object in the picture to form a covered picture;
and outputting the covered picture.
7. A game learning scene making system, comprising:
a display module for displaying an electronic drawing board;
a receiving module for receiving lines drawn on the electronic drawing board by a user to form a picture;
an analysis module for analyzing the lines in the picture and determining the contour of a target object formed by the lines, wherein the target object comprises any one or more of a target character, a target article and a target organism;
a delineation module for delineating the contour of the target object in the picture;
a production module for generating a preset expression form of the target object according to the contour of the target object;
and an output module for outputting the preset expression form of the target object.
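Claim 7's module layout can be read as a simple pipeline: the receiving module accumulates drawn lines, the analysis module matches them against the contour library, and the production/output modules turn the match into a result. The class below is a toy subset of that pipeline (display and delineation omitted); its names mirror the claim but its bodies are illustrative placeholders, not the patented implementation.

```python
class SceneMaker:
    """Toy pipeline mirroring the receiving, analysis, production and
    output modules of claim 7."""

    def __init__(self, contour_library):
        self.contour_library = contour_library  # pre-stored contours
        self.lines = []                         # lines drawn by the user

    def receive(self, line):
        """Receiving module: accept one drawn line (a list of nodes)."""
        self.lines.append(line)

    def analyze(self):
        """Analysis module: find a pre-stored contour whose nodes all
        appear among the drawing's nodes (a toy matching rule)."""
        nodes = frozenset(pt for line in self.lines for pt in line)
        for name, contour in self.contour_library.items():
            if frozenset(contour) <= nodes:
                return name, contour
        return None, None

    def output(self):
        """Production + output modules: report the matched object."""
        name, _ = self.analyze()
        return f"matched: {name}" if name else "no match"
```

Feeding the three edges of a triangle through `receive` and calling `output` reports the matched library entry; an empty drawing reports no match.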
8. The system of claim 7, wherein the analysis module is further configured to:
determine all nodes in the picture, wherein a node is an intersection point of at least two lines;
determine line segment units in the picture, wherein the line segment units comprise two-node line segment units and/or single-node line segment units; a two-node line segment unit consists of two nodes and the line segment connecting the two nodes, with no other node on that line segment, the line segment being a section of one of the lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identify all target line segment units in the picture in a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit, and all the target line segment units in the preset direction comprise a first part of target line segment units forming a first closed contour;
determine, from a contour library pre-storing respective contours of a plurality of objects and from the first closed contour, a first target contour in the contour library that matches the first closed contour;
and take the first target contour as the contour of the target object.
9. The system of claim 8, wherein
determining, from a contour library pre-storing respective contours of a plurality of objects and from the closed contour, a target contour in the contour library that matches the closed contour comprises:
determining all nodes in the picture, wherein a node is an intersection point of at least two lines;
determining line segment units in the picture, wherein the line segment units comprise two-node line segment units and/or single-node line segment units; a two-node line segment unit consists of two nodes and the line segment connecting the two nodes, with no other node on that line segment, the line segment being a section of one of the lines; a single-node line segment unit is a line segment of which one end is a node and the other end is a free end point or starting point of a line;
identifying all target line segment units in the picture in a preset direction, wherein the tail node of each target line segment unit is the head node of the next target line segment unit, and all the target line segment units in the preset direction comprise a second part of target line segment units that do not form a closed contour, the second part of target line segment units comprising a single-node line segment unit sharing a node with at least one other target line segment unit;
regarding the end point or starting point of the line segment in each single-node line segment unit in the second part of target line segment units as a free node;
connecting, along the preset direction, the free node of each single-node line segment unit in the second part of target line segment units to the head node of the next target line segment unit via a line of a preset shape, so that all the target line segment units in the preset direction form a second closed contour;
determining, from the contour library pre-storing respective contours of the plurality of objects and from the second closed contour, a second target contour in the contour library that matches the second closed contour;
taking the second target contour as the contour of the target object.
10. The system of claim 8, wherein
generating the preset expression form of the target object according to the contour of the target object comprises:
acquiring a pre-stored target object picture corresponding to the first target contour;
and outputting the preset expression form of the target object comprises:
covering the target object picture over the lines corresponding to the contour of the target object in the picture to form a covered picture;
and outputting the covered picture.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011018195.4A CN112102436A (en) | 2020-09-24 | 2020-09-24 | Game learning scene making method and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112102436A (en) | 2020-12-18 |
Family
ID=73756103
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011018195.4A Pending CN112102436A (en) | 2020-09-24 | 2020-09-24 | Game learning scene making method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112102436A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||