CN113426101B - Teaching method, device, equipment and computer readable storage medium - Google Patents

Teaching method, device, equipment and computer readable storage medium

Info

Publication number
CN113426101B
Authority
CN
China
Prior art keywords
teaching
video
background area
character string
string group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110695740.1A
Other languages
Chinese (zh)
Other versions
CN113426101A (en)
Inventor
黄岩
蒋儒
陈伟
来晓阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Original Assignee
China Mobile Communications Group Co Ltd
MIGU Interactive Entertainment Co Ltd
MIGU Culture Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd, MIGU Interactive Entertainment Co Ltd, MIGU Culture Technology Co Ltd
Priority to CN202110695740.1A
Publication of CN113426101A
Application granted
Publication of CN113426101B


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/85 Providing additional services to players
    • A63F13/86 Watching games played by other players
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/33 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections
    • A63F13/335 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/30 Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
    • A63F13/35 Details of game servers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/74 Browsing; Visualisation therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 Electrically-operated educational appliances
    • G09B5/02 Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/40 Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterised by details of platform network
    • A63F2300/407 Data transfer via internet
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Abstract

The invention discloses a teaching method, a device, equipment and a computer readable storage medium, wherein the teaching method comprises the following steps: acquiring learner information of a learning terminal according to a teaching request sent by the learning terminal; determining a teaching video matched with the learner information from a teaching video library; determining a play starting position of the teaching video according to the learning video of the learning terminal; and superimposing the teaching video on the learning video, and playing from the play starting position of the teaching video. According to the invention, the teaching video is superimposed on the video stream sent to the learning terminal, so that a learner can watch the learning video and the teaching video synchronously on the learning terminal and learn from the operation mode of the teaching person identified in the teaching video, thereby effectively improving the learner's game operation capability.

Description

Teaching method, device, equipment and computer readable storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a teaching method, apparatus, device, and computer readable storage medium.
Background
At present, games have become an important part of people's daily lives. People can relieve and release stress by playing games. During a game, a user can learn the actual operation of the game by obtaining teaching videos provided for the game, so as to obtain a better game experience.
The existing game teaching mode mainly adopts voice teaching: an actual operation mode is described by voice, and the game operation result corresponding to that operation mode is shown by video. The learner can only start the game after watching the teaching video, and cannot practice in a similar scene immediately after learning the taught operation mode. As a result, the learner cannot effectively master the various complex operations and operation timings in the game, the learner's game level cannot be effectively improved, and the game experience is poor.
Disclosure of Invention
The invention mainly aims to provide a teaching method, a device, equipment and a computer readable storage medium, and aims to solve the technical problem that the existing game teaching mode cannot effectively guide learners.
In order to achieve the above object, the present invention provides a teaching method, comprising the steps of:
acquiring learner information of a learning terminal according to a teaching request sent by the learning terminal;
determining teaching videos matched with the learner information from a teaching video library;
determining a play starting position of the teaching video according to the learning video of the learning terminal;
And superposing the teaching video in the learning video, and playing from the playing starting position of the teaching video.
Optionally, the step of determining the play start position of the teaching video according to the learning video of the learning terminal includes:
converting the continuous frame images of the learning videos into a first characteristic character string group, and acquiring a second characteristic character string group corresponding to the continuous frame images of each teaching video from the teaching video library;
matching the first characteristic character string group with all the second characteristic character string groups, and determining a matching characteristic character string group from the second characteristic character string groups meeting the matching requirement;
and determining a corresponding teaching video according to the matched characteristic character string group, and taking the playing position of the continuous frame image corresponding to the matched characteristic character string group as the playing starting position of the teaching video.
Optionally, the step of converting the continuous frame images of the learning video of the learning terminal into a first feature string group, and obtaining a second feature string group corresponding to the continuous frame images of each teaching video from the teaching video library includes:
dividing a plurality of background areas from a learning video of the learning terminal, and converting continuous frame images of each background area into a corresponding first background area characteristic character string group;
Dividing a corresponding background area from each teaching video in the teaching video library, and converting continuous frame images of the background area of each teaching video into a corresponding second background area characteristic character string group;
the step of matching the first characteristic string group with all the second characteristic string groups and determining the matched characteristic string group from the second characteristic string groups meeting the matching requirement comprises the following steps:
selecting a background area as a background area to be matched, sequentially matching a first background area characteristic string group corresponding to the background area to be matched with a second background area characteristic string group, and judging whether the second background area characteristic string group meeting the matching requirement exists or not; the matching requirement is that the matching degree of the first background area characteristic character string group and the second background area characteristic character string group reaches a preset matching threshold;
when a second background area characteristic character string group meeting the matching requirement exists, determining the matching characteristic character string group from the second background area characteristic character string group meeting the matching requirement;
when the second background area characteristic character string group meeting the matching requirement does not exist, updating the background area to be matched according to the preset background area selection sequence, and returning to the execution step: and sequentially matching the first background area characteristic character string group corresponding to the background area to be matched with the second background area characteristic character string group.
Optionally, the step of superimposing the teaching video on the learning video and playing the teaching video from a play start position of the teaching video includes:
determining delay time according to a preset margin rule;
and superposing the teaching video in the learning video, carrying out delay processing on the playing starting position of the teaching video according to the delay time length, and playing from the playing starting position after the delay processing of the teaching video.
Optionally, the step of determining the delay time length according to a preset margin rule includes:
when the play starting position of the teaching video is determined, acquiring corresponding operation duration;
and generating a delay time according to the operation time and a preset initial delay value.
Optionally, after the step of superimposing the teaching video on the learning video and playing the teaching video from the playing start position of the teaching video, the method further includes:
acquiring operation parameters of the learning terminal at every preset interval;
determining a video adjustment amount according to the operation parameter and a preset advance adjustment algorithm;
and adjusting the playing progress of the teaching video according to the video adjustment amount.
Optionally, the step of determining the teaching video matching the learner information from the teaching video library includes:
Acquiring the information of the teaching person corresponding to each teaching video from the teaching video library;
respectively calculating matching parameters of each piece of teaching person information and the learner information according to a preset matching algorithm;
and determining the teaching video corresponding to the matching parameter with the highest matching degree as the teaching video matched with the learner information.
In addition, in order to achieve the above object, the present invention also provides a teaching device, including:
the learning terminal comprises an acquisition unit, a learning unit and a processing unit, wherein the acquisition unit is used for acquiring learner information of the learning terminal according to a teaching request sent by the learning terminal;
the matching unit is used for determining teaching videos matched with the learner information from a teaching video library;
the positioning unit is used for determining the play starting position of the teaching video according to the learning video of the learning terminal;
and the playing unit is used for superposing the teaching video in the learning video and playing the teaching video from the playing starting position of the teaching video.
In addition, in order to achieve the above object, the present invention also provides a teaching device, which includes a memory, a processor, and a teaching program stored in the memory and capable of running on the processor, and the teaching program when executed by the processor implements the steps of the teaching method as described above.
In addition, in order to achieve the above object, the present invention also provides a computer readable storage medium, on which a teaching program is stored, which when executed by a processor, implements the steps of the teaching method as described above.
According to the invention, by receiving the teaching request sent by the learning terminal and acquiring the learner information of the learning terminal, a plurality of matched teaching videos can be determined from the teaching video library according to the learner information, and the best-matched teaching video is selected from them. After the corresponding learning progress is determined according to the learner's current learning video, the play progress of the teaching video can be determined accordingly as the play starting position for playing to the learner. When the video stream is sent to the learning terminal, the teaching video can be superimposed on the video stream, so that the learner can watch the learning video and the teaching video synchronously on the learning terminal and learn from the operation mode of the teaching person identified in the teaching video, thereby effectively improving the learner's game operation capability.
Drawings
FIG. 1 is a schematic diagram of a terminal/device structure of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of the teaching method of the present invention;
FIG. 3 is a flow chart of a second embodiment of the teaching method of the present invention;
FIG. 4 is a schematic flow chart of a sixth embodiment of the teaching method of the present invention;
FIG. 5 is a flowchart of a seventh embodiment of the teaching method of the present invention;
Fig. 6 is a schematic diagram of a unit of the teaching device of the present invention.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1, fig. 1 is a schematic diagram of a terminal structure of a hardware running environment according to an embodiment of the present invention.
The terminal of the embodiment of the invention is teaching equipment.
As shown in fig. 1, the terminal may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and optionally may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable, non-volatile memory, such as a disk memory. The memory 1005 may optionally also be a storage device separate from the processor 1001.
Optionally, the terminal may also include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on. The sensors include, for example, light sensors, motion sensors, and other sensors. In particular, the light sensor may comprise an ambient light sensor, which may adjust the brightness of the display screen according to the brightness of ambient light, and a proximity sensor, which may turn off the display screen and/or the backlight when the terminal device is moved to the ear. Of course, the terminal device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described herein.
It will be appreciated by those skilled in the art that the terminal structure shown in fig. 1 is not limiting of the terminal and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a tutorial program may be included in the memory 1005, which is a type of computer storage medium.
In the terminal shown in fig. 1, the network interface 1004 is mainly used for connecting a background teaching device and performing data communication with the background teaching device; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call the tutorial program stored in the memory 1005 and perform the following operations:
Acquiring learner information of a learning terminal according to a teaching request sent by the learning terminal;
determining teaching videos matched with the learner information from a teaching video library;
determining a play starting position of the teaching video according to the learning video of the learning terminal;
and superposing the teaching video in the learning video, and playing from the playing starting position of the teaching video.
Referring to fig. 2, the present invention provides a teaching method, in a first embodiment of the teaching method, the teaching method includes the steps of:
step S10, obtaining learner information of a learning terminal according to a teaching request sent by the learning terminal;
the embodiment can be applied to a cloud game platform, and the cloud game platform can be a game server arranged at a far end. The game can run on a cloud server, the calculation rendering of the game is realized through the server, the game picture is encoded in real time, and the video stream pushed to the game terminal is generated. The game terminal can be a learning terminal used by a learner, and after receiving the audio and video stream, the game terminal decodes and plays the audio and video stream to a user, and simultaneously collects an operation instruction triggered by the user and sends the operation instruction to the cloud server, so that game experience is realized.
When a user plays a game as a learner on the learning terminal, a corresponding teaching request can be triggered, and the learning terminal can send the teaching request to the server, so that the server can acquire the learner information corresponding to the learning terminal according to the teaching request. It will be appreciated that the learner information may include information parameters that reflect the learner's game capability, such as the name of the game currently being played by the learner, the game character, the game level, the current progress, and the operation effective ratio.
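As a concrete illustration, the learner information can be thought of as a small record attached to the teaching request. The sketch below uses the field meanings introduced in the seventh embodiment (L_ID, L_G, L_R, L_V, L_P); the dataclass layout itself is an assumption made for illustration, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class LearnerInfo:
    """Learner information carried by a teaching request (illustrative layout;
    field meanings follow the seventh embodiment of this description)."""
    game_id: str            # L_ID: name of the game currently being played
    level: int              # L_G: learner's game level
    character: str          # L_R: game character in use
    clearing_speed: float   # L_V: normalized speed of clearing the current stage
    effective_ratio: float  # L_P: number of effective operations / total operations
```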
Step S20, determining teaching videos matched with the learner information from a teaching video library;
after the server acquires the learner information, the teaching video which is matched with the game capability of the learner can be determined from the learner information. According to the learner information of the learner, the teaching video of the learner with smaller difference from the game operation level of the current learner in the same game level can be determined from the teaching videos in the teaching video library. The learner can effectively improve the game operation capability of the learner by watching the teaching video with the operation level being relatively close. The teaching videos obtained by server matching can be one or more.
It can be understood that, because the computation and rendering of the game are completed by the server, the server side stores not only the game videos of all players in the game process, but also the player operation steps and operation instructions corresponding to each game video. After the operation steps and operation instructions of a player are identified in the corresponding game video, that player's game video can be converted into a teaching video and stored in the teaching video library. By watching the teaching video, the learner can see not only the teaching person's original game video but also the teaching person's actual operation steps and operation instructions, so that the learner can intuitively understand the correspondence between game operations and operation results, improving the learner's understanding of the operation mode.
It should be noted that different games can be experienced on the learning terminal, and the server side can also set up a separate teaching video library for each game. When the server side acquires the learner information, the name of the game which the learner has currently entered can be determined from the learner information, and the teaching video library corresponding to that game name is determined, the teaching videos in which are teaching videos of that game. By obtaining the game name, the matching range of teaching videos can be narrowed and the matching efficiency improved. Further, the learner information may also include the game character used by the learner in the game, and the server may further screen out the teaching videos corresponding to that game character from the teaching video library corresponding to the game name, continuing to narrow the range of teaching videos used for matching. That is, teaching videos meeting the conditions can be selected from the teaching video library according to the learner information, narrowing the matching range and improving the matching speed.
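A minimal sketch of this pre-filtering step is given below; the dictionary keys and the shape of the teaching video library are assumptions made for illustration only.

```python
def candidate_teaching_videos(video_library, learner):
    """Narrow the teaching video library to entries whose game name (T_ID)
    and game character (T_R) match the learner's L_ID and L_R."""
    return [
        entry for entry in video_library
        if entry["game_id"] == learner["game_id"]
        and entry["character"] == learner["character"]
    ]
```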
Step S30, determining a play starting position of the teaching video according to the learning video of the learning terminal;
after the server determines the teaching video matched with the learner information, if the learner does not start the game currently, the corresponding teaching video can be directly played. If the learner starts the game, the server also needs to determine the play starting position corresponding to the teaching video according to the current game progress of the learner. For example, when the learner has started the game and the game progress of the current level is 50%, if the teaching video is played from the beginning, the progress of the teaching video does not coincide with the current game progress of the learner. The server needs to determine the corresponding playing progress in the teaching video according to the video image of the current learning video of the learner, so that the teaching video can be played from the current game progress of the learner.
And S40, superposing the teaching video in the learning video, and playing from the playing starting position of the teaching video.
After determining the play start position of the teaching video, the server can superimpose the video stream of the teaching video on the game video stream pushed to the learning terminal, and the learning terminal decodes and plays the superimposed video stream, so that the teaching video is displayed synchronously while the learning video is displayed. The learner can realize linked teaching according to the displayed teaching video: for example, after watching the teaching video, the learner can trigger a corresponding first input, and after receiving the first input, the learning terminal can respond to it and generate the corresponding learning video, so that the learner can determine the operation result corresponding to the first input by watching the learning video. The first input may be a key operation, a click operation, a track sliding operation, a gesture operation, or the like, triggered by the learner on the learning terminal. The operation mode of the teaching person is learned through the teaching video, and the corresponding operation result is obtained in the learning video by practicing the corresponding operation, thereby improving the game operation capability.
It will be appreciated that the teaching video may be superimposed within a portion of the display area of the learning video. When the user watches the learning video, that portion of the display area shows the teaching video and the remaining area shows the learning video; after learning the operation mode of the teaching person from the teaching video, the learner can immediately adjust his or her own operation steps and operation instructions, practice in a similar scene, and determine the operation result corresponding to the adjusted operation mode from the corresponding display content of the learning video.
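The following sketch shows one simple way to composite an already scaled-down teaching frame into a corner of the learning frame before the merged stream is encoded; the frame shapes, the corner position and the use of NumPy arrays are illustrative assumptions, not part of the disclosure.

```python
import numpy as np

def overlay_teaching_frame(learning_frame: np.ndarray,
                           teaching_frame: np.ndarray,
                           margin: int = 10) -> np.ndarray:
    """Place the teaching frame in the top-right corner of the learning frame;
    the rest of the display area keeps showing the learning video."""
    out = learning_frame.copy()
    h, w = teaching_frame.shape[:2]
    out[margin:margin + h, -w - margin:-margin] = teaching_frame
    return out
```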
In this embodiment, the server may receive the teaching request sent by the learning terminal, obtain the learner information of the learning terminal, determine a plurality of candidate teaching videos from the teaching video library according to the learner information, and select the best-matched teaching video from them. After determining the corresponding learning progress according to the learner's current learning video, the server can correspondingly determine the play progress of the teaching video as the play starting position for playing to the learner. When the server sends the video stream to the learning terminal, the teaching video can be superimposed on the video stream, so that the learner can watch the learning video and the teaching video synchronously on the learning terminal and learn from the operation mode of the teaching person identified in the teaching video, thereby effectively improving the learner's game operation capability.
Further, based on the first embodiment of the present invention, a second embodiment of the teaching method of the present invention is provided, referring to fig. 3, fig. 3 is a schematic flow chart of the second embodiment of the teaching method of the present invention, in this embodiment, the step S30 of determining, according to a learning video of the learning terminal, a play start position of the teaching video includes:
step S31, converting the continuous frame images of the learning videos into a first characteristic character string group, and acquiring a second characteristic character string group corresponding to the continuous frame images of each teaching video from the teaching video library;
step S32, matching the first characteristic character string group with all the second characteristic character string groups, and determining a matched characteristic character string group from the second characteristic character string groups meeting the matching requirement;
and step S33, determining a corresponding teaching video according to the matched characteristic character string group, and taking the playing position of the continuous frame image corresponding to the matched characteristic character string group as the playing starting position of the teaching video.
In this embodiment, the manner in which the server determines the play start position of the teaching video according to the video progress of the learning video may be that, in the current learning video, each frame image of the continuous frame images is converted into a corresponding feature string, and the corresponding feature strings are ordered according to the sequence of the continuous frame images, so as to obtain the first feature string group.
When the matched teaching videos are obtained from the teaching video library, each frame image in the teaching videos is converted into a corresponding characteristic character string through the same conversion algorithm, and the characteristic character strings are correspondingly ordered according to the display sequence of each frame image in the teaching videos, so that a second characteristic character string group is obtained. If the number of the teaching videos obtained by matching is multiple, each teaching video is converted into a corresponding second characteristic character string group, so that multiple second characteristic character string groups are obtained.
The first feature string group includes feature strings corresponding to continuous frame images of a preset frame number; for example, the preset frame number may be 60, and the first feature string group then consists of the 60 feature strings corresponding, in order, to 60 continuous frame images. The second feature string group is formed, in order, by the feature strings corresponding to all continuous frame images in the teaching video. It will be appreciated that the number of feature strings in the second feature string group is much greater than that in the first feature string group.
When it is determined that the number of matched teaching videos in the teaching video library is one, the first feature string group and the second feature string group are matched; that is, the continuous feature strings of the preset frame number in the first feature string group are matched against the second feature string group, and it is judged whether the second feature string group contains continuous feature strings of the same preset length that are consistent with the first feature string group. The preset length may correspond to the preset frame number or may be adjusted according to it. For example, when the preset frame number is 60, the first feature string group consists of 60 continuous feature strings, and it is detected whether the second feature string group contains 60 continuous feature strings consistent with the first feature string group. If so, those continuous feature strings form the matching feature string group within the second feature string group. The continuous frame images in the teaching video corresponding to the matching feature string group give the play progress of the teaching video that is synchronized with the learning video, and thus the play starting position of the teaching video is determined.
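A minimal sketch of this exact sliding-window comparison is shown below. The frame-to-string conversion is deliberately left as a stand-in (here a hash of the raw frame bytes); the patent does not prescribe a specific conversion algorithm, only that the learning video and the teaching videos use the same one.

```python
import hashlib

def frame_to_feature_string(frame_bytes: bytes) -> str:
    """Stand-in conversion of one frame image into a feature string; any
    deterministic conversion works as long as both videos use the same one."""
    return hashlib.md5(frame_bytes).hexdigest()

def find_play_start(first_group: list[str], second_group: list[str]) -> int | None:
    """Slide the learning video's feature string group (e.g. 60 strings) over the
    teaching video's group and return the teaching-video frame index aligned with
    the learning video's current frame, or None if no fully consistent run exists."""
    window = len(first_group)
    for offset in range(len(second_group) - window + 1):
        if second_group[offset:offset + window] == first_group:
            return offset + window - 1  # last frame of the matching run
    return None
```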
If the second feature string group contains no continuous feature strings completely consistent with the first feature string group, the second feature string group does not match the first feature string group at 100%. In an alternative embodiment, the number of frames of the continuous feature strings to be matched may be adjusted; for example, the 60 continuous feature strings are reduced to those of the most recent 30 continuous frame images, i.e., the first feature string group now consists of 30 continuous feature strings, and this group is matched against the second feature string group to determine whether the same 30 continuous feature strings exist in it. If they do, it can be determined that the continuous frame images corresponding to those 30 continuous feature strings in the second feature string group correspond to the current progress of the learning video, and the play starting position of the teaching video is determined.
Further, if, after the preset frame number has been adjusted, the second feature string group still contains no continuous feature strings completely consistent with the first feature string group, the matching degree requirement can be reduced. For example, the matching degree is reduced to 95%; when the preset frame number is 60, 60 × 95% = 57, so the first 57 of the 60 continuous feature strings in the first feature string group are taken, and it is judged whether the corresponding 57 continuous feature strings in the second feature string group are consistent with these first 57 continuous feature strings; or the last 57 of the 60 continuous feature strings are taken, and it is judged whether the corresponding 57 continuous feature strings in the second feature string group are consistent with these last 57.
If the second feature string group contains the corresponding continuous feature strings, the matching is determined to be successful; if not, the matching degree can be further reduced to 90%, 85% and 80%, and it is judged whether the second feature string group contains 54, 51 or 48 continuous feature strings consistent with the corresponding 54, 51 or 48 continuous feature strings in the first feature string group.
It can be understood that, in the above-mentioned cyclic matching method for reducing the matching degree, a matching method for adjusting the preset number of frames may be further added, for example, when the matching degree is reduced to 95% and the preset number of frames is 60, if there are no corresponding 57 continuous character strings in the second character string group, the preset number of frames may be further reduced, for example, reduced to 30, and it is determined whether there are corresponding 28 continuous character strings in the second character string group. And when the preset frame number is reduced and still fails to be matched, restoring the preset frame number to an initial value, reducing the matching degree to 90%, and then re-matching. And circularly matching the first characteristic character string group with the second characteristic character string group by continuously adjusting the matching degree and adjusting the preset frame number until the continuous characteristic character strings are determined from the second characteristic character string group, so that the corresponding continuous frame images, namely the playing starting positions of the teaching video, are determined.
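The relaxation loop can be sketched as follows. The exact interleaving of frame-count reduction and matching-degree reduction, and the choice of matching on the leading strings rather than the trailing ones, are one possible reading of this embodiment rather than a fixed specification.

```python
def find_play_start_relaxed(first_group: list[str], second_group: list[str],
                            degrees=(1.0, 0.95, 0.90, 0.85, 0.80),
                            frame_counts=(60, 30)) -> int | None:
    """Cyclically relax the match: shrink the window of consecutive feature
    strings, then restore it and lower the required matching degree."""
    for degree in degrees:
        for frames in frame_counts:
            frames = min(frames, len(first_group))
            window = max(1, int(frames * degree))
            probe = first_group[:window]        # leading strings of the group
            for offset in range(len(second_group) - window + 1):
                if second_group[offset:offset + window] == probe:
                    # Compensate for the strings dropped from the end of the
                    # original group, so the returned index still corresponds
                    # to the learning video's current frame (cf. the 30-frame
                    # shift discussed below).
                    return offset + window - 1 + (len(first_group) - window)
    return None
```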
When the matching is performed with a reduced number of continuous feature strings, for example with the first 30 of the 60 continuous feature strings, the 30 continuous feature strings matched in the second feature string group correspond only to the first 30 continuous feature strings of the first feature string group. The frame image corresponding to the current video progress of the learning video is actually that of the 60th feature string, i.e., 30 frames after the last of the 30 matched feature strings. Therefore, the position 30 frames after the last successfully matched feature string in the second feature string group is the play starting position of the teaching video corresponding to the video progress of the learning video.
It can be appreciated that when the number of the matched teaching videos is determined to be greater than one in the teaching video library, each teaching video can be respectively matched with the learning video. When the same continuous characteristic character string exists in the second characteristic character string group of a certain teaching video, the teaching video is the best matched teaching video in a plurality of teaching videos.
Further, based on the second embodiment of the present invention, a third embodiment of the teaching method of the present invention is provided, in this embodiment, the step S31 of converting the continuous frame images of the learning video into the first feature string group, and obtaining the second feature string group corresponding to the continuous frame images of each teaching video from the teaching video library includes:
Step S311, dividing a plurality of background areas from a learning video of the learning terminal, and converting continuous frame images of each background area into a corresponding first background area characteristic character string group;
step S312, a corresponding background area is divided from each teaching video in the teaching video library, and continuous frame images of the background area of each teaching video are converted into a corresponding second background area characteristic character string group;
the step S32, the step of matching the first feature string set with all the second feature string sets, and determining the matching feature string set from the second feature string sets that meet the matching requirement, includes:
step S321, selecting a background area as a background area to be matched, sequentially matching a first background area characteristic string group corresponding to the background area to be matched with a second background area characteristic string group, and judging whether the second background area characteristic string group meeting the matching requirement exists or not; the matching requirement is that the matching degree of the first background area characteristic character string group and the second background area characteristic character string group reaches a preset matching threshold;
Step S322, when a second background area characteristic string group meeting the matching requirement exists, determining the matching characteristic string group from the second background area characteristic string group meeting the matching requirement;
step S323, when there is no second background area feature string group meeting the matching requirement, updating the background area to be matched according to the preset background area selection sequence, and returning to the execution step: and sequentially matching the first background area characteristic character string group corresponding to the background area to be matched with the second background area characteristic character string group.
In this embodiment, since the manner in which the learner operates the game character during the game is not exactly the same as that of the teaching person, the frame images of the teaching video do not exactly correspond to the frame images of the learning video even at the same game progress; that is, the feature string corresponding to a frame image of the teaching video is not identical to the feature string corresponding to the frame image of the learning video at that moment. However, during the game, apart from the image area affected by the game character, which differs from the teaching video because the learner operates the game character differently, the background area of the learning video, which is not changed by the operation of the game character, should correspond to the background area of the teaching video. That is, at the same game progress, the background area of the learning video is consistent with the background area of the teaching video, and the feature strings corresponding to these two background areas are also consistent. Therefore, when a frame image reflecting the current game progress of the learning video is matched against the teaching video to obtain the play starting position of the teaching video, only the background area in the frame image of the learning video is converted into a background area feature string, and it is compared with the background area feature string corresponding to the frame image of the teaching video.
It should be noted how the background area is determined within the display area of the teaching video. The server can acquire a plurality of teaching videos with the same game name, the same game character and the same game level, encode each teaching video, and, if a frame image is a P frame, perform discrete cosine transform (DCT) processing on the code stream of the P frame. The regions that are completely consistent across the frame images of these teaching videos form the background region of the game level, and the regions that are inconsistent across the frame images form the non-background region produced by the different operation modes of the different teaching persons.
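A much simplified sketch of this idea is given below: it marks as background the pixel blocks that stay identical across several teaching-video frames taken at the same game progress, skipping the codec-level detail of running DCT on P-frame code streams. The block size and the use of decoded frames instead of the encoded stream are simplifying assumptions.

```python
import numpy as np

def background_mask(aligned_frames: list[np.ndarray], block: int = 16) -> np.ndarray:
    """Return a boolean mask of blocks that are identical in every frame taken
    from different teaching videos at the same game progress; identical blocks
    form the level background, differing blocks are the non-background region
    created by each teaching person's own operations."""
    stack = np.stack(aligned_frames)          # shape: (n_videos, H, W[, C])
    same = (stack == stack[0]).all(axis=0)    # pixel-wise agreement with the first frame
    h, w = same.shape[:2]
    mask = np.zeros((h // block, w // block), dtype=bool)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            mask[i, j] = same[i * block:(i + 1) * block,
                              j * block:(j + 1) * block].all()
    return mask
```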
In an optional embodiment, a plurality of background areas may be pre-divided in the learning video of the learning terminal, and correspondingly, the same background areas are divided in the teaching video in the same manner. After a background area is determined as the background area to be matched, feature string matching can be performed between the frame images of that background area in the learning video and the frame images of the same background area in the teaching video. For example, the background area of the learning video may be divided into a left background area, an upper background area and a right background area. The continuous frame images of the left background area of the learning video can be converted into continuous feature strings to form the corresponding first background area feature string group. Likewise, the continuous frame images of the left background area of the teaching video are converted into a second background area feature string group. The continuous feature strings in the first background area feature string group are then matched against the second background area feature string group, and it is judged whether the same continuous feature strings exist in the second background area feature string group.
If such continuous feature strings exist, the continuous feature strings in the second background area feature string group that match the first background area feature string group and meet the matching requirement can be taken as the matching feature string group, and the continuous frame images corresponding to them give the play starting position of the teaching video. If no such continuous feature strings exist, the background area to be matched can be updated according to the preset background area selection order; for example, following the order of left background area, upper background area, right background area, the continuous frame images of the upper background area of the learning video are converted into continuous feature strings, the continuous frame images of the upper background area of the teaching video are converted into continuous feature strings, and the matching is performed again. In the process of matching the left background area, the upper background area and the right background area in sequence, once the same continuous feature strings are found in the teaching video, the matching can be determined to be successful and the matching process can be exited.
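The background-area matching with its fallback order can be sketched as follows; the region names and their order follow the example above, while the dictionary layout is illustrative.

```python
def match_background_regions(learning_regions: dict, teaching_regions: dict,
                             order=("left", "top", "right")):
    """Try each background area in the preset selection order and return the
    first region whose first background-area feature string group is found as a
    consecutive run in the teaching video's second background-area group."""
    for region in order:
        first_group = learning_regions[region]
        second_group = teaching_regions[region]
        window = len(first_group)
        for offset in range(len(second_group) - window + 1):
            if second_group[offset:offset + window] == first_group:
                return region, offset + window - 1  # matched region and play start
    return None, None
```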
Further, based on the first embodiment of the present invention, a fourth embodiment of the teaching method of the present invention is provided, in this embodiment, the step S40 of superimposing the teaching video in the learning video, and playing from a play start position of the teaching video includes:
Step S41, determining a time delay time length according to a preset margin rule;
and step S42, superposing the teaching video in the learning video, carrying out delay processing on the playing starting position of the teaching video according to the delay time length, and playing from the playing starting position after the delay processing of the teaching video.
In this embodiment, when the server obtains the current game progress of the learning video of the learning terminal, determines the play starting position of the teaching video and pushes the superimposed video stream, a certain amount of processing time is consumed. During this time the learner continues to play the game, and the game progress of the learning video continues to advance. Therefore, after the play starting position of the teaching video is determined, the game progress of the learning video is no longer at the original position, and the play starting position of the teaching video also needs to be delayed so that the play progress of the teaching video can keep up with the game progress of the learning video.
In addition, even if the play progress of the teaching video were completely consistent with the game progress of the learning video, the learner needs a certain reaction time after seeing an operation in the teaching video, and even if the learner performs the corresponding operation immediately, a corresponding delay exists. Therefore, the play progress of the teaching video should be slightly ahead of the game progress of the learning video, so that the learner has a certain reaction time after seeing the teaching video. The server can store margin rules in advance; after the play starting position of the teaching video is determined, the delay duration can be determined according to the margin rules, the play starting position of the teaching video is delayed accordingly, and playback starts from the delayed play starting position, so that the server's processing time does not affect the progress matching of the learning video and the teaching video, and corresponding reaction time can be reserved for the learner.
Further, based on the fourth embodiment of the present invention, a fifth embodiment of the teaching method of the present invention is provided, in this embodiment, the step S41 of determining the delay time length according to the preset margin rule includes:
step S411, when determining the play start position of the teaching video, obtaining a corresponding operation duration;
step S412, generating a delay time according to the operation time and a preset initial delay value.
In this embodiment, according to the pre-stored margin rule, the server may obtain the operation duration consumed in determining the play starting position of the teaching video, and add a preset initial delay value to the operation duration to generate the delay duration. The operation duration is the time spent from the moment the server receives the teaching request sent by the learning terminal to the moment the server determines the play starting position of the teaching video. The preset initial delay value may be, for example, 1 second. When the teaching video is superimposed on the learning video, delaying by the operation duration eliminates the influence of the server's processing on the game progress, and delaying by the initial delay value provides reaction time for the learner, so that the learner can immediately practice the operation in the learning video after seeing the operation mode in the teaching video.
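A minimal sketch of this margin rule, assuming timestamps in seconds and the 1-second initial delay mentioned above:

```python
import time

def delayed_play_start(play_start: float, request_received_at: float,
                       initial_delay: float = 1.0) -> float:
    """Delay the teaching video's play start by the operation duration (time the
    server spent locating the start position) plus the preset initial delay, so
    the teaching video stays slightly ahead of the learning video."""
    operation_duration = time.time() - request_received_at
    return play_start + operation_duration + initial_delay
```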
Further, based on the above-mentioned first embodiment of the present invention, a sixth embodiment of the teaching method of the present invention is provided, referring to fig. 4, and fig. 4 is a schematic flow chart of the sixth embodiment of the teaching method of the present invention, in this embodiment, the step S40 of superimposing the teaching video in the learning video, and after the step of playing from the playing start position of the teaching video, further includes:
step S50, acquiring operation parameters of the learning terminal every preset period;
step S51, determining a video adjustment amount according to the operation parameter and a preset advance adjustment algorithm;
and step S52, adjusting the playing progress of the teaching video according to the video adjustment amount.
In this embodiment, the response speed of different learners is different, so that the adaptation degree to the teaching video is also different. After the teaching video is delayed to be played according to the preset initial delay value, the playing progress of the teaching video can be adjusted according to the learning ability and the operation level of a learner, so that the playing progress of the teaching video is more suitable for the learner.
When the learner watches the teaching video, the operation parameters of the learning terminal can be acquired at every preset interval. The operation parameter may be the learner's operation effective ratio L_P, which is the number of effective operations divided by the total number of operations. After the learner's operation effective ratio is determined, it can be compared with the operation effective ratio sampled in the previous period to determine how to adjust the progress of the teaching video. For example, when the operation effective ratio obtained by the current sampling has increased compared with that obtained by the previous sampling, the play progress of the teaching video can be adjusted forwards, i.e., the advance of the teaching video relative to the learning video is increased. Correspondingly, if the current operation effective ratio has decreased compared with the previous sampling, the advance of the teaching video relative to the learning video can be reduced.
In an alternative embodiment, after each sampling of the learner's operation effective ratio, an advance parameter TC is calculated as TC = (L_P at the latest sampling point - L_P at the previous sampling point) × T, where T is the preset sampling interval.
Video adjustment amount C = F(0.2, TC, 0.5);
where F takes the value of TC when TC is between 0.2 and 0.5 or between -0.5 and -0.2; C is 0.5 when TC > 0.5 and -0.5 when TC < -0.5; C is -0.2 when -0.2 < TC < 0; and C is 0.2 when 0 < TC < 0.2.
After the video adjustment amount C is calculated, if C > 0 the advance of the teaching video is increased by C; if C < 0 the advance is reduced by |C|.
It will be appreciated that corresponding special rules may also be set for the video adjustment, so that when such a rule is satisfied, the adjustment follows that rule. For example, an adjustment threshold of the operation effective ratio may be preset, and when the sampled operation effective ratio is greater than this threshold, the play progress of the teaching video is not adjusted. The adjustment may also be omitted when -0.2 < TC < 0.2. It is also possible to arrange that if TC falls between 0.2 and 0.5 three times in succession, no adjustment is made from the third time onwards. Meanwhile, since the normal reaction time of the human body is at least 0.2 seconds, the advance of the teaching video relative to the learning video must not be less than 0.2 seconds regardless of how the adjustment is performed.
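Put together, the advance adjustment can be sketched as below; treating TC = 0 as "no adjustment" and exposing the small-TC special rule as a flag are choices made for this sketch, not requirements of the description.

```python
def video_adjustment(lp_now: float, lp_prev: float, period: float,
                     skip_small: bool = True) -> float:
    """Advance parameter TC = (latest L_P - previous L_P) x T and video
    adjustment amount C = F(0.2, TC, 0.5)."""
    tc = (lp_now - lp_prev) * period
    if -0.2 < tc < 0.2:
        if skip_small or tc == 0:
            return 0.0                      # special rule: ignore small changes
        return 0.2 if tc > 0 else -0.2      # otherwise push out to the 0.2 boundary
    return max(-0.5, min(0.5, tc))          # clamp large values to +/-0.5

def apply_adjustment(current_advance: float, c: float,
                     min_advance: float = 0.2) -> float:
    """Shift the teaching video's advance over the learning video by C, never
    letting it drop below the 0.2-second human reaction-time floor."""
    return max(min_advance, current_advance + c)
```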
Further, based on the above-mentioned first embodiment of the present invention, a seventh embodiment of the teaching method of the present invention is provided, referring to fig. 5, fig. 5 is a schematic flow chart of the seventh embodiment of the teaching method of the present invention, in this embodiment, the step S20 of determining, from a teaching video library, a teaching video matching with the learner information includes:
step S21, obtaining the information of the teaching person corresponding to each teaching video from the teaching video library;
step S22, respectively calculating matching parameters of each piece of teaching person information and the learner information according to a preset matching algorithm;
and S23, determining the teaching video corresponding to the matching parameter with the highest matching degree as the teaching video matched with the learner information.
In this embodiment, after obtaining the learner information of the learning terminal, the server may obtain the teaching person information corresponding to each teaching video from the teaching video library, and calculate the matching parameter of each piece of teaching person information with the learner information according to a preset matching algorithm. After the matching parameters of all teaching videos have been calculated, the teaching video corresponding to the matching parameter with the highest matching degree can be determined to be the matched teaching video.
For example, the learner information may include a game name L_ID, a learner level L_G, a game character L_R, a current level clearing speed L_V = time taken in the current level (seconds) / ((current level progress / total level progress) × rated standard duration of the current level (seconds)), and an operation effective ratio L_P = number of effective operations / total number of operations. Correspondingly, the teaching person information may include a game name T_ID, a teaching person level T_G, a game character T_R, a level clearing speed T_V = duration of the current level video (seconds) / rated standard duration of the current level (seconds), and an operation effective ratio T_P = number of effective operations / total number of operations.
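For illustration only, the learner-side fields and the derived clearing speed and operation effective ratio could be grouped as in the Python sketch below; all class and field names are assumptions, and the teaching person record would be analogous with T_V = current level video duration / rated standard duration:

from dataclasses import dataclass

@dataclass
class LearnerInfo:
    """Hypothetical container for the learner information fields listed above."""
    game_name: str           # L_ID
    level: int               # L_G
    character: str           # L_R
    time_taken_s: float      # time spent in the current level so far (seconds)
    level_progress: float    # current level progress / total level progress (0..1)
    rated_standard_s: float  # rated standard duration of the current level (seconds)
    effective_ops: int       # number of effective operations
    total_ops: int           # total number of operations

    @property
    def clearing_speed(self) -> float:
        # L_V = time taken / ((current progress / total progress) * rated standard duration)
        return self.time_taken_s / (self.level_progress * self.rated_standard_s)

    @property
    def effective_ratio(self) -> float:
        # L_P = effective operations / total operations
        return self.effective_ops / self.total_ops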
It will be appreciated that the teaching video learned by a learner should be one generated by a teaching person operating the same game with the same character as the learner. According to the game name and game character, the teaching videos meeting this requirement can be selected from the teaching video library, thereby reducing the number of teaching videos to be matched. After the server acquires the learner information, the game name and game character played by the learner can be determined from L_ID and L_R, and the teaching videos whose T_ID is the same as L_ID and whose T_R is the same as L_R can be extracted from the teaching video library. The server stores a corresponding matching algorithm in advance, and for each teaching video the server can calculate a number of difference parameters between the learner and the teaching person, such as the game level difference, the current level clearing speed difference and the operation effective ratio difference, which are |L_G-T_G|, |L_V-T_V| and |L_P-T_P| respectively. Different weight coefficients are set for each difference parameter, each difference parameter is weighted accordingly, and the results are accumulated to obtain the matching parameter of the teaching video and the learning video.
For example, the differences in game level, current level clearing speed and operation effective ratio may be weighted as the first power, the square and the cube respectively, so that the matching parameter of the teaching video and the learning video is:
M = |L_G-T_G| + |L_V-T_V|² + |L_P-T_P|³;
The matching parameter M is calculated for each of the teaching videos having the same game name and game character as the learning video; the teaching video with the smallest M has the highest matching degree with the learning video and is determined to be the teaching video matched with the learner information.
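A minimal Python sketch of this selection step follows; the dictionary keys and function names are assumptions, while the filtering by game name and character and the choice of the smallest M follow the description above:

def matching_parameter(learner: dict, teacher: dict) -> float:
    """M = |L_G - T_G| + |L_V - T_V|^2 + |L_P - T_P|^3."""
    return (abs(learner["G"] - teacher["G"])
            + abs(learner["V"] - teacher["V"]) ** 2
            + abs(learner["P"] - teacher["P"]) ** 3)

def select_teaching_video(learner: dict, library: list[dict]) -> dict | None:
    """Filter by game name and character, then pick the candidate with the smallest M."""
    candidates = [t for t in library
                  if t["ID"] == learner["ID"] and t["R"] == learner["R"]]
    if not candidates:
        return None
    return min(candidates, key=lambda t: matching_parameter(learner, t))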
Further, the teaching video can be set to always follow the learner's progress: when the learner's game scene is rolled back and the video progress of the learning video therefore moves backwards, the teaching video is rolled back accordingly to a playing progress matching the video progress of the learning video. For example, the server may generate, at preset intervals, corresponding feature character strings from the background areas of the teaching video and the learning video and determine whether the two background areas match. When it detects that the background areas of the teaching video and the learning video differ greatly (i.e. the learner's scene may have been fast-forwarded, paused or rewound), it re-determines a new play start position of the teaching video from the new continuous frame images of the learning video, and then continues to determine the video adjustment amount according to the advance adjustment algorithm so as to adjust the playing lead of the teaching video and remain linked to the learner's scene.
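How the background-area feature strings could be produced and compared is not spelled out at this point; the Python sketch below is one plausible reading, in which the per-frame hashing scheme and the mismatch threshold are assumptions of the sketch rather than part of the described method:

import hashlib

def background_feature_string(frames: list[bytes]) -> str:
    """Reduce each background-area frame to one character so a run of continuous
    frames becomes a feature character string (per-frame MD5 digest, first hex digit)."""
    return "".join(hashlib.md5(frame).hexdigest()[0] for frame in frames)

def scene_jumped(learn_str: str, teach_str: str, max_mismatch: float = 0.5) -> bool:
    """Return True if the two background-area strings differ in more than max_mismatch
    of their positions, i.e. the learner's scene has probably been fast-forwarded,
    paused or rewound and the play start position must be re-determined."""
    pairs = list(zip(learn_str, teach_str))
    mismatches = sum(1 for a, b in pairs if a != b)
    return not pairs or mismatches / len(pairs) > max_mismatch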
The learner can also leave the follow mode of the current learning video by clicking the display area of the teaching video, whereupon the teaching video is enlarged and played normally. When the learner shrinks the teaching video from the enlarged display state back to the partial-area display state, the teaching video again follows the game progress of the learning video and is played in the teaching mode.
In addition, referring to fig. 6, an embodiment of the present invention further provides a teaching apparatus, including:
an obtaining unit 10, configured to obtain learner information of a learning terminal according to a teaching request sent by the learning terminal;
a matching unit 20 for determining a teaching video matching the learner information from a teaching video library;
a positioning unit 30, configured to determine a play start position of the teaching video according to the learning video of the learning terminal;
and a playing unit 40, configured to superimpose the teaching video on the learning video, and play the teaching video from a play start position of the teaching video.
Optionally, the positioning unit 30 is configured to:
converting the continuous frame images of the learning videos into a first characteristic character string group, and acquiring a second characteristic character string group corresponding to the continuous frame images of each teaching video from the teaching video library;
matching the first characteristic character string group with all the second characteristic character string groups, and determining a matching characteristic character string group from the second characteristic character string groups meeting the matching requirement;
and determining a corresponding teaching video according to the matched characteristic character string group, and taking the playing position of the continuous frame image corresponding to the matched characteristic character string group as the playing starting position of the teaching video.
Optionally, the positioning unit 30 is configured to:
dividing a plurality of background areas from a learning video of the learning terminal, and converting continuous frame images of each background area into a corresponding first background area characteristic character string group;
dividing a corresponding background area from each teaching video in the teaching video library, and converting continuous frame images of the background area of each teaching video into a corresponding second background area characteristic character string group;
selecting a background area as a background area to be matched, sequentially matching a first background area characteristic string group corresponding to the background area to be matched with a second background area characteristic string group, and judging whether the second background area characteristic string group meeting the matching requirement exists or not; the matching requirement is that the matching degree of the first background area characteristic character string group and the second background area characteristic character string group reaches a preset matching threshold;
when a second background area characteristic character string group meeting the matching requirement exists, determining the matching characteristic character string group from the second background area characteristic character string group meeting the matching requirement;
when the second background area characteristic character string group meeting the matching requirement does not exist, updating the background area to be matched according to the preset background area selection sequence, and returning to the execution step: and sequentially matching the first background area characteristic character string group corresponding to the background area to be matched with the second background area characteristic character string group.
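The matching loop performed by the positioning unit could look roughly like the following Python sketch; the data layout, the substring-containment test and the min_len value are assumptions standing in for the preset matching threshold:

def find_play_start(learn_bg_strings: dict[str, str],
                    teach_bg_strings: dict[str, dict[str, str]],
                    area_order: list[str], min_len: int = 30):
    """Try background areas in the preset selection order; for each, look for a teaching
    video whose background-area feature string contains the learning video's string."""
    for area in area_order:
        needle = learn_bg_strings.get(area, "")
        if len(needle) < min_len:        # too short to satisfy the matching requirement
            continue
        for video_id, areas in teach_bg_strings.items():
            pos = areas.get(area, "").find(needle)
            if pos != -1:                # matching feature character string group found
                # the play position of the matched continuous frames becomes the start position
                return video_id, area, pos
    return None                          # no background area produced a match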
Optionally, the playing unit 40 is configured to:
determining delay time according to a preset margin rule;
and superposing the teaching video in the learning video, carrying out delay processing on the playing starting position of the teaching video according to the delay time length, and playing from the playing starting position after the delay processing of the teaching video.
Optionally, the playing unit 40 is configured to:
when the play starting position of the teaching video is determined, acquiring corresponding operation duration;
and generating a delay time according to the operation time and a preset initial delay value.
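A trivial sketch of the margin rule described above; the additive combination and the 0.5-second default initial delay are assumptions, since the text only states that the delay is generated from the operation duration and a preset initial delay value:

def delay_duration(operation_duration_s: float, initial_delay_s: float = 0.5) -> float:
    """Delay applied to the teaching video's play start position (seconds)."""
    return operation_duration_s + initial_delay_s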
Optionally, the teaching device further includes an adjusting unit, and the adjusting unit is used for:
acquiring operation parameters of the learning terminal at every preset period;
determining a video adjustment amount according to the operation parameter and a preset advance adjustment algorithm;
and adjusting the playing progress of the teaching video according to the video adjustment amount.
Optionally, the matching unit 20 is configured to:
acquiring the information of the teaching person corresponding to each teaching video from the teaching video library;
respectively calculating matching parameters of each piece of teaching person information and the learner information according to a preset matching algorithm;
and determining the teaching video corresponding to the matching parameter with the highest matching degree as the teaching video matched with the learner information.
The steps of implementing each functional unit of the teaching device may refer to each embodiment of the teaching method of the present invention, which is not described herein.
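Purely as a structural illustration, the functional units above could be grouped into a single apparatus object as sketched below in Python; every name is an assumption and the method bodies are intentionally left out:

class TeachingApparatus:
    """Hypothetical grouping of the units described in this embodiment."""

    def __init__(self, teaching_video_library):
        self.teaching_video_library = teaching_video_library

    def obtain_learner_info(self, teaching_request):      # obtaining unit 10
        ...

    def match_teaching_video(self, learner_info):         # matching unit 20
        ...

    def determine_play_start(self, learning_video):       # positioning unit 30
        ...

    def play_superimposed(self, teaching_video, learning_video, start_position):  # playing unit 40
        ...

    def adjust_progress(self, operation_params):          # adjusting unit
        ...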
In addition, the present invention also provides a teaching device, the teaching device comprising: a memory, a processor, a communication bus and a teaching program stored on the memory, wherein:
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute the teaching program to implement the steps of the foregoing teaching method embodiments.
The present invention also provides a computer-readable storage medium storing one or more programs executable by one or more processors for implementing the steps of the above-described teaching method embodiments.
The specific implementation of the computer readable storage medium of the present invention is substantially the same as the above embodiments of the teaching method, and will not be repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, and of course may also be implemented by hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, including instructions for causing a terminal device (which may be a mobile phone, a computer, a teaching device, an air conditioner, a network device, or the like) to perform the methods according to the embodiments of the present invention.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of this description, or its direct or indirect application in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (8)

1. A teaching method, comprising the steps of:
acquiring learner information of a learning terminal according to a teaching request sent by the learning terminal;
determining a teaching video matched with the learner information from a teaching video library, wherein the teaching video is a teaching video of a learner in the same game level close to the operation level of the learner;
determining a play starting position of the teaching video according to the learning video of the learning terminal;
superimposing the teaching video in the learning video, and playing from the play starting position of the teaching video, wherein the teaching video always follows the progress of the learner;
the step of determining the play starting position of the teaching video according to the learning video of the learning terminal comprises the following steps:
converting the continuous frame images of the learning videos into a first characteristic character string group, and acquiring a second characteristic character string group corresponding to the continuous frame images of each teaching video from the teaching video library;
matching the first characteristic character string group with all the second characteristic character string groups, and determining a matching characteristic character string group from the second characteristic character string groups meeting the matching requirement;
determining a corresponding teaching video according to the matched characteristic character string group, and taking the playing position of the continuous frame image corresponding to the matched characteristic character string group as the playing starting position of the teaching video;
the step of converting the continuous frame images of the learning video of the learning terminal into a first characteristic character string group and obtaining a second characteristic character string group corresponding to the continuous frame images of each teaching video from the teaching video library comprises the following steps:
dividing a plurality of background areas from a learning video of the learning terminal, and converting continuous frame images of each background area into a corresponding first background area characteristic character string group;
dividing a corresponding background area from each teaching video in the teaching video library, and converting continuous frame images of the background area of each teaching video into a corresponding second background area characteristic character string group;
the step of matching the first characteristic string group with all the second characteristic string groups and determining the matched characteristic string group from the second characteristic string groups meeting the matching requirement comprises the following steps:
selecting a background area as a background area to be matched, sequentially matching a first background area characteristic string group corresponding to the background area to be matched with a second background area characteristic string group, and judging whether the second background area characteristic string group meeting the matching requirement exists or not; the matching requirement is that the first background area characteristic character string group and the second background area characteristic character string group are continuous characteristic character strings which are completely consistent and have the character string length reaching a preset length;
when a second background area characteristic character string group meeting the matching requirement exists, determining the matching characteristic character string group from the second background area characteristic character string group meeting the matching requirement;
when the second background area characteristic character string group meeting the matching requirement does not exist, updating the background area to be matched according to the preset background area selection sequence, and returning to the execution step: and sequentially matching the first background area characteristic character string group corresponding to the background area to be matched with the second background area characteristic character string group.
2. The teaching method according to claim 1, wherein the step of superimposing the teaching video in the learning video and playing from a play start position of the teaching video comprises:
determining delay time according to a preset margin rule;
and superposing the teaching video in the learning video, carrying out delay processing on the playing starting position of the teaching video according to the delay time length, and playing from the playing starting position after the delay processing of the teaching video.
3. The teaching method according to claim 2, wherein the step of determining the delay time length according to a preset margin rule includes:
when the play starting position of the teaching video is determined, acquiring corresponding operation duration;
and generating a delay time according to the operation time and a preset initial delay value.
4. The teaching method according to claim 1, wherein the step of superimposing the teaching video in the learning video and playing from a play start position of the teaching video further comprises:
acquiring operation parameters of the learning terminal at every preset period;
determining a video adjustment amount according to the operation parameter and a preset advance adjustment algorithm;
and adjusting the playing progress of the teaching video according to the video adjustment amount.
5. The teaching method according to claim 1, wherein the step of determining a teaching video matching the learner information from a teaching video library includes:
acquiring the information of the teaching person corresponding to each teaching video from the teaching video library;
respectively calculating matching parameters of each piece of teaching person information and the learner information according to a preset matching algorithm;
and determining the teaching video corresponding to the matching parameter with the highest matching degree as the teaching video matched with the learner information.
6. A teaching device, characterized in that it comprises:
the learning terminal comprises an acquisition unit, a learning unit and a processing unit, wherein the acquisition unit is used for acquiring learner information of the learning terminal according to a teaching request sent by the learning terminal;
the matching unit is used for determining teaching videos matched with the learner information from a teaching video library, wherein the teaching videos are teaching videos of the learner in the same game level close to the operation level of the learner;
the positioning unit is used for determining the play starting position of the teaching video according to the learning video of the learning terminal;
the playing unit is used for superposing the teaching video in the learning video and playing the teaching video from the playing starting position of the teaching video, wherein the teaching video always follows the progress of a learner;
the positioning unit is further used for converting the continuous frame images of the learning video into a first characteristic character string group and acquiring a second characteristic character string group corresponding to the continuous frame images of each teaching video from the teaching video library; matching the first characteristic character string group with all the second characteristic character string groups, and determining a matching characteristic character string group from the second characteristic character string groups meeting the matching requirement; determining a corresponding teaching video according to the matched characteristic character string group, and taking the playing position of the continuous frame image corresponding to the matched characteristic character string group as the playing starting position of the teaching video;
The positioning unit is further used for dividing a plurality of background areas from the learning video of the learning terminal and converting continuous frame images of each background area into corresponding first background area characteristic character string groups; dividing a corresponding background area from each teaching video in the teaching video library, and converting continuous frame images of the background area of each teaching video into a corresponding second background area characteristic character string group; selecting a background area as a background area to be matched, sequentially matching a first background area characteristic string group corresponding to the background area to be matched with a second background area characteristic string group, and judging whether the second background area characteristic string group meeting the matching requirement exists or not; the matching requirement is that the matching degree of the first background area characteristic character string group and the second background area characteristic character string group reaches a preset matching threshold; when a second background area characteristic character string group meeting the matching requirement exists, determining the matching characteristic character string group from the second background area characteristic character string group meeting the matching requirement; when the second background area characteristic character string group meeting the matching requirement does not exist, updating the background area to be matched according to the preset background area selection sequence, and returning to the execution step: and sequentially matching the first background area characteristic character string group corresponding to the background area to be matched with the second background area characteristic character string group.
7. A teaching device, characterized in that the teaching device comprises: memory, a processor and a teaching program stored on the memory and executable on the processor, which when executed by the processor implements the steps of the teaching method according to any of claims 1 to 5.
8. A computer readable storage medium, wherein a teaching program is stored on the computer readable storage medium, which when executed by a processor, implements the steps of the teaching method according to any of claims 1 to 5.
CN202110695740.1A 2021-06-22 2021-06-22 Teaching method, device, equipment and computer readable storage medium Active CN113426101B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110695740.1A CN113426101B (en) 2021-06-22 2021-06-22 Teaching method, device, equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN113426101A CN113426101A (en) 2021-09-24
CN113426101B true CN113426101B (en) 2023-10-20

Family

ID=77757249

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110695740.1A Active CN113426101B (en) 2021-06-22 2021-06-22 Teaching method, device, equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113426101B (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013094820A1 (en) * 2011-12-21 2013-06-27 주식회사 케이티 Apparatus and method for sensory-type learning
CN105243268A (en) * 2015-09-18 2016-01-13 网易(杭州)网络有限公司 Game map positioning method and apparatus as well as user terminal
CN107029429A (en) * 2015-09-30 2017-08-11 索尼互动娱乐美国有限责任公司 The system and method that time shift for realizing cloud game system is taught
CN107396138A (en) * 2016-05-17 2017-11-24 华为技术有限公司 A kind of video coding-decoding method and equipment
US9881084B1 (en) * 2014-06-24 2018-01-30 A9.Com, Inc. Image match based video search
CN110180179A (en) * 2018-02-23 2019-08-30 索尼互动娱乐欧洲有限公司 Videograph and playback system and method
CN110309795A (en) * 2019-07-04 2019-10-08 腾讯科技(深圳)有限公司 Video detecting method, device, electronic equipment and storage medium
CN111212303A (en) * 2019-12-30 2020-05-29 咪咕视讯科技有限公司 Video recommendation method, server and computer-readable storage medium
CN111214829A (en) * 2019-12-30 2020-06-02 咪咕视讯科技有限公司 Teaching method, electronic equipment and storage medium
WO2020110432A1 (en) * 2018-11-26 2020-06-04 株式会社ソニー・インタラクティブエンタテインメント Learning device, foreground region deduction device, learning method, foreground region deduction method, and program
CN111714875A (en) * 2019-03-20 2020-09-29 电子技术公司 System for testing command execution delay in video games
CN112169319A (en) * 2020-09-23 2021-01-05 腾讯科技(深圳)有限公司 Application program starting method, device, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2527755B (en) * 2014-06-28 2019-03-27 Siemens Medical Solutions Usa Inc System and method for retrieval of similar findings from a hybrid image dataset

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Design of a video retrieval system based on the MPEG-7 protocol; Xue Ling; Li Chao; Xiong Zhang; Journal of Beijing University of Aeronautics and Astronautics (No. 07); 18-121 *
Trademark image retrieval based on multi-feature extraction; Ma Yuguo; China Master's Theses Full-text Database (No. 06); full text *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant