CN114513694A - Scoring determination method and device, electronic equipment and storage medium


Info

Publication number
CN114513694A
Authority
CN
China
Prior art keywords
scoring
target
video
user
time point
Prior art date
Legal status
Pending
Application number
CN202210145089.5A
Other languages
Chinese (zh)
Inventor
李锦华
Current Assignee
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Application filed by Ping An International Smart City Technology Co Ltd
Priority to CN202210145089.5A
Publication of CN114513694A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431 Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312 Generation of visual interfaces involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4316 Generation of visual interfaces for displaying supplemental content in a region of the screen, e.g. an advertisement in a separate window
    • H04N21/47 End-user applications
    • H04N21/475 End-user interface for inputting end-user data, e.g. personal identification number [PIN], preference data
    • H04N21/4756 End-user interface for rating content, e.g. scoring a recommended movie
    • H04N21/478 Supplemental services, e.g. displaying phone caller identification, shopping application

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Marketing (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiments of the present application provide a score determining method and apparatus, an electronic device, and a storage medium. The score determining method includes the following steps: while a target video clip of a target video is played on a first interface, playing a target animation matched with the target video clip on the first interface; displaying captured user images on the first interface; determining a scoring result corresponding to each scoring time point according to the user images captured in a first time range before that scoring time point and the video images matched with those user images, and displaying the scoring result corresponding to each scoring time point on the first interface when that scoring time point arrives; and, after the target video finishes playing, displaying a comprehensive scoring result on the first interface, which saves device resources. The present application also relates to blockchain technology; for example, the target animation can be written to a blockchain for use in scenarios such as obtaining the animation matched with a target video clip.

Description

Scoring determination method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a score determining method and apparatus, an electronic device, and a storage medium.
Background
With the development of internet technology, human-computer interaction has become increasingly mature. For example, a user can follow the prompt actions in a video and make the corresponding movements, which helps the user exercise. However, while following a video, the user cannot learn how well the followed actions score, so the user quickly loses interest. The user's actions therefore need to be scored against the prompt actions. In the prior art, user images must be continuously captured and compared with the corresponding video images for scoring; this process consumes considerable device power, and processing the images requires substantial resources.
Disclosure of Invention
The embodiments of the present application provide a score determining method and apparatus, an electronic device, and a storage medium, which can reduce device power consumption and save the resources required for image processing.
In a first aspect, an embodiment of the present application provides a score determining method, the method including:
playing a target video on a first interface;
during playing of a target video clip of the target video, playing a target animation matched with the target video clip on the first interface, wherein the target video clip is any one of a plurality of video clips included in the target video, and the target animation is configured with at least one scoring time point;
during playing of the target video clip, invoking a camera device to capture a user image, and displaying the user image on the first interface, wherein the frame rate of the target video is consistent with the frame rate at which the camera device captures user images;
determining a scoring result corresponding to each scoring time point according to the user images captured in a first time range before that scoring time point and the video images matched with those user images, and displaying the scoring result corresponding to each scoring time point on the first interface when that scoring time point arrives, wherein the video images are contained in the target video clip;
and, after the target video finishes playing, determining a comprehensive scoring result according to the scoring results corresponding to the scoring time points configured for each of the plurality of animations corresponding to the target video, and displaying the comprehensive scoring result on the first interface, wherein the plurality of animations corresponding to the target video include the animation matched with each of the plurality of video clips.
In a second aspect, an embodiment of the present application provides a score determining apparatus, including:
a processing unit, configured to play a target video on a first interface;
the processing unit is further configured to play, during playing of a target video clip of the target video, a target animation matched with the target video clip on the first interface, wherein the target video clip is any one of a plurality of video clips included in the target video, and the target animation is configured with at least one scoring time point;
the processing unit is further configured to invoke a camera device to capture a user image during playing of the target video clip, and to display the user image on the first interface;
the processing unit is further configured to determine a scoring result corresponding to each scoring time point according to the user images captured in a first time range before that scoring time point and the video images matched with those user images, wherein the video images are included in the target video clip;
a display unit, configured to display the scoring result corresponding to each scoring time point on the first interface when that scoring time point arrives;
the processing unit is further configured to determine, after the target video finishes playing, a comprehensive scoring result according to the scoring results corresponding to the scoring time points configured for each of the plurality of animations corresponding to the target video, wherein the plurality of animations include the animation matched with each of the plurality of video clips;
and the display unit is further configured to display the comprehensive scoring result on the first interface.
In a third aspect, an embodiment of the present application provides an electronic device including a processor and a memory connected to each other, wherein the memory is configured to store a computer program including program instructions, and the processor is configured to invoke the program instructions to execute the score determining method described above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program instructions which, when executed, implement the score determining method described above.
In a fifth aspect, an embodiment of the present application provides a computer program product or computer program including computer instructions stored in a computer-readable storage medium; when the computer instructions are executed by a processor of an electronic device, they perform the score determining method described above.
In the embodiments of the present application, the electronic device can play a target video on a first interface; during playing of a target video clip of the target video, play a target animation matched with the target video clip on the first interface, wherein the target video clip is any one of a plurality of video clips included in the target video and the target animation is configured with at least one scoring time point; during playing of the target video clip, invoke a camera device to capture user images and display them on the first interface; determine a scoring result corresponding to each scoring time point according to the user images captured in a first time range before that scoring time point and the video images matched with those user images (the video images being contained in the target video clip), and display the scoring result corresponding to each scoring time point on the first interface when it arrives; and, after the target video finishes playing, determine a comprehensive scoring result according to the scoring results corresponding to the scoring time points configured for each of the plurality of animations corresponding to the target video, and display it on the first interface, wherein the plurality of animations include the animation matched with each of the plurality of video clips. With the score determining method provided by the embodiments of the present application, the corresponding scoring result is displayed whenever a scoring time point in the animation is reached, so the result is shown within a short time, which increases the user's interest. At the same time, scoring according to the user images captured in the first time range and the video images matched with them still yields the user's score over a period of time, without continuously capturing user images and comparing them against matched video images; this reduces the power consumption of the device and saves the resources required for image processing.
Drawings
To more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of a score determining system according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a score determining method provided in an embodiment of the present application;
FIG. 3a is a schematic diagram of a first interface displaying a target video clip and a target animation according to an embodiment of the present application;
fig. 3b is a schematic flow chart illustrating a scoring time point displaying a corresponding scoring result according to an embodiment of the present application;
FIG. 3c is a schematic diagram of a next video segment of a target video segment displayed on a first interface according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a score determining apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.
The embodiments of the present application can process user images, video images, and the like based on artificial intelligence technology. Artificial Intelligence (AI) is the theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use the knowledge to obtain the best results.
The artificial intelligence infrastructure generally includes technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing, operation/interaction systems, and mechatronics. Artificial intelligence software technology mainly includes computer vision, robotics, biometric recognition, speech processing, natural language processing, and machine learning/deep learning.
With the development of internet technology, human-computer interaction has become increasingly mature. A user can select a favorite video and follow the prompt actions in it, making the same movements, which serves as a form of exercise. At present, however, the user is largely limited to moving along with the prompt actions; the user's movements are not evaluated within a short time, and no evaluation result is displayed. In view of these shortcomings of the prior art, the embodiments of the present application provide a score display scheme whose general principle is as follows. When a user wants to exercise along with a video, the user selects a favorite video; accordingly, the electronic device detects a trigger operation on the target video and plays the target video on a first interface. Then, during playing of a target video clip of the target video, a target animation matched with the target video clip is played simultaneously on the first interface, and the target animation is configured with at least one scoring time point. The target animation controls the scoring time points at which the user's actions are scored; it also controls the display of the scoring result at each scoring time point and the precisely timed sound effect that accompanies it. While the target video clip plays, the user moves along with the prompt actions in the clip; the camera device captures user images, which are displayed on the first interface. While capturing user images, the electronic device may also capture the video images displayed on the first interface. Optionally, the frame rate of the target video and the frame rate at which the camera device captures user images can be dynamically configured so that each captured user image and its matched video image fall on the same time point; the scoring result for the user's action at that time point is then determined from the user image and video image captured at the same time point, yielding a more accurate scoring result.
Then, before any scoring time point arrives, the electronic device analyzes the user images captured within a first time range before that scoring time point and the video images displayed in that range, determines the user's scoring result within the first time range, and displays the scoring result corresponding to that scoring time point on the first interface when it arrives. The analysis proceeds as follows: the user's posture is analyzed from the user image captured at each time point in the first time range and the video image displayed at that time point. After the target video clip finishes playing, the next video clip and the animation matched with it continue to play on the first interface until the target video finishes. After the target video finishes playing, a comprehensive scoring result is displayed on the first interface or a second interface; the comprehensive scoring result is determined from the scoring results corresponding to the scoring time points configured for each of the plurality of animations, each animation being matched with one of the plurality of video clips.
The score display scheme provided by the embodiments of the present application has the following beneficial effects. (1) Whenever a scoring time point configured in the animation corresponding to a video clip arrives during playback, the scoring result is displayed at that time point, so the user learns the result within a short time and stays engaged. (2) Scoring according to the user images captured within the first time range and the video images matched with them still yields the user's score over a period of time, without continuously capturing user images and comparing them against matched video images; this reduces the device's power consumption and saves the resources required for image processing. (3) Keeping the frame rate of the target video consistent with the frame rate at which the camera device captures user images solves the problem of the two frame rates differing, so the user images captured within the first time range and the displayed video images correspond time point by time point, and the scoring result determined from them is more accurate. At the same time, once the camera device's per-image processing time is reduced as far as possible, the video image played at a scoring time point and the displayed user image appear visually synchronized. The image processing time can be reduced, for example, in the handling of image saturation and color.
Based on the score display scheme, an embodiment of the present application provides a score display system; referring to fig. 1, the system may include at least one terminal device 101 and at least one server 102. The terminal device 101 is equipped with a camera device through which user images can be captured; the camera device may be the terminal device's own camera component. Optionally, the terminal device 101 provides the user with a video playing interface that can play various types of videos, each comprising multiple video clips, while also playing the animation corresponding to each video clip. The terminal device 101 can display the captured user images in the video playing interface and display the scoring result at each scoring time point. The server 102 can store various types of videos, each of which may include multiple video clips together with the animation, scoring results, and so on corresponding to each clip. The terminal device 101 and the server 102 may be connected directly or indirectly through wired or wireless communication. The terminal device 101 may be an intelligent device such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Mobile Internet Device (MID), or a wearable device. The server 102 may include a plurality of servers (also referred to as nodes), which may be independent physical servers or cloud servers providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communications, middleware services, domain name services, security services, Content Delivery Networks (CDNs), and big data and artificial intelligence platforms.
Based on the above score display scheme, please refer to fig. 2, which is a schematic flowchart of a score determining method provided in an embodiment of the present application. The score determining method may be executed by an electronic device, which may be the terminal device 101 or the server 102 in the score display system, and may include the following steps S201 to S205:
s201: and playing the target video on the first interface. Wherein, the target video can be a dance video, a fitness video and the like; the target video may include a plurality of video segments.
In one embodiment, the electronic device may provide a video playing selection interface, which may be an interface supplied by the electronic device itself for selecting a video to play; alternatively, different types of applications run on the electronic device, and the video playing selection interface is the one in a target application such as a fitness APP (application) or a dance APP. A plurality of videos are displayed in the video playing selection interface, from which the user can select a favorite target video. When the electronic device detects that a target video among the videos has been selected, it plays the target video on the first interface. The user may select the favorite target video by a single click, a double click, voice, and so on. For example, when the user says "play target video", the electronic device can perform semantic analysis on the voice input and select the target video based on the analysis result. The first interface may be the video playing selection interface itself, or another video playing interface; in the latter case, when the electronic device detects that a target video has been selected, it switches from the video playing selection interface to the first interface and plays the target video there.
In one embodiment, each of the videos in the video playing selection interface has a corresponding scoring result, and the electronic device can automatically select the highest-scoring target video after entering the interface and play it on the first interface. Playing the highest-scoring target video encourages the user to challenge that score, which adds interest.
S202: During playing of a target video clip of the target video, play a target animation matched with the target video clip on the first interface, wherein the target video clip is any one of a plurality of video clips included in the target video, the target animation is configured with at least one scoring time point, and each video clip corresponds to one animation. The target video clip contains a standard prompt action and is used to prompt the user to move along with that action.
In a specific implementation, each video may be divided into a plurality of video clips. After step S201 is executed, the first interface includes a shooting button; when the user clicks it and the electronic device detects the click operation, the electronic device plays the plurality of video clips of the target video sequentially in time order, and when the target video clip is played, obtains the target animation matched with it from local storage space. Alternatively, the user may select which of the video clips to play; upon detecting that a target video clip among them has been selected, the electronic device obtains the target animation matched with it and plays the target animation on the first interface while the target video clip plays.
In one embodiment, the target video clip may be played at a first position in the first interface, which may be any position of the interface; in fig. 3a, for example, the target video clip is played at the upper-left corner. The target animation may be played at a second position, which may likewise be any position of the first interface, such as the upper-left corner, the middle, or the right side; as shown in fig. 3a, the target animation is played at the lower part of the first interface.
In one embodiment, the electronic device may divide the target video evenly into a plurality of video clips according to its playing duration, so that every clip has the same playing duration. Alternatively, the electronic device may divide the target video randomly into a plurality of clips according to its playing duration; or, since the target video contains a plurality of standard prompt actions, the electronic device may divide it into a plurality of clips according to those actions, with each clip containing one standard prompt action.
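As a minimal sketch of the equal-division strategy just described, the following Python snippet splits a playing duration into equally long clips; the function name and the (start, end) tuple format are illustrative assumptions, not from the patent:

```python
def split_into_clips(total_duration_s: float, num_clips: int):
    """Divide a video's playing duration into equally long (start, end) clips."""
    clip_len = total_duration_s / num_clips
    return [(i * clip_len, (i + 1) * clip_len) for i in range(num_clips)]

# Example: a 90-second target video divided into 3 clips of 30 s each.
print(split_into_clips(90.0, 3))  # [(0.0, 30.0), (30.0, 60.0), (60.0, 90.0)]
```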
After dividing the target video into a plurality of video clips, the electronic device may display a second interface containing those clips. In response to an animation configuration operation on a target video clip among them, a third interface may be displayed containing a candidate animation set, which may include one or more animations. The target video clip corresponds to an animation configuration button, and the animation configuration operation may be a single or double click on that button; alternatively, the animation configuration of each video clip may correspond to a fixed gesture, in which case the operation is the input of the corresponding gesture, such as an M gesture or an OK gesture, and the electronic device responds once the user inputs it. When an animation selection operation on the candidate animation set is detected, the selected animation is determined as the target animation, and in response to a scoring time configuration operation on the target animation, at least one scoring time point is configured for it. The animation selection operation may be a single or double click on an animation. Each animation corresponds to a scoring time configuration button, and the scoring time configuration operation may be a single or double click on that button. The playing duration of the target animation may be consistent with that of the target video clip, i.e., the two share the same playing life cycle. Specifically, the at least one scoring time point can be configured in the target animation according to scoring requirements, and a scoring result is subsequently displayed at each scoring time point. Then, when it is detected that all of the video clips have been configured with corresponding animations, each video clip and its animation may be stored in association, either in the local storage space of the electronic device or in a blockchain network. Setting a corresponding animation for any of the video clips follows the same specific implementation as setting the target animation for the target video clip. The animation content of the target animation may be the standard prompt action in the target video clip, or it may be scenery, an animal, or the like.
In one embodiment, when an animation selection operation on the candidate animation set is detected, determining the selected animation as the target animation may be implemented as follows: the animation selection operation picks the corresponding animation out of the candidate animation set, and that animation is determined as the target animation. Each animation in the candidate set has a playing duration and animation content, and the animation selection operation may be generated based on the playing duration and animation content of the target video clip; that is, both are taken into account when selecting the animation for the target video clip.
In one embodiment, a second interface is displayed containing the plurality of video clips of the target video; in response to an animation configuration operation on a target video clip among them, a standard user action (i.e., the standard prompt action) in the target video clip is identified, and a corresponding target animation is configured for the target video clip based on that action; then, in response to a scoring time configuration operation on the target animation, at least one scoring time point is configured for it.
Wherein, in response to the scoring time configuration operation on the target animation, configuring at least one scoring time point for the target animation may be: in response to the scoring time configuration operation on the target animation, one or more scoring time points are randomly set according to the playing time length of the target animation, or one or more scoring time points can be averagely set according to the playing time length of the target animation. For example, the playing time of the target animation is 1 minute, and the 5 th second and the 10 th second can be set as scoring time points; or directly setting a scoring time point, namely setting 30 seconds as the scoring time point, or setting 60 seconds as the scoring time point, and the like. In one embodiment, each video clip corresponds to an animation, and at least one scoring time point set by each animation may be the same or different. It should be noted that the playing duration of each video segment can be understood as the playing life cycle of the video segment. When the playing life cycle of any video clip is over, it means that the next video clip can be played.
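As an illustration of the two configuration strategies above (even spacing and random placement), the following sketch generates scoring time points within an animation's playing duration; the function names are assumptions, not from the patent:

```python
import random

def evenly_spaced_scoring_points(duration_s: float, count: int):
    """Place `count` scoring time points at equal intervals within the duration."""
    step = duration_s / (count + 1)
    return [round(step * (i + 1), 2) for i in range(count)]

def random_scoring_points(duration_s: float, count: int):
    """Place `count` scoring time points at random offsets within the duration."""
    return sorted(round(random.uniform(0.0, duration_s), 2) for _ in range(count))

# A 60-second target animation configured with two scoring time points.
print(evenly_spaced_scoring_points(60.0, 2))  # [20.0, 40.0]
print(random_scoring_points(60.0, 2))         # e.g. [13.57, 48.02]
```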
S203: During playing of the target video clip, invoke a camera device to capture user images and display them on the first interface.
In one embodiment, the frame rate of the target video can be dynamically adjusted to be consistent with the frame rate at which the camera device captures user images. This solves the problem of the two frame rates differing, so that the user images captured within the first time range and the video images matched with them correspond time point by time point, making the resulting scoring result more accurate; at the same time, when the camera device's per-image processing time is reduced as far as possible, the video image played at a scoring time point and the displayed user image appear visually synchronized. The frame rate of the target video refers to its playback frame rate, i.e., the number of frames played per second, and the camera device's frame rate refers to the number of user images it captures per second. If the camera device captures 30 frames per second, it captures roughly one user image every 33 milliseconds, and so on. Different types of camera devices may capture user images at different frame rates, in which case the frame rate of the target video changes along with the camera's capture rate. The target video clip may consist of multiple frames of video images.
In one embodiment, the electronic device determines the time consumed from capturing a first user image to displaying it, and can then automatically adjust the camera device's internal parameters accordingly. Adjusting these parameters reduces the camera device's processing time for image saturation, color, de-jittering, de-noising, and the like, shortening the processing of each captured user image and effectively ensuring that the user image currently displayed on the first interface and the displayed video image matched with it are consistent in time and appearance.
In one embodiment, if the target video plays at 1 frame per second, its target video clip also plays at 1 frame per second. Suppose the user image captured at the 4th second needs to be compared with the video image displayed at the 4th second: the displayed video image must be obtained, the camera device must capture the user's image at the 4th second, and the two are then compared and scored. Clearly, if the target video's playback frame rate of 1 frame per second does not match the camera's rate (30 frames per second), the video image displayed at the 4th second and the captured user image cannot be matched; moreover, the standard prompt action in the target video clip will visibly fail to line up with the scoring animation and scoring sound effect when the result is displayed. The best way to solve this is to set the video player's playback-progress callback granularity somewhat smaller and set the frame rate of the target video to the camera device's frame rate: if the camera captures 30 frames per second, i.e., one frame every 33 milliseconds, the target video is likewise set to one frame every 33 milliseconds. Each displayed video image is then compared with the user image captured by the camera with at most 1 frame (33 milliseconds) of difference, which yields a more accurate comparison result; it also ensures that the standard prompt action and the user's action appear visually as the same action, and that scoring and sound effects stay synchronized, improving the experience. In a specific implementation, the frame rate of the target video is adjusted to follow the capture rates of different types of camera devices. Before the electronic device plays the target video on the first interface, the video's frame rate can be dynamically adjusted to the camera's capture rate. Specifically, before the target video is played on the first interface, a configuration request for the frame rate of the target video may be received; based on the request, the electronic device obtains the initial frame rate of the target video and the frame rate at which the camera device captures user images, then judges whether they are consistent. If they are inconsistent, the electronic device updates the initial frame rate of the target video to the camera's capture rate and uses the updated value as the frame rate of the target video. If they are already consistent, the initial frame rate needs no modification.
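A minimal sketch of this frame-rate alignment step, with the class and field names as illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class VideoConfig:
    frame_rate_fps: float  # playback frame rate of the target video

def align_frame_rate(video: VideoConfig, camera_fps: float) -> VideoConfig:
    """Update the video's frame rate to the camera's capture rate if they differ."""
    if video.frame_rate_fps != camera_fps:
        video.frame_rate_fps = camera_fps
    return video

# A 1 fps video aligned to a 30 fps camera: the inter-frame interval becomes
# ~33 ms, so a displayed video image and the user image captured for it
# differ by at most one frame.
video = align_frame_rate(VideoConfig(frame_rate_fps=1.0), camera_fps=30.0)
print(video.frame_rate_fps, round(1000 / video.frame_rate_fps, 1))  # 30.0 33.3
```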
In one embodiment, the user makes the corresponding movement according to the standard prompt action in the target video clip, and the electronic device invokes the camera device to capture a user image containing the user's posture while following that action. The camera device may be a camera component of the electronic device itself, such as its camera, or a dedicated imaging apparatus such as a video camera. After capturing a user image, the electronic device may display it at a target position of the first interface, which may be any position, for example the middle or lower part of the interface, or the position shown in fig. 3a. In one embodiment, the electronic device captures user images of the user in real time during playing of the target video clip, displaying each one on the first interface as it is captured. Optionally, the joint points contained in the user image may also be displayed, as indicated by the black dots in the user image shown in fig. 3a. In one embodiment, when the user image is displayed, the interface can show the joint points it contains, the connecting lines between them, and a first angle between adjacent connecting lines, while simultaneously showing the joint points, connecting lines, and a second angle between adjacent connecting lines in the video image matched with that user image. By comparing the first angle with the second angle, the user can see at a glance how their pose differs from the standard prompt action in the target video clip, and if the angles differ, quickly adjust their movement toward the second angle. For example, in the user image, the hip joint point and the foot joint point form connecting line 1, and the hip joint point and the shoulder joint point form connecting line 2, with a first angle between lines 1 and 2; in the video image matched with the user image, the hip and foot joint points form connecting line 3 and the hip and shoulder joint points form connecting line 4, with a second angle between lines 3 and 4. Whether the first and second angles are the same is then judged, and if not, the user adjusts the movement.
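The angle comparison in this example can be sketched as follows; the joint coordinates and the 10-degree tolerance are hypothetical illustration values:

```python
import math

def angle_at(hip, foot, shoulder) -> float:
    """Angle in degrees at the hip between the hip->foot and hip->shoulder lines."""
    v1 = (foot[0] - hip[0], foot[1] - hip[1])          # connecting line to the foot
    v2 = (shoulder[0] - hip[0], shoulder[1] - hip[1])  # connecting line to the shoulder
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# First angle from the user image, second angle from the matched video image.
first_angle = angle_at(hip=(100, 200), foot=(100, 320), shoulder=(130, 90))
second_angle = angle_at(hip=(98, 198), foot=(102, 322), shoulder=(126, 86))
# If the angles differ beyond a tolerance, prompt the user to adjust the pose.
print(abs(first_angle - second_angle) <= 10.0)  # True here: the poses match
```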
In one embodiment, before step S204 is executed, the playing of the target animation may be paused by external influences; it should be understood that when the target animation pauses, playing of the target video clip pauses as well. External influences include a manual pause, video buffering, and the like. In this case, when an animation interrupt signal is received, the target animation is paused and its played duration is recorded; when an animation start signal is received, the play position is determined from the recorded played duration, and the target animation continues playing from that position.
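A minimal sketch of this interrupt handling, with illustrative class and method names:

```python
import time

class AnimationPlayer:
    """Records played duration on interrupt and resumes from that position."""

    def __init__(self, duration_s: float):
        self.duration_s = duration_s
        self.played_s = 0.0       # recorded played duration of the target animation
        self._started_at = None   # monotonic timestamp while playing, else None

    def on_interrupt_signal(self):
        if self._started_at is not None:
            self.played_s += time.monotonic() - self._started_at
            self._started_at = None  # paused; the target video clip pauses too

    def on_start_signal(self):
        # Continue playing with the recorded position as the starting point.
        self._started_at = time.monotonic()

    def position_s(self) -> float:
        live = time.monotonic() - self._started_at if self._started_at else 0.0
        return min(self.played_s + live, self.duration_s)
```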
S204: Determine the scoring result corresponding to each scoring time point according to the user images captured in a first time range before that scoring time point and the video images matched with those user images, and display the scoring result corresponding to each scoring time point on the first interface when that scoring time point arrives. The video images are contained in the target video clip; that is, the target video clip may consist of multiple frames of video images. The matching between a video image and a user image captured in the first time range may be matching by time point or matching by action. For example, with time matching, if a user image is captured at the 4th second, the video image matched with it may be the one displayed at the 4th second; with action matching, if the user image captured in the first time range contains a squat, the video image matched with it also contains a squat. In one embodiment, the user image captured by the camera device at each time point may also be matched with the video image displayed on the first interface at that time point.
In a specific implementation, user posture analysis is performed on the user images captured in the first time range before each scoring time point and the video images matched with them, yielding the scoring result corresponding to that scoring time point. The electronic device then continuously obtains the current play time of the target animation, judges whether it is one of the at least one scoring time points, and if so, displays the corresponding scoring result on the first interface. The scoring result can be displayed at any position of the first interface. For example, as shown in fig. 3b, while the figure in the target animation moves from right to left, the electronic device continuously judges whether the current play time is one of the scoring time points, and when it determines that it is (the current play time reaches the scoring time point shown in fig. 3b), a score of 95 is displayed on the first interface.
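The polling check described here can be sketched as follows; the half-frame tolerance and the names are assumptions:

```python
def due_scoring_point(current_time_s, scoring_points_s, shown, tol_s=0.0165):
    """Return a scoring time point the current play time has just reached, else None."""
    for point in scoring_points_s:
        if point not in shown and abs(current_time_s - point) <= tol_s:
            shown.add(point)  # display each point's scoring result only once
            return point
    return None

shown = set()
for t in (4.95, 5.0, 9.99):              # polled play times of the target animation
    hit = due_scoring_point(t, [5.0, 10.0], shown)
    if hit is not None:
        print(f"display scoring result for the {hit} s scoring time point")
```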
In one embodiment, the target video clip consists of multiple frames of video images, and playing it can be understood as displaying each frame on the first interface. Before each scoring time point (e.g., a target scoring time point) arrives, the electronic device determines the corresponding scoring result from the user images captured in the first time range and the video images matched with them, then displays that result on the first interface. The first time range may lie between the scoring time point and the previous scoring time point. That is, the first time range before each scoring time point may be part or all of the span between that scoring time point and the previous one; for example, if the scoring time point is the 4th second and the previous one is the 1st second, the first time range may be the 2nd to 3rd seconds, or any one second between the 1st and 4th seconds, and so on.
In one embodiment, the scoring result may be determined unreasonably when the user images captured in the first time range and the video images matched with them come from different time points, or when the user action in the user image and the standard prompt action in the video image are not the same action. To ensure that the scoring result displayed at each scoring time point is reasonably accurate, for a target scoring time point among the at least one scoring time points, a first scoring result is determined from the user images captured in the first time range before the target scoring time point and the video images matched with them. Whether the first scoring result is greater than or equal to a scoring result threshold is then judged. If it is below the threshold, a second scoring result corresponding to the target scoring time point is obtained from the user images captured in a second time range before the target scoring time point and the video images matched with them, the second time range containing the first; the scoring result corresponding to the target scoring time point is then determined from the second and first scoring results. If the first scoring result is greater than or equal to the threshold, it is taken directly as the scoring result for the target scoring time point. The target scoring time point is any one of the at least one scoring time points.
In an embodiment, after the second scoring result is determined, the difference between the first and second scoring results may be computed. If the difference exceeds a difference threshold, this indicates that the two results may come from different time points, or that the user action in the user image and the standard prompt action in the video image are not the same action; the electronic device then merges the first and second time ranges into a new time range and determines the scoring result for the scoring time point from the user images captured in the new range and the video images matched with them. This method effectively improves the accuracy of the scoring result. If the difference is less than or equal to the threshold, no new time range is needed.
The second time range containing the first means exactly that the second time range may include the first. It can be understood that when the first scoring result is below the threshold, the second time range is obtained by extending the first time range by K time points; for example, if the scoring time point is the 5th second and the first time range covers the 1st, 2nd, and 3rd seconds, the second time range may add one more time point, the 4th second, just before the scoring time point. The electronic device then determines the scoring result for the scoring time point from the second and first scoring results: specifically, the two may be summed to obtain the result, which effectively improves the accuracy of the scoring result for the target scoring time point; alternatively, the larger of the two may be taken as the result, which boosts the user's enthusiasm.
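A sketch of this two-pass scoring, using the maximum-of-two variant (the summing variant works the same way); the threshold, the ranges, and the stand-in score_range function are hypothetical:

```python
SCORE_THRESHOLD = 60.0  # assumed scoring result threshold

def score_range(start_s: float, end_s: float) -> float:
    """Stand-in for posture comparison over all images captured in [start_s, end_s]."""
    return 55.0 if end_s - start_s < 3.0 else 72.0  # dummy values for illustration

def scoring_result(first_range=(2.0, 4.0), extend_s=1.0):
    first = score_range(*first_range)
    if first >= SCORE_THRESHOLD:
        return first                     # first result is good enough on its own
    # Second time range contains the first, extended by one more time point.
    second = score_range(first_range[0] - extend_s, first_range[1])
    return max(first, second)            # or first + second, per the summing variant

print(scoring_result())  # 72.0, recovered via the wider second time range
```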
The second scoring result corresponding to the target scoring time point, determined from the user images captured in the second time range before that point and the video images matched with them, can be determined in the same specific way as the first scoring result, which is determined from the user images captured in the first time range before the target scoring time point and the video images matched with them.
The specific implementation by which the electronic device determines the first scoring result corresponding to the target scoring time point, from the user images captured in the first time range before that point and the video images matched with them, may include steps s11 to s13:
s 11: the method comprises the steps of carrying out human body joint point detection on a user image collected in a first time range to obtain a first human body joint point detection result of the user image collected in the first time range, carrying out human body joint point detection on a video image matched with the user image collected in the first time range to obtain a second human body joint point detection result of the video image matched with the user image collected in the first time range. The camera device collects the user image of the user, so that the human body joint point detection can be carried out on the user image.
In a specific implementation, the electronic device preprocesses the user images captured in the first time range, then performs human joint point detection on the preprocessed images using a preset joint point recognition model, obtaining the first human joint point detection result. The preset joint point recognition model is trained on a plurality of training user images and the joint point labels corresponding to each; the preprocessing may include denoising the target frame image and unifying its size. The preset joint point recognition model may be the human pose estimator OpenPose, which can detect all of the user's joint points in a user image.
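A sketch of the preprocessing step (denoising and size unification, here via OpenCV) ahead of the pose model; the `joint_model.predict` call is a hypothetical stand-in, since real OpenPose bindings expose their own API, and the 368x368 input size is an assumption:

```python
import cv2  # OpenCV, used here for the two preprocessing steps named above

TARGET_SIZE = (368, 368)  # assumed unified input size for the pose network

def preprocess(user_image):
    """Denoise a captured user image and unify its size."""
    denoised = cv2.fastNlMeansDenoisingColored(user_image, None, 10, 10, 7, 21)
    return cv2.resize(denoised, TARGET_SIZE)

def detect_joints(user_image, joint_model):
    """Return the human joint point detection result for one user image."""
    return joint_model.predict(preprocess(user_image))  # hypothetical model API
```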
It should be noted that the specific implementation of performing human joint point detection on the video images matched with the user images captured in the first time range, to obtain the second human joint point detection result, is the same as that of performing human joint point detection on the user images themselves to obtain the first result, and is not repeated here.
s12: Construct a user posture evaluation parameter set from the first and second human joint point detection results, the set containing at least one posture evaluation parameter.
In one embodiment, the user images captured within the first time range include a target user image, and the video images matched with them include a target video image. The first time range covers one or more time points, and the target time point is any one of them. The first human body joint point detection result may include a plurality of first joint points in the target user image and the position information of each first joint point; the first joint points belong to the user whose image the camera device captures. The second human body joint point detection result includes a plurality of second joint points in the target video image and the position information of each second joint point; the second joint points belong to the user in the target video image. The electronic device may construct the user posture evaluation parameter set from the two detection results as follows: count the number of first joint points and the number of second joint points, and compare the two counts to obtain a first comparison result, which indicates the matching degree between the number of first joint points and the number of second joint points. Specifically, the electronic device may judge whether the two counts are the same. If they are the same, indicating that the posture of the user in the target user image matches that of the user in the target video image, the matching degree is set to a first preset value; if they differ, indicating that the two postures differ, the matching degree is set to a second preset value. The counts may differ because the user's action is not standard, so a few joint points cannot be identified. For example, suppose the user image acquired at the 5th second matches the video image displayed at the 5th second, but the user image shows a stooping action while the video image shows a stretching action: the user image may then contain only 3 joint points while the video image contains 6, showing that the user's action deviates from the standard prompted action in the video image. The first preset value and the second preset value can be set as required.
Optionally, a first preset value, a second preset value, a third preset value and so on may be assigned according to the difference between the number of first joint points and the number of second joint points: for example, a difference of 1 maps to the first preset value, a difference of 2 to the second preset value, and a difference of 3 to the third preset value. A minimal sketch of this count comparison follows.
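The sketch below illustrates the count comparison; the tiered preset values are assumptions chosen for the example, since the text leaves them configurable:

```python
# Illustrative sketch of the first comparison result: map the difference
# between the numbers of detected joint points to a matching degree.
# The preset values below are assumptions; the text leaves them open.
def count_matching_degree(first_joints: dict, second_joints: dict) -> float:
    diff = abs(len(first_joints) - len(second_joints))
    # Identical counts score highest; the degree drops as more joints
    # go undetected because the user's action is not standard.
    presets = {0: 1.0, 1: 0.8, 2: 0.6, 3: 0.4}
    return presets.get(diff, 0.2)
```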
The electronic device may compare the position information of each first joint point with the position information of the second joint point corresponding to it to obtain a second comparison result, which indicates the matching degree between each first joint point and the position of its corresponding second joint point. In a specific implementation, the position information may include position coordinates. For a target joint point among the plurality of first joint points, the electronic device may compare the position coordinates of the target joint point with those of its corresponding second joint point; the second joint point corresponding to the target joint point is the second joint point of the same type. For example, if the target joint point is a wrist joint point, its corresponding second joint point is also a wrist joint point. If the two position coordinates are the same, the positions match and the matching degree of the target joint point and its corresponding second joint point is set to a fourth preset value; if they differ, the matching degree is set to a fifth preset value. Optionally, a target distance may be calculated from the two position coordinates; the smaller the distance, the higher the matching degree, and the matching degree corresponding to the target distance is determined from a preset correspondence between reference matching degrees and reference distances. The matching degree between any first joint point and its corresponding second joint point can be calculated in the same way as for the target joint point. Once the matching degrees between all first joint points and their corresponding second joint points are obtained, they may be averaged, or weighted and then averaged, to obtain the second comparison result.
After the first comparison result and the second comparison result are determined, the user posture standard degree is determined based on them. The first comparison result may include the matching degree between the number of first joint points and the number of second joint points; the second comparison result may include the matching degree between each first joint point and its corresponding second joint point position. As one implementation, the electronic device may average the first comparison result and the second comparison result to obtain the user posture standard degree. As another implementation, the electronic device may weight the first comparison result to obtain a first weighted value, weight the second comparison result to obtain a second weighted value, and average the two weighted values to obtain the user posture standard degree; a user posture evaluation parameter set including the user posture standard degree is then constructed. A sketch of the position comparison and this combination appears below.
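The following sketch covers the position comparison and the combination into the user posture standard degree; the distance-to-degree mapping and the equal weighting are illustrative assumptions:

```python
# Sketch of the second comparison result (per-joint position matching)
# and the user posture standard degree. The distance-to-degree mapping
# is an assumption standing in for the preset reference correspondence.
import math

def position_matching_degree(first_joints: dict, second_joints: dict) -> float:
    degrees = []
    for joint_type, (x1, y1) in first_joints.items():
        if joint_type not in second_joints:
            continue  # no same-type second joint point to compare against
        x2, y2 = second_joints[joint_type]
        distance = math.hypot(x1 - x2, y1 - y2)
        # Smaller distance -> higher matching degree (assumed mapping).
        degrees.append(max(0.0, 1.0 - distance / 100.0))
    return sum(degrees) / len(degrees) if degrees else 0.0

def posture_standard_degree(first_cmp: float, second_cmp: float) -> float:
    """Plain average of the two comparison results."""
    return (first_cmp + second_cmp) / 2
```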
In one embodiment, the user images captured in the first time range include a plurality of user images, and the matched video images include multiple frames of video images: the plurality of user images are captured at a plurality of time points in the first time range, the multiple frames of video images are displayed at the same time points, and each user image is matched with one frame of video image. For example, if the first time range is "the 5th second to the 6th second", the camera device captures 2 user images at the 5th and 6th seconds respectively, and 2 frames of video images are displayed at the 5th and 6th seconds respectively; the video image displayed at the 5th second matches the user image captured at the 5th second, and likewise for the 6th second. In this case, the electronic device may construct the user posture evaluation parameter set from the two detection results as follows: draw a first motion trajectory of each of the multiple joint points according to the first human body joint point detection result of each user image; draw a second motion trajectory of each of the multiple joint points in the multiple frames of video images according to the second human body joint point detection result of each frame; match the first motion trajectory of each joint point with the second motion trajectory of each joint point to obtain a motion trajectory similarity; determine the user posture gracefulness according to the motion trajectory similarity; and construct a posture evaluation parameter set including the user posture gracefulness. Here the first human body joint point detection result includes the position coordinates of the multiple joint points.
As one embodiment, the multiple joint points in the plurality of user images and the multiple joint points in the multiple frames of video images are the same in number and in type. For example, the 2 user images are those acquired at the 5th and 6th seconds; the 2 video images are those displayed at the 5th and 6th seconds; and the joint points in the user image acquired at the 5th second and in the video image displayed at the 5th second are the same in number and type. In this case, for a target joint point among the multiple joint points, the electronic device may connect the position coordinates of the target joint point across the user images to obtain its first motion trajectory, and similarly draw its second motion trajectory from the second human body joint point detection result of each frame of video image: connecting the position coordinates of the target joint point in the user images acquired at the 5th and 6th seconds yields its first motion trajectory, and connecting its position coordinates in the video images displayed at the 5th and 6th seconds yields its second motion trajectory. The first and second motion trajectories of every joint point can be obtained in this way. Comparing the first motion trajectory of each joint point with its second motion trajectory shows whether the user's movement is consistent with the standard action, from which the user posture gracefulness can be determined. A sketch of this trajectory construction follows.
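The sketch below builds trajectories by connecting each joint type's position coordinates across successive images; the input format is an assumption for illustration:

```python
# Sketch of trajectory construction: for each joint type, connect its
# (x, y) position coordinates across the images captured at successive
# time points. frames_joints[i] maps joint type -> (x, y) for the i-th
# image; this input format is an assumption.
def build_trajectories(frames_joints: list) -> dict:
    trajectories: dict = {}
    for joints in frames_joints:
        for joint_type, position in joints.items():
            trajectories.setdefault(joint_type, []).append(position)
    return trajectories

# Usage: user trajectories from user images, standard trajectories from
# the matched video frames, ready for per-joint matching.
user_traj = build_trajectories([{"wrist": (10, 20)}, {"wrist": (15, 25)}])
```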
In another implementation, the numbers of user images and video frame images are the same, but the types of joint points differ. This may happen because the user's actions are not consistent with the standard prompted actions in the video images. For example, the 2 user images acquired at the 5th and 6th seconds both show a stooping action, while the video image displayed at the 5th second shows a stooping action and the one displayed at the 6th second shows a stretching action; the user images then each contain 3 joint points, while the video image displayed at the 5th second may contain 3 joint points and the one displayed at the 6th second contains 6, where those 3 joint points belong to the 6. In this case the electronic device may, in the manner above, draw a first motion trajectory for each of the 3 joint points from the user images and a second motion trajectory for each of the same 3 joint points from the video images; the remaining 3 joint points form no corresponding trajectory, and their similarity may be set to 0 directly. In this embodiment of the application, the joint point types contained in the several user images may be the same as or different from one another; in either case, the position coordinates of each joint point are connected across the plurality of user images, and similarly across the multiple frames of video images.
Then the electronic device matches the first motion trajectory of each joint point with the second motion trajectory of that joint point to obtain the motion trajectory similarity. Specifically, the electronic device may calculate a reference motion trajectory similarity between the first and second motion trajectories of each joint point, and determine the overall motion trajectory similarity from the per-joint reference similarities, either by averaging them or by weighting them and then averaging. In one embodiment, the electronic device may compute the similarity between a joint point's first and second motion trajectories using a similarity algorithm such as LCSS or DTW. The user posture gracefulness can then be determined via a configured correspondence between similarity and gracefulness: for example, a similarity of 10-29 maps to a gracefulness of 20, a similarity of 30-49 maps to a gracefulness of 60, and so on. The electronic device looks up the gracefulness corresponding to the obtained motion trajectory similarity, and then constructs a posture evaluation parameter set including the user posture gracefulness.
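As a sketch under these assumptions, the following implements a plain DTW distance (the text equally allows LCSS) and the similarity-to-gracefulness lookup; the lookup brackets beyond the two quoted in the text, and any conversion from DTW distance to a 0-100 similarity, are added assumptions:

```python
# Sketch of trajectory matching with dynamic time warping (DTW) and the
# lookup from similarity to user posture gracefulness. The first two
# lookup brackets follow the example in the text; the rest are assumed.
import math

def dtw_distance(a: list, b: list) -> float:
    """Classic O(len(a) * len(b)) DTW over 2D point sequences."""
    n, m = len(a), len(b)
    d = [[math.inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.hypot(a[i - 1][0] - b[j - 1][0],
                              a[i - 1][1] - b[j - 1][1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def gracefulness(similarity: float) -> float:
    """Lookup following the example correspondence in the text."""
    if 10 <= similarity <= 29:
        return 20
    if 30 <= similarity <= 49:
        return 60
    return 90 if similarity >= 50 else 0  # assumed remaining brackets
```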
It should be noted that the playing mode of any video clip on the first interface, the playing mode of the animation matched with that clip, and the scoring results corresponding to the scoring time points determined in that clip can all refer to the specific implementations described above for the target video clip, and are not repeated here.
In one embodiment, as shown in fig. 3c, after the target video clip is played, the electronic device may display the next video clip of the target video clip on the first interface, obtain the animation matched with that next clip during its playing, and play the matched animation on the first interface; when each of the scoring time points included in the matched animation arrives, the scoring result corresponding to that scoring time point is displayed on the first interface. The next video clip may be the one that follows in chronological order, or it may be selected by the user.
s13: generate a first scoring result corresponding to the target scoring time point according to the posture evaluation parameter set. The user posture evaluation parameter set may include at least one of: the user posture standard degree, the user skill mastery degree, and the user posture gracefulness.
After the user posture evaluation parameter set is constructed, the electronic device may generate the first scoring result corresponding to the target scoring time point from it, for example by averaging the posture evaluation parameters in the set constructed for the target scoring time point, where the target scoring time point is any one of the at least one scoring time point. For instance, if the posture evaluation parameter set constructed for scoring time point 1 includes a user posture gracefulness of 95 and a user posture standard degree of 95, the electronic device may average the two to obtain the first scoring result for scoring time point 1, i.e. (95+95)/2 = 95. Optionally, the electronic device may instead weight the posture evaluation parameters in the set and then average the weighted values to obtain the first scoring result corresponding to the target scoring time point.
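A minimal sketch of s13 under these assumptions (the optional weights are illustrative):

```python
# Sketch of s13: average, optionally after weighting, the posture
# evaluation parameters to obtain the first scoring result. The weight
# values, if any, are assumptions; the text leaves them configurable.
def first_scoring_result(params: dict, weights=None) -> float:
    if weights:
        weighted = [value * weights.get(name, 1.0)
                    for name, value in params.items()]
        return sum(weighted) / len(weighted)
    return sum(params.values()) / len(params)

# Matches the example in the text: (95 + 95) / 2 = 95.
score = first_scoring_result({"gracefulness": 95, "standard_degree": 95})
```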
S205: after the target video is played, determining a comprehensive scoring result according to the scoring result corresponding to each scoring time point configured for each animation in a plurality of animations corresponding to the target video; and displaying the comprehensive scoring result on the first interface, wherein the plurality of animations corresponding to the target video comprise animations matched with each video clip in the plurality of video clips.
In one embodiment, as described above, the target video includes a plurality of video clips, each with a matched animation, so after the target video is played, the electronic device can determine the user's comprehensive scoring result for following the movements in the target video from the scoring results corresponding to the scoring time points configured for each animation. As one implementation, the electronic device may average the scoring results corresponding to every scoring time point of every animation to obtain the comprehensive scoring result. As another implementation, it may weight those scoring results and average the weighted results to obtain the comprehensive scoring result; see the sketch below.
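A sketch of this aggregation, with optional per-time-point weights as an assumed refinement:

```python
# Sketch of S205: aggregate the scoring results of every scoring time
# point of every animation into the comprehensive scoring result.
# per_animation_scores[i][j] is the score at the j-th scoring time point
# of the i-th animation; weights, if given, are an assumption.
def comprehensive_score(per_animation_scores: list, weights=None) -> float:
    total, weight_sum = 0.0, 0.0
    for i, scores in enumerate(per_animation_scores):
        for j, score in enumerate(scores):
            w = weights[i][j] if weights else 1.0
            total += score * w
            weight_sum += w
    return total / weight_sum

# Two animations with two scoring time points each:
result = comprehensive_score([[90, 80], [85, 95]])  # -> 87.5
```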
After obtaining the comprehensive scoring result, the electronic device may display it on the first interface or on a second interface. When displayed on the first interface, the comprehensive scoring result may appear at any position, for example in the middle of the interface, or in a floating window. When displayed on the second interface, the electronic device switches from the first interface to the second interface after obtaining the comprehensive scoring result and then displays it there, again at any position or in a floating window.
In this embodiment, the electronic device may play the target video on the first interface; during the playing of a target video clip of the target video, play the target animation matched with that clip on the first interface, the target video clip being any one of the plurality of video clips included in the target video and the target animation being configured with at least one scoring time point; call the camera device to collect user images during the playing of the target video clip and display them on the first interface; determine the scoring result corresponding to each scoring time point according to the user image collected in the first time range before that scoring time point and the video image, contained in the target video clip, matched with it, and display that scoring result on the first interface when the scoring time point arrives; and, after the target video is played, determine a comprehensive scoring result from the scoring results corresponding to the scoring time points configured for each animation and display it on the first interface. With the scoring determination method provided by this embodiment, the corresponding scoring result is displayed whenever a scoring time point in an animation arrives, so scores appear within a short time and the experience is more engaging for the user; at the same time, because scoring is performed on the user images collected within the first time range and the video images matched with them, the user's score over a period of time is obtained without continuously capturing user images and comparing them against matched video images, which reduces device power consumption and saves the resources required for image processing.
Based on the description of the above embodiments of the scoring determination method, an embodiment of the present application further discloses a scoring determination apparatus, which may be a computer program (including program code) running in the above electronic device. The scoring determination apparatus may perform the method shown in fig. 2. Referring to fig. 4, the scoring determination apparatus may operate as follows:
the processing unit 401 is configured to play a target video on a first interface;
the processing unit 401 is further configured to, during playing of a target video clip of the target video, play a target animation matched with the target video clip on the first interface, where the target video clip is any one of a plurality of video clips included in the target video, and the target animation is configured with at least one scoring time point;
the processing unit 401 is further configured to invoke a camera device to collect a user image during the playing of the target video clip, and display the user image on the first interface;
the processing unit 401 is further configured to determine, according to the user image acquired in a first time range before each scoring time point and the video image matched with the user image acquired in the first time range, a scoring result corresponding to each scoring time point;
a display unit 402, configured to display, on the first interface, the scoring result corresponding to each scoring time point when that scoring time point arrives; the video image is included in the target video segment;
the processing unit 401 is further configured to determine, after the target video is played, a comprehensive scoring result according to a scoring result corresponding to each scoring time point configured for each of multiple animations corresponding to the target video; the plurality of animations corresponding to the target video comprise an animation matched with each video clip in the plurality of video clips;
the display unit 402 is further configured to display a comprehensive scoring result on the first interface.
In an embodiment, when determining the scoring result corresponding to each scoring time point according to the user image acquired in the first time range before each scoring time point and the video image matched with the user image acquired in the first time range, the processing unit 401 may specifically be configured to:
for a target scoring time point in the at least one scoring time point, determining a first scoring result corresponding to the target scoring time point according to a user image acquired in a first time range before the target scoring time point and a video image matched with the user image acquired in the first time range;
if the first scoring result is smaller than or equal to a scoring result threshold value, determining a second scoring result corresponding to the target scoring time point according to the user image acquired in a second time range before the target scoring time point and the video image matched with the user image acquired in the second time range, wherein the second time range comprises a first time range;
and determining a scoring result corresponding to the target scoring time point based on the second scoring result and the first scoring result.
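The two-stage flow just described can be sketched as follows; `score_over_range` is a hypothetical helper standing in for the joint-detection scoring of steps s11 to s13, and the threshold, range widths, and combination rule are illustrative assumptions:

```python
# Sketch of the fallback scoring flow: rescore over the wider second
# time range only when the first scoring result does not exceed the
# threshold, then combine both results. score_over_range is hypothetical.
def score_at_time_point(score_over_range, t: float,
                        threshold: float = 60.0) -> float:
    first = score_over_range(t, range_seconds=1.0)   # first time range
    if first > threshold:
        return first
    # The second time range includes the first, hence the wider window.
    second = score_over_range(t, range_seconds=2.0)
    return (first + second) / 2  # one assumed way to combine the two
```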
In an embodiment, before the target video is played on the first interface, the processing unit 401 is further configured to:
acquiring an initial frame rate of the target video and a frame rate of a user image acquired by the camera device;
when the initial frame rate of the target video is not consistent with the frame rate of the user image acquired by the camera device, updating the initial frame rate of the target video to the frame rate of the user image acquired by the camera device to obtain the updated frame rate of the target video;
and determining the updated frame rate of the target video as the frame rate of the target video.
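A minimal sketch of this frame-rate alignment, assuming frame rates expressed in frames per second:

```python
# Sketch of the frame-rate update before playback: if the target video's
# initial frame rate differs from the camera's capture frame rate, play
# the video at the camera's rate so that each captured user image lines
# up with one displayed video frame.
def aligned_frame_rate(video_fps: float, camera_fps: float) -> float:
    return camera_fps if video_fps != camera_fps else video_fps
```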
In an embodiment, when, for a target scoring time point of the at least one scoring time point, the processing unit 401 determines a first scoring result corresponding to the target scoring time point according to a user image acquired in a first time range before the target scoring time point and a video image matched with the user image acquired in the first time range, it may specifically be configured to:
for a target scoring time point in the at least one scoring time point, performing human body joint point detection on the user image collected in the first time range before the target scoring time point to obtain a first human body joint point detection result of the user image;
carrying out human body joint point detection on the video image matched with the user image collected in the first time range to obtain a second human body joint point detection result of the video image matched with the user image collected in the first time range;
constructing a user posture evaluation parameter set according to the first human body joint point detection result and the second human body joint point detection result, wherein the user posture evaluation parameter set comprises at least one posture evaluation parameter;
and generating a first scoring result corresponding to the target scoring time point according to the posture evaluation parameter set.
In one embodiment, the user images captured in the first time range comprise target user images, and the video images matching the user images captured in the first time range comprise target video images matching the target user images; the first human body joint point detection result includes a plurality of first joint points in the target user image and position information of each first joint point, the second human body joint point detection result includes a plurality of second joint points in the target video image and position information of each second joint point, and when the processing unit 401 constructs the user posture evaluation parameter set according to the first human body joint point detection result and the second human body joint point detection result, it may specifically be configured to:
counting the number of the first joint points and the number of the second joint points;
comparing the number of the first joint points with the number of the second joint points to obtain a first comparison result, wherein the first comparison result indicates the matching degree between the number of the first joint points and the number of the second joint points;
comparing the position information of each first joint point with the position information of a second joint point corresponding to the first joint point to obtain a second comparison result, wherein the second comparison result indicates the matching degree between the first joint point and the position of the second joint point corresponding to the first joint point;
determining the user posture standard degree according to the first comparison result and the second comparison result;
and constructing a user posture evaluation parameter set comprising the user posture standard degree.
In an embodiment, the user images collected in the first time range include a plurality of user images, the video images matched with the user images collected in the first time range include a plurality of frames of video images, and the processing unit 401, when constructing the user posture evaluation parameter set according to the first human joint detection result and the second human joint detection result, is specifically configured to:
drawing a first motion track of each joint point in multiple joint points in the multiple user images according to a first human body joint point detection result of each user image in the multiple user images;
drawing a second motion track of each joint point in the multiple joint points in the multiple frames of video images according to a second human body joint point detection result of each frame of video images in the multiple frames of video images;
matching the first motion track of each joint point in the multiple joint points in each user image with the second motion track of each joint point in the multiple joint points in each frame of video image to obtain motion track similarity;
determining the user posture gracefulness according to the motion trajectory similarity;
constructing a posture evaluation parameter set including the user posture gracefulness.
In one embodiment, the display unit 402 is further configured to: displaying a second interface, the second interface comprising a plurality of video segments of a target video; displaying a fourth interface in response to an animation configuration operation on a target video clip in the plurality of video clips, wherein the fourth interface comprises a candidate animation set;
the processing unit 401 is further configured to determine, when an animation selection operation for the candidate animation set is detected, the selected animation as the target animation; and, in response to a scoring time configuration operation on the target animation, configure at least one scoring time point for the target animation.
In one embodiment, when each scoring time point of the at least one scoring time point arrives, before the first interface displays the scoring result corresponding to each scoring time point, the processing unit 401 is further configured to:
when receiving an animation interrupt signal, pausing the playing of the target animation and recording the played duration of the target animation;
and when receiving an animation start signal, determining a playing position based on the played duration and continuing to play the target animation from that playing position as the starting point.
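A minimal sketch of this interrupt-and-resume logic, assuming a hypothetical player object with pause, seek, and play methods:

```python
# Sketch of animation interrupt and resume: record the played duration
# on interrupt, and resume from the position it implies on start.
# The `player` object and its pause/seek/play methods are hypothetical.
import time

class AnimationController:
    def __init__(self, player):
        self.player = player
        self.played = 0.0       # accumulated played duration in seconds
        self.started_at = None  # monotonic clock value at the last start

    def on_start_signal(self):
        self.player.seek(self.played)  # playing position from duration
        self.player.play()
        self.started_at = time.monotonic()

    def on_interrupt_signal(self):
        self.played += time.monotonic() - self.started_at
        self.player.pause()
```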
It can be understood that each functional unit of the score determining apparatus of this embodiment can be specifically implemented according to the method in the foregoing method embodiment fig. 2, and the specific implementation process thereof can refer to the related description of the foregoing method embodiment fig. 2, which is not described herein again.
Further, please refer to fig. 5, which is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device in the embodiment corresponding to fig. 2 may be the electronic device shown in fig. 5. As shown in fig. 5, the electronic device may include a processor 501 and a memory 504, and optionally an input device 502, an output device 503, and a camera device. The processor 501, the input device 502, the output device 503, and the memory 504 are connected by a bus 505. The memory 504 is used to store a computer program comprising program instructions, and the processor 501 is used to execute the program instructions stored in the memory 504.
In the embodiment of the present application, the processor 501 executes the executable program code in the memory 504 to perform the following operations:
playing the target video on the first interface;
during playing of a target video clip of the target video, playing a target animation matched with the target video clip on the first interface, wherein the target video clip is any one of a plurality of video clips included in the target video, and the target animation is configured with at least one scoring time point;
during the period of playing the target video clip, calling a camera device to collect a user image, and displaying the user image on the first interface; the frame rate of the target video is consistent with the frame rate of the user image acquired by the camera device;
determining a scoring result corresponding to each scoring time point according to a user image collected in a first time range before each scoring time point and a video image matched with the user image collected in the first time range, and displaying the scoring result corresponding to each scoring time point on the first interface when each scoring time point arrives; the video image is included in the target video segment;
and after the target video is played, determining a comprehensive scoring result according to the scoring result corresponding to each scoring time point configured for each animation in the plurality of animations corresponding to the target video, and displaying the comprehensive scoring result on the first interface, wherein the plurality of animations corresponding to the target video comprise animations matched with each video clip in the plurality of video clips.
In one embodiment, the processor 501, when determining the scoring result corresponding to each scoring time point according to the user image captured in the first time range before each scoring time point and the video image matched with the user image captured in the first time range, may specifically be configured to:
for a target scoring time point in the at least one scoring time point, determining a first scoring result corresponding to the target scoring time point according to a user image acquired in a first time range before the target scoring time point and a video image matched with the user image acquired in the first time range;
if the first scoring result is smaller than or equal to a scoring result threshold value, determining a second scoring result corresponding to the target scoring time point according to the user image acquired in a second time range before the target scoring time point and the video image matched with the user image acquired in the second time range, wherein the second time range comprises a first time range;
and determining a scoring result corresponding to the target scoring time point based on the second scoring result and the first scoring result.
In one embodiment, before the first interface plays the target video, the processor 501 is further configured to:
acquiring an initial frame rate of the target video and a frame rate of a user image acquired by the camera device;
when the initial frame rate of the target video is not consistent with the frame rate of the user image acquired by the camera device, updating the initial frame rate of the target video to the frame rate of the user image acquired by the camera device to obtain the updated frame rate of the target video;
and determining the updated frame rate of the target video as the frame rate of the target video.
In one embodiment, when determining, for a target scoring time point of the at least one scoring time point, a first scoring result corresponding to the target scoring time point according to a user image acquired in a first time range before the target scoring time point and a video image matched with the user image acquired in the first time range, the processor 501 may be specifically configured to:
for a target scoring time point in the at least one scoring time point, performing human body joint point detection on the user image acquired in the first time range before the target scoring time point to obtain a first human body joint point detection result of the user image acquired in the first time range;
carrying out human body joint point detection on the video image matched with the user image collected in the first time range to obtain a second human body joint point detection result of the video image matched with the user image collected in the first time range;
constructing a user posture evaluation parameter set according to the first human body joint point detection result and the second human body joint point detection result, wherein the user posture evaluation parameter set comprises at least one posture evaluation parameter;
and generating a first scoring result corresponding to the target scoring time point according to the posture evaluation parameter set.
In one embodiment, the captured user images in the first time range comprise target user images, and the video images matching the captured user images in the first time range comprise target video images matching the target user images; the first human body joint point detection result includes a plurality of first joint points in the target user image and position information of each first joint point, the second human body joint point detection result includes a plurality of second joint points in the target video image and position information of each second joint point, and when the processor 501 constructs the user posture evaluation parameter set according to the first human body joint point detection result and the second human body joint point detection result, the processor may specifically be configured to:
counting the number of the first joint points and the number of the second joint points;
comparing the number of the first joint points with the number of the second joint points to obtain a first comparison result, wherein the first comparison result indicates the matching degree between the number of the first joint points and the number of the second joint points;
comparing the position information of each first joint point with the position information of a second joint point corresponding to the first joint point to obtain a second comparison result, wherein the second comparison result indicates the matching degree between the first joint point and the position of the second joint point corresponding to the first joint point;
determining the user posture standard degree according to the first comparison result and the second comparison result;
and constructing a user posture evaluation parameter set comprising the user posture standard degree.
In one embodiment, the user images collected in the first time range include a plurality of user images, the video images matched with the user images collected in the first time range include a plurality of frames of video images, and the processor 501, when constructing the user posture evaluation parameter set according to the first human joint detection result and the second human joint detection result, is specifically configured to:
drawing a first motion track of each joint point in multiple joint points in the multiple user images according to a first human body joint point detection result of each user image in the multiple user images;
drawing a second motion track of each joint point in the multiple joint points in the multiple frames of video images according to a second human body joint point detection result of each frame of video images in the multiple frames of video images;
matching the first motion track of each joint point in the multiple joint points in each user image with the second motion track of each joint point in the multiple joint points in each frame of video image to obtain motion track similarity;
determining the user posture gracefulness according to the motion trajectory similarity;
constructing a posture evaluation parameter set including the user posture gracefulness.
In one embodiment, the processor 501 is further configured to:
displaying a second interface, the second interface comprising a plurality of video segments of a target video;
displaying a fourth interface in response to an animation configuration operation on a target video clip in the plurality of video clips, wherein the fourth interface comprises a candidate animation set;
when an animation selection operation for the candidate animation set is detected, determining the selected animation as a target animation;
and in response to a scoring time configuration operation on the target animation, configuring at least one scoring time point for the target animation.
In one embodiment, when each scoring time point of the at least one scoring time point arrives, before the first interface displays the scoring result corresponding to each scoring time point, the processor is further configured to:
when receiving an animation interrupt signal, pausing playing the target animation, and recording the playing time length of the target animation;
and when receiving an animation starting signal, determining a playing position based on the played time length, and continuously playing the target animation by taking the playing position as a starting point.
It should be understood that, in the embodiments of the present application, the processor 501 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory 504 may include a read-only memory and a random access memory, and provides instructions and data to the processor 501. A portion of the memory 504 may also include non-volatile random access memory.
In a specific implementation, the processor 501, the input device 502, the output device 503, and the memory 504 described in this embodiment of the present application may perform the implementations described in all the embodiments above, or the implementations described for the apparatus above, which are not repeated here.
A computer-readable storage medium is provided in an embodiment of the present application, and stores a computer program, where the computer program includes program instructions, and the program instructions, when executed by a processor, can perform the steps performed in all the above embodiments.
Embodiments of the present application further provide a computer program product or a computer program, where the computer program product or the computer program includes computer instructions, and the computer instructions are stored in a computer-readable storage medium, and when the computer instructions are executed by a processor of an electronic device, the computer instructions perform the methods in all the embodiments described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like. It is emphasized that, in order to further ensure the privacy and security of the data, the scoring results mentioned above may also be stored in a node of a blockchain. The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism and an encryption algorithm. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
While the invention has been described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A score determination method, comprising:
playing the target video on the first interface;
during playing of a target video clip of the target video, playing a target animation matched with the target video clip on the first interface, wherein the target video clip is any one of a plurality of video clips included in the target video, and the target animation is configured with at least one scoring time point;
during the period of playing the target video clip, calling a camera device to collect a user image, and displaying the user image on the first interface;
determining a scoring result corresponding to each scoring time point according to a user image collected in a first time range before each scoring time point and a video image matched with the user image collected in the first time range, and displaying the scoring result corresponding to each scoring time point on the first interface when each scoring time point arrives, wherein the video image is contained in the target video clip;
and after the target video is played, determining a comprehensive scoring result according to the scoring result corresponding to each scoring time point configured for each animation in the plurality of animations corresponding to the target video, and displaying the comprehensive scoring result on the first interface, wherein the plurality of animations corresponding to the target video comprise animations matched with each video clip in the plurality of video clips.
2. The method of claim 1, wherein determining the scoring result corresponding to each scoring time point according to the user image collected in the first time range before each scoring time point and the video image matched with the user image collected in the first time range comprises:
for a target scoring time point in the at least one scoring time point, determining a first scoring result corresponding to the target scoring time point according to a user image acquired in a first time range before the target scoring time point and a video image matched with the user image acquired in the first time range;
if the first scoring result is smaller than or equal to a scoring result threshold value, determining a second scoring result corresponding to the target scoring time point according to the user image acquired in a second time range before the target scoring time point and the video image matched with the user image acquired in the second time range, wherein the second time range comprises a first time range;
and determining a scoring result corresponding to the target scoring time point based on the second scoring result and the first scoring result.
3. The method of claim 1, wherein before the first interface plays the target video, the method further comprises:
acquiring an initial frame rate of the target video and a frame rate of a user image acquired by the camera device;
when the initial frame rate of the target video is not consistent with the frame rate of the user image acquired by the camera device, updating the initial frame rate of the target video to the frame rate of the user image acquired by the camera device to obtain the updated frame rate of the target video;
and determining the updated frame rate of the target video as the frame rate of the target video.
4. The method of claim 2, wherein the determining, for a target scoring time point of the at least one scoring time point, a first scoring result corresponding to the target scoring time point from a user image captured within a first time range prior to the target scoring time point and a video image matching the user image captured within the first time range comprises:
for a target scoring time point in the at least one scoring time point, performing human body joint point detection on the user image acquired in the first time range before the target scoring time point to obtain a first human body joint point detection result of the user image acquired in the first time range;
carrying out human body joint point detection on the video image matched with the user image collected in the first time range to obtain a second human body joint point detection result of the video image matched with the user image collected in the first time range;
constructing a user posture evaluation parameter set according to the first human body joint point detection result and the second human body joint point detection result, wherein the user posture evaluation parameter set comprises at least one posture evaluation parameter;
and generating a first scoring result corresponding to the target scoring time point according to the posture evaluation parameter set.
5. The method of claim 4, wherein the captured user image in the first time range comprises a target user image, and wherein the video image matching the captured user image in the first time range comprises a target video image matching the target user image; the first human body joint point detection result comprises a plurality of first joint points in the target user image and position information of each first joint point, the second human body joint point detection result comprises a plurality of second joint points in the target video image and position information of each second joint point, and the construction of the user posture evaluation parameter set according to the first human body joint point detection result and the second human body joint point detection result comprises the following steps:
counting the number of the first joint points and the number of the second joint points;
comparing the number of the first joint points with the number of the second joint points to obtain a first comparison result, wherein the first comparison result indicates the matching degree between the number of the first joint points and the number of the second joint points;
comparing the position information of each first joint point with the position information of a second joint point corresponding to the first joint point to obtain a second comparison result, wherein the second comparison result indicates the matching degree between the first joint point and the position of the second joint point corresponding to the first joint point;
determining the user posture standard degree according to the first comparison result and the second comparison result;
and constructing a user posture evaluation parameter set comprising the user posture standard degree.
6. The method of claim 4, wherein the user images captured within the first time range comprise a plurality of user images, wherein the video images matching the user images captured within the first time range comprise a plurality of frames of video images, and wherein constructing the set of user posture assessment parameters from the first and second human joint detection results comprises:
drawing a first motion track of each joint point in multiple joint points in the multiple user images according to a first human body joint point detection result of each user image in the multiple user images;
drawing a second motion track of each joint point in the multiple joint points in the multiple frames of video images according to a second human body joint point detection result of each frame of video images in the multiple frames of video images;
matching the first motion track of each joint point in the multiple joint points in each user image with the second motion track of each joint point in the multiple joint points in each frame of video image to obtain motion track similarity;
determining the user posture gracefulness according to the motion trajectory similarity;
constructing a posture evaluation parameter set including the user posture gracefulness.
7. The method of claim 1, further comprising:
displaying a second interface, the second interface comprising a plurality of video segments of a target video;
displaying a fourth interface in response to an animation configuration operation on a target video clip in the plurality of video clips, wherein the fourth interface comprises a candidate animation set;
when an animation selection operation for the candidate animation set is detected, determining the selected animation as a target animation;
and in response to a scoring time configuration operation on the target animation, configuring at least one scoring time point for the target animation.
8. A score determination device, comprising:
the processing unit is used for playing the target video on the first interface;
the processing unit is further configured to play a target animation matched with a target video clip on the first interface during playing of the target video clip of the target video, where the target video clip is any one of a plurality of video clips included in the target video, and the target animation is configured with at least one scoring time point;
the processing unit is further configured to call a camera device to capture a user image during the playing of the target video clip, and to display the user image on the first interface;
the processing unit is further configured to determine a scoring result corresponding to each scoring time point according to the user image captured in a first time range before each scoring time point and the video image matching the user image captured in the first time range, wherein the video image is comprised in the target video clip;
a display unit, configured to display, when each scoring time point arrives, the scoring result corresponding to that scoring time point on the first interface;
the processing unit is further configured to determine a comprehensive scoring result according to a scoring result corresponding to each scoring time point configured for each of multiple animations corresponding to the target video after the target video is played, where the multiple animations corresponding to the target video include an animation matched with each of the multiple video clips;
the display unit is further configured to display the comprehensive scoring result on the first interface.
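One plausible reading of the device's scoring flow: per-frame match scores from the first time range are pooled into a scoring result for each scoring time point, and once playback ends all scoring results are averaged into the comprehensive scoring result. The windowing and the plain average below are assumptions, not the claimed formulas.

```python
def score_at_time_point(frame_scores: list[tuple[float, float]],
                        time_point: float, first_range: float) -> float:
    """frame_scores holds (timestamp, per-frame match score) pairs; only
    frames captured in the first time range before the scoring time point
    contribute to that point's scoring result."""
    window = [s for t, s in frame_scores
              if time_point - first_range <= t <= time_point]
    return sum(window) / len(window) if window else 0.0

def comprehensive_score(per_time_point_scores: list[float]) -> float:
    """Combine the scoring results of every scoring time point configured
    for every animation of the target video after playback finishes."""
    return (sum(per_time_point_scores) / len(per_time_point_scores)
            if per_time_point_scores else 0.0)
```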
9. An electronic device, comprising:
a memory for storing a computer program;
a processor, configured to call the computer program in the memory to execute the scoring determination method of any one of claims 1 to 7.
10. A computer storage medium, wherein the computer storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to perform the scoring determination method of any one of claims 1 to 7.
CN202210145089.5A 2022-02-17 2022-02-17 Scoring determination method and device, electronic equipment and storage medium Pending CN114513694A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210145089.5A CN114513694A (en) 2022-02-17 2022-02-17 Scoring determination method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114513694A true CN114513694A (en) 2022-05-17

Family

ID=81551798

Country Status (1)

Country Link
CN (1) CN114513694A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107920269A (en) * 2017-11-23 2018-04-17 乐蜜有限公司 Video generation method, device and electronic equipment
CN107968921A (en) * 2017-11-23 2018-04-27 乐蜜有限公司 Video generation method, device and electronic equipment
CN112399234A (en) * 2019-08-18 2021-02-23 聚好看科技股份有限公司 Interface display method and display equipment
CN113678137A (en) * 2019-08-18 2021-11-19 聚好看科技股份有限公司 Display device
CN112487940A (en) * 2020-11-26 2021-03-12 腾讯音乐娱乐科技(深圳)有限公司 Video classification method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114638831A (en) * 2022-05-18 2022-06-17 合肥宏晶半导体科技有限公司 Image analysis method and device
CN114638831B (en) * 2022-05-18 2022-10-21 合肥宏晶半导体科技有限公司 Image analysis method and device
CN115273222A (en) * 2022-06-23 2022-11-01 武汉元淳传媒有限公司 Multimedia interaction analysis control management system based on artificial intelligence
CN115273222B (en) * 2022-06-23 2024-01-26 广东园众教育信息化服务有限公司 Multimedia interaction analysis control management system based on artificial intelligence

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination