CN112887782A - Image output method and device and electronic equipment - Google Patents


Info

Publication number
CN112887782A
Authority
CN
China
Prior art keywords
recording
condition
application program
target
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110071255.7A
Other languages
Chinese (zh)
Inventor
潘进
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202110071255.7A
Publication of CN112887782A
Legal status: Pending

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N21/4334Recording operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440236Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by media transcoding, e.g. video is transformed into a slideshow of still pictures, audio is converted into text
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/443OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N21/4438Window management, e.g. event handling following interaction with the user interface
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4781Games

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses an image output method, an image output apparatus, and an electronic device, and belongs to the field of communication technology. The method comprises the following steps: acquiring, while a target application program runs in the foreground, first characteristic information associated with the target application program; recording a running interface of the target application program under the condition that the first characteristic information meets a first preset condition; acquiring second characteristic information associated with the target application program during the recording process; ending the recording of the running interface under the condition that the second characteristic information meets a second preset condition; and outputting a plurality of continuous target images corresponding to the target application program according to the recording result. The method and the device can reduce the user's operation steps, improve the efficiency of recording motion pictures, and save the user's time.

Description

Image output method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to an image output method and device and electronic equipment.
Background
With the popularity of electronic devices, more and more users play games on electronic devices in their spare time.
While playing games, users often like to share their in-game moments online, for example by turning a "super god" moment in a game into a motion picture and posting it in forums or to their circle of friends to show their personal style.
At present, when a user wants to capture a video of a certain moment in a game, the user needs to wait until the game is over, extract the video clip from the game with video-capture software, and then publish it. This process is cumbersome and inefficient.
Disclosure of Invention
The embodiments of the present application aim to provide an image output method, an image output apparatus, and an electronic device, which can solve the problems of cumbersome operation and low efficiency when capturing a motion picture of a running application program.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides an image output method, including:
acquiring, while a target application program runs in the foreground, first characteristic information associated with the target application program;
recording a running interface of the target application program under the condition that the first characteristic information meets a first preset condition;
acquiring second characteristic information associated with the target application program during the recording process;
ending the recording of the running interface under the condition that the second characteristic information meets a second preset condition;
and outputting a plurality of continuous target images corresponding to the target application program according to the recording result.
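The five steps of the first aspect can be sketched as a polling loop. The sketch below is a hypothetical illustration, not the patent's prescribed implementation: `get_feature_info`, `grab_frame`, and the two condition predicates stand in for whatever sensors and preset conditions a concrete implementation uses.

```python
def capture_loop(get_feature_info, grab_frame, starts_recording, stops_recording):
    """Return the frames recorded between the start and end triggers.

    get_feature_info() yields the current characteristic information
    (None once the target application leaves the foreground), grab_frame()
    captures one frame of the running interface, and the two predicates
    encode the first and second preset conditions.
    """
    frames = []
    recording = False
    while True:
        info = get_feature_info()            # steps 101/103: acquire feature info
        if info is None:                     # application no longer in foreground
            return frames
        if not recording:
            if starts_recording(info):       # step 102: first preset condition met
                recording = True
        else:
            frames.append(grab_frame())      # record the running interface
            if stops_recording(info):        # step 104: second preset condition met
                return frames                # step 105: output the recorded frames
```

For example, with expression information as the characteristic, recording would start on an "excited" reading and stop on a "calm" one.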
In a second aspect, an embodiment of the present application provides an image output apparatus, including:
the system comprises a first characteristic acquisition module, a second characteristic acquisition module and a first characteristic analysis module, wherein the first characteristic acquisition module is used for acquiring first characteristic information associated with a target application program in the process of foreground running of the target application program;
the running interface recording module is used for recording the running interface of the target application program under the condition that the first characteristic information meets a first preset condition;
the second characteristic acquisition module is used for acquiring second characteristic information associated with the target application program in the recording process;
the interface recording ending module is used for ending recording of the running interface under the condition that the second characteristic information meets a second preset condition;
and the target image output module is used for outputting a plurality of continuous target images corresponding to the target application program according to the recording result.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image output method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image output method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the image output method according to the first aspect.
In the embodiments of the present application, first characteristic information associated with a target application program is acquired while the target application program runs in the foreground; the running interface of the target application program is recorded when the first characteristic information meets a first preset condition; second characteristic information associated with the target application program is acquired during recording; the recording ends when the second characteristic information meets a second preset condition; and a plurality of continuous target images corresponding to the target application program are output according to the recording result. By using the characteristic information associated with the target application program to determine the start and end points of recording, the target images can be captured while the user is operating the target application program, without the user making a motion picture afterwards. This reduces the user's operation steps, improves the efficiency of recording motion pictures, and saves the user's time.
Drawings
Fig. 1 is a flowchart illustrating steps of an image output method according to an embodiment of the present application;
fig. 2 is a schematic diagram illustrating a mode for starting a motion picture making process according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating obtaining a touch frequency according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an image output apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of another electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be appreciated that data so designated may be interchanged under appropriate circumstances, so that embodiments of the application can be practiced in orders other than those illustrated or described herein. Moreover, the terms "first", "second", and the like do not limit the number of objects; for example, a first object can be one object or more than one. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and following objects.
The image output method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
Referring to fig. 1, a flowchart illustrating steps of an image output method provided in an embodiment of the present application is shown, and as shown in fig. 1, the image output method may specifically include the following steps:
step 101: in the process of running a target application program in a foreground, first characteristic information associated with the target application program is acquired.
The embodiments of the present application may be applied to scenarios in which the running interface of an application program is recorded, according to characteristic information associated with the running application program, in order to generate a motion picture.
The target application refers to an application that needs to perform interface recording to generate a motion picture and is currently operated in the foreground, and in this example, the target application may be a game application, a video application, or the like, and specifically, may be determined according to a business requirement, which is not limited in this embodiment.
The first characteristic information refers to characteristic information associated with the running target application program. In this embodiment, it may be expression information of the user who operates the target application program and the frequency at which the user touches the display screen during operation; it may also be sound information generated by the target application program during operation, such as game sound, or sound uttered by the user operating the target application program. It may be determined according to business requirements, which is not limited in this embodiment.
While the target application program runs in the foreground, the first characteristic information associated with it can be acquired in real time. For example, when the first characteristic information is the user's expression and the frequency of touching the display screen, the front camera of the mobile phone can be turned on after the target application program starts, facial images of the user can be captured through the front camera and analyzed to obtain the user's facial expression, and the frequency at which the user touches the display screen can be detected through a sensor built into the mobile phone. When the first characteristic information is sound information, a sound collection device built into the mobile phone can be turned on after the target application program starts, and the sound emitted by the target application program, or by the user operating it, can be collected in real time through the sound collection device.
It should be understood that the above examples are only examples for better understanding of the technical solutions of the embodiments of the present application, and are not to be taken as the only limitation to the embodiments.
Of course, in this embodiment, a function for intelligently creating GIFs may be preset in the electronic device. After an application program that supports motion-picture recording is started, a popup window can be displayed on its interface, with text asking whether to enable intelligent GIF creation and two buttons, "confirm" and "no", as shown in fig. 2. If the user clicks the "confirm" button, the process of acquiring the characteristic information associated with the target application program starts; if the user clicks the "no" button, no motion picture needs to be created automatically for the application program, and the subsequent acquisition of characteristic information is not performed.
After the first feature information associated with the target application is obtained, step 102 is performed.
Step 102: and recording the running interface of the target application program under the condition that the first characteristic information meets a first preset condition.
The first preset condition is a condition preset by a service staff for judging whether to record the running interface of the target application program.
The first preset condition may be a facial expression condition, a touch frequency threshold condition, a sound condition, or the like. Specifically, the first preset condition is associated with the first characteristic information, and its specific form may be determined according to business requirements, which is not limited in this embodiment. The specific form of the first preset condition is described in detail in the specific implementations below and is not repeated here.
After obtaining the first feature information associated with the target application program, it may be determined whether the first feature information satisfies a first preset condition.
And when the first characteristic information is judged not to meet the first preset condition, continuing to execute the step of acquiring the first characteristic information.
When the first characteristic information is judged to meet the first preset condition, the recording function of the electronic equipment can be started to start video recording on the running interface of the target application program.
After the running interface of the target application program is recorded, step 103 is executed.
Step 103: and acquiring second characteristic information associated with the target application program in the recording process.
The second characteristic information refers to characteristic information associated with the running target application program, acquired while the running interface is being recorded. In this embodiment, it may be expression information of the user who operates the target application program and the frequency at which the user touches the display screen; it may also be sound information generated by the target application program, such as game sound, or sound uttered by the user operating it. It may be determined according to business requirements, which is not limited in this embodiment.
In this embodiment, the first characteristic information and the second characteristic information are opposite in meaning: the first characteristic information indicates that recording of the running interface should start, and the second characteristic information indicates that it should end. For example, when the characteristic information is the user's expression and the first characteristic information is an excited expression, the second characteristic information is a calm expression. Recording of the running interface then starts when the user operating the target application program is detected to show an excited expression, and ends when, during recording, the user is detected to show a calm expression.
After the recording of the running interface of the target application program is started, second characteristic information associated with the target application program can be acquired in real time in the process of recording the running interface.
It can be understood that the obtaining manner of the second characteristic information is similar to that of the first characteristic information, and the embodiments of the present application are not described herein again.
In the recording process, after the second feature information associated with the target application is acquired, step 104 is executed.
Step 104: and under the condition that the second characteristic information meets a second preset condition, ending the recording of the running interface.
The second preset condition is a condition preset by service personnel for judging whether to finish recording the running interface of the target application program.
The second preset condition may be a facial expression condition, a touch frequency threshold condition, a sound condition, or the like. Specifically, the second preset condition is associated with the second characteristic information, and its specific form may be determined according to business requirements, which is not limited in this embodiment. The specific form of the second preset condition is described in detail in the specific implementations below and is not repeated here.
After obtaining the second feature information associated with the target application program, it may be determined whether the second feature information satisfies a second preset condition.
And under the condition that the second characteristic information is judged not to meet the second preset condition, continuing to execute the step of acquiring the second characteristic information.
And under the condition that the second characteristic information is judged to meet the second preset condition, the recording of the running interface is finished.
After the recording of the running interface is finished, step 105 is performed.
Step 105: and outputting a plurality of continuous target images corresponding to the target application program according to the recording result.
The plurality of continuous target images can be a video segment, or a dynamic image converted from the video segment.
After the recording of the running interface ends, the video recorded between the start and the end of recording can be obtained. Taking the recorded video as the recording result, a plurality of continuous target images corresponding to the target application program can be generated from it and output.
Of course, in this embodiment, after the video clip is recorded, a GIF editor pre-installed in the electronic device may be started to convert the video into images.
Of course, in a specific implementation, the recorded video may also be converted into the target image in other manners, and specifically, the conversion manner may be determined according to business requirements, and the specific conversion manner is not limited in this embodiment.
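As one possible conversion manner (an assumption for illustration, not a method prescribed by the disclosure), the recorded frames can be down-sampled before being handed to a GIF encoder, since motion pictures typically use a much lower frame rate than screen recordings:

```python
def sample_frames(frames, fps_in, fps_out):
    """Pick evenly spaced frames so a recording at fps_in frames per second
    is reduced to roughly fps_out frames per second, e.g. before GIF encoding."""
    if fps_out <= 0 or fps_out >= fps_in:
        return list(frames)
    step = fps_in / fps_out           # keep one frame out of every `step`
    count = int(len(frames) / step)   # number of output frames
    return [frames[int(i * step)] for i in range(count)]
```

The sampled frames could then be written out with any GIF encoder, for example Pillow's `Image.save(..., save_all=True, append_images=...)`.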
According to the method and the device, the first characteristic information and the second characteristic information associated with the target application program are combined, the dynamic image of part of the running interface of the target application program can be automatically generated, the user does not need to make the dynamic image in the later period, the operation steps of the user can be reduced, the recording efficiency of the dynamic image is improved, and the time of the user is saved.
When the first feature information and the second feature information associated with the target application are user expressions and frequency information of user manipulation of the display screen, the scheme of the embodiment may be described in detail with reference to the following specific implementation manner.
In a specific implementation manner of the present application, the step 101 may include:
substep A1: acquiring first expression information of a user operating the target application program and a first touch frequency of the user touching a display screen.
In this embodiment, the first expression information refers to expression information of a user operating a target application program, which is acquired before a running interface of the target application program is recorded.
The first touch frequency refers to the frequency of the user operating the target application program for operating the display screen, which is obtained before the running interface of the target application program is recorded.
In the process of running the target application program in the foreground, the first expression information of a user operating the target application program and the first touch frequency of the user touching the display screen can be acquired in real time. As shown in fig. 3, the first touch frequency may include a sliding frequency and a clicking frequency, where the sliding frequency is a frequency of the left thumb of the user performing a sliding operation on the display screen, and the clicking frequency is a frequency of the right thumb of the user clicking the display screen.
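The touch frequency described above can be estimated by counting touch events within a sliding time window. The sketch below is a hypothetical illustration; the class name and the window size are assumptions, not part of the disclosure.

```python
from collections import deque

class TouchFrequencyMeter:
    """Estimate touches per second over a sliding time window, standing in
    for the built-in sensor mentioned in the text."""

    def __init__(self, window_s=2.0):
        self.window_s = window_s
        self.events = deque()   # timestamps of recent touch events

    def on_touch(self, timestamp):
        """Record one touch event (slide or click) at the given time."""
        self.events.append(timestamp)

    def frequency(self, now):
        """Touches per second within the last window_s seconds."""
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()   # drop events older than the window
        return len(self.events) / self.window_s
```

A real implementation would feed this from the platform's touch-event callbacks; sliding and clicking events could also be counted separately.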
After the first expression information and the first touch frequency are obtained, whether they satisfy the preset conditions is determined, as described in detail in sub-step A2 below.
In a specific implementation manner of the present application, the step 102 may include:
substep A2: and recording the running interface of the target application program under the condition that the first expression information meets a first preset expression condition and the first touch frequency is greater than a first preset frequency.
The first preset expression condition is a condition which is preset by service personnel and is used for judging whether the expression of a user for controlling the target application program meets the recording of the running interface.
The first preset frequency is a frequency preset by a service staff and used for judging whether the frequency of the user for controlling the target application program to control the display screen meets the condition of running interface recording or not.
After the first expression information and the first touch frequency are obtained, whether the first expression information meets a first preset expression condition or not and a size relation between the first touch frequency and the first preset frequency can be judged.
And under the condition that the first expression information is determined not to meet the first preset expression condition and/or the first touch frequency is less than or equal to the first preset frequency, continuing to execute the step of acquiring the first expression information and the first touch frequency.
And recording the running interface of the target application program under the condition that the first expression information meets a first preset expression condition and the first touch frequency is greater than the first preset frequency.
In the process of recording the running interface of the target application program, the user's expression and the frequency at which the user touches the display screen are acquired in real time, and whether to end the recording is determined according to them, as described in detail in the following sub-step A3.
In a specific implementation manner of the present application, the step 103 may include:
substep A3: and in the recording process, second expression information of the user and a second touch frequency of the user touching the display screen are obtained.
The second expression information is user expression information of a user who controls the target application program, which is acquired in the process of recording the running interface of the target application program.
The second touch frequency is a frequency of acquiring a user touch display screen for controlling the target application program in a process of recording the running interface of the target application program.
In the process of recording the running interface of the target application program, second expression information of the user and second touch frequency of the user touch display screen can be acquired.
After the second expression information and the second touch frequency are obtained, the process of determining whether the second expression information and the second touch frequency satisfy the condition for ending the recording of the running interface may be described in detail with reference to sub-step A4 below.
In a specific implementation manner of the present application, the step 104 may include:
substep A4: and when the second expression information meets a second preset expression condition and the second touch frequency is less than or equal to the first preset frequency, ending the recording of the running interface.
The second preset expression condition is a condition preset by service personnel and used for judging whether the expression of the user who controls the target application program meets the condition for finishing recording the running interface.
After the second expression information and the second touch frequency are obtained, it can be judged whether the second expression information meets the second preset expression condition and how the second touch frequency compares with the first preset frequency.
And under the condition that the second expression information does not meet a second preset expression condition and/or the second touch frequency is greater than the first preset frequency, continuing to execute the step of acquiring the second expression information and the second touch frequency.
And when the second expression information meets a second preset expression condition and the second touch frequency is less than or equal to the first preset frequency, ending the recording of the running interface.
After the recording of the running interface is finished, a video recorded in a time period from the start of the recording of the running interface to the end of the recording of the running interface can be acquired, and the recorded video is used as a recording result.
According to the method and the device, the user's expression and the frequency at which the user touches the screen are combined as the conditions for judging whether to start or end recording of the running interface of the target application program, so that a moving picture of the target application program can be made while the target application program is running. No manual operation by the user is needed, which improves the efficiency of making the moving picture, saves the user's time, and improves the user's experience.
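The expression- and touch-frequency-based start and end conditions described above can be sketched as a pair of predicates. This is a minimal illustration, not the patent's implementation: the expression sets and the frequency threshold are invented placeholder values, and a real system would obtain the expression label from a face-recognition module and the touch frequency from a touch-event counter.

```python
# Hypothetical sketch of the expression/touch-frequency gating described
# above; the expression sets and threshold value are illustrative only.

EXCITED_EXPRESSIONS = {"happy", "surprised"}   # stands in for the first preset expression condition
CALM_EXPRESSIONS = {"neutral", "calm"}         # stands in for the second preset expression condition
PRESET_FREQUENCY = 3.0                         # first preset frequency, in touches per second


def should_start_recording(expression: str, touch_freq: float) -> bool:
    """Start only when the expression meets the first condition AND the
    touch frequency exceeds the first preset frequency."""
    return expression in EXCITED_EXPRESSIONS and touch_freq > PRESET_FREQUENCY


def should_stop_recording(expression: str, touch_freq: float) -> bool:
    """Stop only when the expression meets the second condition AND the
    touch frequency has fallen to or below the first preset frequency."""
    return expression in CALM_EXPRESSIONS and touch_freq <= PRESET_FREQUENCY
```

Note that both conditions must hold simultaneously in each predicate, mirroring the "and" in sub-steps A2 and A4; a calm expression alone, or a low touch frequency alone, does not end the recording.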
In this embodiment, the first preset expression condition, the second preset expression condition, and the first preset frequency may also be updated; specifically, this can be described in detail in conjunction with the following specific implementation manner.
In another specific implementation manner of the present application, after the step 105, the method may further include:
step B1: a first input of the user is received.
In the present embodiment, the first input refers to an input performed by the user for determining whether or not to save the target image.
After the target images of the target application program are generated, the user may be prompted as to which images have been output, and then the first input of the user is received.
After receiving the first input by the user, step B2 is performed.
Step B2: and responding to the first input, and determining a first target image to be stored in the target images and a second target image to be deleted in the target images according to the input parameters of the first input.
The first target image is a target image which needs to be stored in the target image.
The second target image is a target image which needs to be deleted in the target image.
The input parameters refer to parameters of the first input executed by the user on a target image. After the target images of the target application program are generated, the user may be prompted as to which target images have been acquired, and the target images may be displayed; meanwhile, two buttons such as "save" and "delete" may be displayed at positions associated with each displayed target image. In this case, the input parameter of the first input corresponds to which button the user clicks.
Of course, the input parameter may also be other types of parameters, and specifically, may be determined according to business requirements, which is not limited in this embodiment.
After receiving the first input from the user, in response to the first input, the first target image to be saved and the second target image to be deleted may be determined from the target images according to the input parameters of the first input, and then steps B3, B4 and B5 may be performed.
Step B3: and updating the first preset expression condition according to the first expression information corresponding to the first target image and the first expression information corresponding to the second target image.
Step B4: and updating the second preset expression condition according to the second expression information corresponding to the first target image and the second expression information corresponding to the second target image.
Step B5: and updating the first preset frequency according to the first touch frequency and the second touch frequency corresponding to the first target image and the first touch frequency and the second touch frequency corresponding to the second target image.
After the first target image and the second target image are determined, the first preset expression condition may be updated by combining the first expression information of the first target image and the first expression information of the second target image. And updating the second preset expression condition by combining the second expression information of the first target image and the second expression information of the second target image. Meanwhile, the value of the first preset frequency is updated by combining the first touch frequency and the second touch frequency of the first target image and the first touch frequency and the second touch frequency of the second target image.
Specifically, it is assumed that the initial values of the first preset expression condition, the second preset expression condition, and the first preset frequency are as shown in table 1 below:
table 1:
By continuously collecting the user's expression and operation-frequency data, and combining these with the user's degree of approval of the collected GIFs after finishing the game (namely, the confirmation result), the start and end conditions of the data model (the expression library and the frequency threshold) are optimized. With long-term collection and optimization, the GIF collection function can become more intelligent and closer to the user's real intention.
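The optimization of steps B3-B5 can be illustrated with a toy update rule. Everything in this sketch is an assumption for illustration: the patent does not specify how the expression library or the frequency threshold is adjusted, so the code below simply reinforces expressions from saved GIFs, drops expressions that appear only in deleted GIFs, and nudges the threshold toward the midpoint between the average touch frequencies of the saved and deleted recordings.

```python
# Illustrative sketch of the threshold optimization in steps B3-B5.
# The update rule itself is an assumption, not taken from the patent.

def update_model(expr_library: set, threshold: float,
                 saved: list, deleted: list, rate: float = 0.1):
    """saved/deleted are lists of (expression, touch_frequency) samples
    taken from the first/second target images the user kept or discarded."""
    for expr, _ in saved:
        expr_library.add(expr)            # reinforce expressions the user approved
    for expr, _ in deleted:
        if not any(e == expr for e, _ in saved):
            expr_library.discard(expr)    # drop expressions tied only to rejected GIFs
    if saved and deleted:
        saved_avg = sum(f for _, f in saved) / len(saved)
        deleted_avg = sum(f for _, f in deleted) / len(deleted)
        # nudge the threshold a small step toward the midpoint of the two groups
        threshold += rate * ((saved_avg + deleted_avg) / 2 - threshold)
    return expr_library, threshold
```

A small learning rate keeps any single save/delete decision from moving the threshold far, which matches the patent's emphasis on gradual, long-term optimization.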
When the first characteristic information and the second characteristic information associated with the target application are sound information, the solution of the present embodiment may be described in detail with reference to the following specific implementation manner.
In another specific implementation manner of the present application, the step 101 may include:
substep C1: and acquiring first audio information of the target application program in the running process.
In this embodiment, the first audio information refers to first audio information of a target application acquired during a process of running the target application in the foreground.
It is understood that the first audio information may be audio of sounds emitted by the target application program, such as announcements like "First blood" or "Double kill" emitted while the user is playing a game, or audio of sounds made by the user operating the target application program, such as the user's taunts after killing an opponent's hero during a game. Specifically, this may be determined according to service requirements, which is not limited in this embodiment.
In the process of running the target application program in the foreground, a sound collecting device in the electronic equipment can be started to acquire the first audio information in real time.
After the first audio information is obtained, it may be determined whether to enter a process of running interface recording in conjunction with the first audio information, and specifically, it may be described in detail in conjunction with sub-step C2 described below.
In another specific implementation manner of the present application, the step 102 may include:
substep C2: and recording the running interface under the condition that the first audio information meets a first preset audio condition.
The first preset audio condition refers to an audio condition preset by a service person for starting to run the interface recording.
After the first audio information is acquired, whether the first audio information meets a first preset audio condition may be determined.
And under the condition that the first audio information does not meet the first preset audio condition, continuing to execute the step of acquiring the first audio information.
And under the condition that the first audio information meets the first preset audio condition, video recording can be started on the running interface of the target application program.
In the recording process of the running interface, the audio information may be acquired in real time to end the recording of the running interface when the matching audio information is acquired, and specifically, the detailed description may be given in conjunction with sub-step C3 or sub-step C4 described below.
In another specific implementation manner of the present application, the step 103 may include:
substep C3: and acquiring second audio information of the target application program in the running process.
The second audio information refers to audio information of the target application program, acquired during recording of the running interface, and used for judging whether to end the recording of the running interface.
In this example, the second audio information may be audio information that is emitted by a running target application, or audio information that is emitted by a user who manipulates the target application, and the like, and specifically, the second audio information may be determined according to a service requirement, which is not limited in this embodiment.
In the recording process of the running interface of the target application program, second audio information of the target application program in the running process can be acquired in real time.
Substep C4: and timing from the recording of the operation interface to obtain the target time length.
The target duration refers to the length of time for which the running interface of the target application program has been recorded, and the set duration against which it is compared is preset by service personnel and may be, for example, 10s or 20s; specifically, its value may be determined according to service requirements, which is not limited in this embodiment.
Timing starts when the recording of the running interface begins, so that the target duration for which the running interface has been recorded is obtained.
After the second audio information or the target time length is obtained, whether a condition for ending recording is satisfied may be determined in conjunction with the second audio information or the target time length, and in particular, it may be described in detail in conjunction with sub-step C5 described below.
In another specific implementation manner of the present application, the step 104 may include:
substep C5: and finishing recording the running interface under the condition that the second audio information meets a second preset audio condition or the target time length reaches a set time length.
The second preset audio condition is an audio condition preset by a service person for ending recording of the running interface.
When the condition for finishing recording the running interface is the audio condition, after the second audio information is acquired, whether the second audio information meets a second preset audio condition can be judged.
And under the condition that the second audio information does not meet the second preset audio condition, continuing the recording process of the running interface, and executing the step of acquiring the second audio information.
And under the condition that the second audio information meets a second preset audio condition, ending the recording process of the running interface, and acquiring the recorded video as a recording result.
When the condition for ending the recording of the running interface is a duration condition, after the target duration is obtained, it can be judged whether the target duration reaches the set duration.
And under the condition that the target time length does not reach the set time length, the process of recording the running interface of the target application program can be continuously executed, and the step of obtaining the target time length of the recorded running interface is executed.
And under the condition that the target time length reaches the set time length, ending the recording of the running interface, and acquiring the recorded video so as to take the recorded video as a recording result.
After the recording result is obtained, target images of the target application program can be generated according to the recording result; specifically, a GIF editor installed in the electronic device can be started to convert the recorded video into the target images.
According to the method and the device, the starting time point of recording the running interface is determined by combining the audio information, and the recording condition of the running interface is finished, so that the purpose of automatically making the moving picture of part of the running interface of the target application program can be achieved, manual operation of a user is not needed, and the user experience is improved.
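The audio- and duration-controlled recording of substeps C1-C5 can be sketched as a small controller. The trigger phrases, the set duration, and the class and method names below are illustrative assumptions; a real implementation would feed the controller keyword-spotting results from the device's sound-collecting apparatus rather than plain strings.

```python
# Hedged sketch of the audio/duration-controlled recording flow in
# substeps C1-C5; the phrase sets and set duration are invented examples.

START_PHRASES = {"first blood", "double kill"}   # stands in for the first preset audio condition
STOP_PHRASES = {"defeat", "victory"}             # stands in for the second preset audio condition
SET_DURATION = 20.0                              # set duration, in seconds


class RecordingController:
    def __init__(self):
        self.recording = False
        self.start_time = None

    def on_audio(self, phrase: str, now: float) -> None:
        """Substeps C1-C2: start recording when the first audio condition is met."""
        if not self.recording and phrase in START_PHRASES:
            self.recording, self.start_time = True, now

    def should_stop(self, phrase: str, now: float) -> bool:
        """Substep C5: end when the second audio condition is met OR the
        target duration (elapsed recording time) reaches the set duration."""
        if not self.recording:
            return False
        return phrase in STOP_PHRASES or (now - self.start_time) >= SET_DURATION
```

The duration branch acts as a fallback, guaranteeing that a recording triggered by audio cannot run indefinitely if no matching stop phrase is ever heard.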
According to the image output method provided by the embodiment of the application, in the process of running the target application program on the foreground, first characteristic information associated with the target application program is obtained, the running interface of the target application program is recorded under the condition that the first characteristic information meets a first preset condition, second characteristic information associated with the target application program is obtained in the recording process, the recording of the running interface is finished under the condition that the second characteristic information meets a second preset condition, and a plurality of continuous target images corresponding to the target application program are output according to the recording result. According to the method and the device, the starting point and the end point of recording of the running interface are determined by combining the characteristic information associated with the target application program, the target image can be recorded in the process that a user operates the target application program, the user does not need to make a later-stage motion picture, the operation steps of the user can be reduced, the efficiency of recording the motion picture is improved, and the time of the user is saved.
It should be noted that, in the image output method provided in the embodiment of the present application, the execution subject may be an image output apparatus, or a control module in the image output apparatus for executing the image output method. The embodiment of the present application describes an image output apparatus provided in the embodiment of the present application by taking an image output apparatus as an example to execute an image output method.
Referring to fig. 4, a schematic structural diagram of an image output apparatus provided in an embodiment of the present application is shown, and as shown in fig. 4, the image output apparatus 400 may specifically include the following modules:
a first feature obtaining module 410, configured to obtain first feature information associated with a target application in a process of running the target application in a foreground;
the running interface recording module 420 is configured to record the running interface of the target application program when the first feature information meets a first preset condition;
a second characteristic obtaining module 430, configured to obtain second characteristic information associated with the target application program in a recording process;
an interface recording ending module 440, configured to end recording of the running interface when the second feature information meets a second preset condition;
and a target image output module 450, configured to output a plurality of consecutive target images corresponding to the target application according to the recording result.
Optionally, the first feature obtaining module 410 includes:
the first frequency acquisition unit is used for acquiring first expression information of a user operating the target application program and a first touch frequency of a user touch display screen;
the operation interface recording module 420 includes:
the first interface recording unit is used for recording the running interface of the target application program under the condition that the first expression information meets a first preset expression condition and the first touch frequency is greater than a first preset frequency;
the second feature obtaining module 430 includes:
the second frequency acquisition unit is used for acquiring second expression information of the user and a second touch frequency of the user touching the display screen in the recording process;
the interface recording ending module 440 includes:
and the first interface recording ending unit is used for ending recording of the running interface under the condition that the second expression information meets a second preset expression condition and the second touch frequency is less than or equal to the first preset frequency.
Optionally, the method further comprises:
the first input receiving module is used for receiving a first input of the user;
the first image determining module is used for responding to the first input, and determining a first target image to be stored in the target images and a second target image to be deleted in the target images according to input parameters of the first input;
the first condition updating module is used for updating the first preset expression condition according to first expression information corresponding to the first target image and first expression information corresponding to the second target image;
the second condition updating module is used for updating the second preset expression condition according to second expression information corresponding to the first target image and second expression information corresponding to the second target image;
and the preset frequency updating module is used for updating the first preset frequency according to the first touch frequency and the second touch frequency corresponding to the first target image and the first touch frequency and the second touch frequency corresponding to the second target image.
Optionally, the first feature obtaining module 410 includes:
the first audio acquisition unit is used for acquiring first audio information of the target application program in the running process;
the operation interface recording module 420 includes:
the second interface recording unit is used for recording the running interface under the condition that the first audio information meets a first preset audio condition;
the second feature obtaining module 430 includes:
the second audio acquisition unit is used for acquiring second audio information of the target application program in the running process;
the target duration obtaining unit is used for timing from the beginning of recording of the operation interface to obtain target duration;
the interface recording ending module 440 includes:
and the second interface recording ending unit is used for ending the recording of the running interface under the condition that the second audio information meets a second preset audio condition or the target time length reaches a set time length.
The image output device provided by the embodiment of the application acquires first characteristic information associated with a target application program in a foreground running process, records a running interface of the target application program under the condition that the first characteristic information meets a first preset condition, acquires second characteristic information associated with the target application program in the recording process, ends recording the running interface under the condition that the second characteristic information meets a second preset condition, and outputs a plurality of continuous target images corresponding to the target application program according to a recording result. According to the method and the device, the starting point and the end point of recording of the running interface are determined by combining the characteristic information associated with the target application program, the target image can be recorded in the process that a user operates the target application program, the user does not need to make a later-stage motion picture, the operation steps of the user can be reduced, the efficiency of recording the motion picture is improved, and the time of the user is saved.
The image output device in the embodiment of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like; the embodiments of the present application are not particularly limited.
The image output apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and the embodiments of the present application are not specifically limited.
The image output device provided in the embodiment of the present application can implement each process implemented in the method embodiment of fig. 1, and is not described here again to avoid repetition.
Optionally, as shown in fig. 5, an electronic device 500 is further provided in this embodiment of the present application, and includes a processor 501, a memory 502, and a program or an instruction stored in the memory 502 and executable on the processor 501, where the program or the instruction is executed by the processor 501 to implement each process of the above-mentioned embodiment of the image output method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 6 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 600 includes, but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and the like.
Those skilled in the art will appreciate that the electronic device 600 may further comprise a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 610 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 6 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 610 is configured to, in a process of foreground running of a target application, obtain first feature information associated with the target application; recording an operation interface of the target application program under the condition that the first characteristic information meets a first preset condition; acquiring second characteristic information associated with the target application program in the recording process; under the condition that the second characteristic information meets a second preset condition, finishing recording the running interface; and outputting a plurality of continuous target images corresponding to the target application program according to the recording result.
According to the method and the device, the starting point and the end point of recording of the running interface are determined by combining the characteristic information associated with the target application program, the target image can be recorded in the process that a user operates the target application program, the user does not need to make a later-stage motion picture, the operation steps of the user can be reduced, the efficiency of recording the motion picture is improved, and the time of the user is saved.
Optionally, the processor 610 is further configured to obtain first expression information of a user operating the target application program and a first touch frequency at which the user touches the display screen; recording an operation interface of the target application program under the condition that the first expression information meets a first preset expression condition and the first touch frequency is greater than a first preset frequency; in the recording process, second expression information of the user and a second touch frequency of the user touching the display screen are obtained; and when the second expression information meets a second preset expression condition and the second touch frequency is less than or equal to the first preset frequency, ending the recording of the running interface.
A processor 610 further configured to receive a first input by the user; responding to the first input, and determining a first target image to be stored in the target images and a second target image to be deleted in the target images according to input parameters of the first input; updating the first preset expression condition according to first expression information corresponding to the first target image and first expression information corresponding to the second target image; updating the second preset expression condition according to second expression information corresponding to the first target image and second expression information corresponding to the second target image; and updating the first preset frequency according to the first touch frequency and the second touch frequency corresponding to the first target image and the first touch frequency and the second touch frequency corresponding to the second target image.
The processor 610 is further configured to obtain first audio information of the target application program in the running process; recording the running interface under the condition that the first audio information meets a first preset audio condition; acquiring second audio information of the target application program in the running process; or timing from the recording of the operation interface to acquire a target time length; and finishing recording the running interface under the condition that the second audio information meets a second preset audio condition or the target time length reaches a set time length.
According to the method and the device, the moving picture recording process is automatically triggered by combining the expression of the user and the frequency of the touch screen of the user or the audio frequency of the running application program, so that the manual operation of the user can be avoided, the operation steps of the user are reduced, and the experience of the user is improved.
It is to be understood that, in the embodiment of the present application, the input Unit 604 may include a Graphics Processing Unit (GPU) 6041 and a microphone 6042, and the Graphics Processing Unit 6041 processes image data of a still picture or a video obtained by an image capturing apparatus (such as a camera) in a video capturing mode or an image capturing mode. The display unit 606 may include a display panel 6061, and the display panel 6061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 607 includes a touch panel 6071 and other input devices 6072. A touch panel 6071, also referred to as a touch screen. The touch panel 6071 may include two parts of a touch detection device and a touch controller. Other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. The memory 609 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 610 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned embodiment of the image output method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and so on.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned embodiment of the image output method, and can achieve the same technical effect, and is not described here again to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware alone; in many cases, the former is the better implementation. Based on this understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk), which includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
While the embodiments of the present application have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are illustrative rather than restrictive; various changes may be made by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. An image output method, comprising:
acquiring, while a target application program is running in the foreground, first characteristic information associated with the target application program;
recording a running interface of the target application program in a case that the first characteristic information meets a first preset condition;
acquiring, during the recording, second characteristic information associated with the target application program;
ending the recording of the running interface in a case that the second characteristic information meets a second preset condition; and
outputting, according to a recording result, a plurality of consecutive target images corresponding to the target application program.
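As an illustration only (not part of the claims), the flow of claim 1 can be sketched as a small state machine: recording of the running interface starts when a first feature meets a start condition, stops when a second feature meets a stop condition, and the captured frames are emitted as consecutive target images. The class name, the threshold-comparison form of the two conditions, and the frame representation are all hypothetical choices for this sketch.

```python
from dataclasses import dataclass, field

@dataclass
class ScreenRecorder:
    """Hypothetical sketch of the claimed flow: start recording when a
    feature of the foreground app meets a first preset condition, stop
    when a later feature meets a second preset condition, then output
    the captured frames as consecutive target images."""
    recording: bool = False
    frames: list = field(default_factory=list)

    def on_feature(self, feature: float, start_threshold: float, stop_threshold: float):
        if not self.recording and feature > start_threshold:
            self.recording = True   # first preset condition met: begin recording
        elif self.recording and feature <= stop_threshold:
            self.recording = False  # second preset condition met: end recording

    def capture(self, frame):
        if self.recording:
            self.frames.append(frame)  # record the app's running interface

    def output(self):
        return list(self.frames)  # the plurality of consecutive target images
```

In this sketch the "characteristic information" is reduced to a single number; the claims allow richer features (expression information, touch frequency, audio), which the dependent claims specialize.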
2. The method of claim 1, wherein the acquiring first characteristic information associated with the target application program comprises:
acquiring first expression information of a user operating the target application program and a first touch frequency at which the user touches a display screen;
wherein the recording a running interface of the target application program in a case that the first characteristic information meets a first preset condition comprises:
recording the running interface of the target application program in a case that the first expression information meets a first preset expression condition and the first touch frequency is greater than a first preset frequency;
wherein the acquiring, during the recording, second characteristic information associated with the target application program comprises:
acquiring, during the recording, second expression information of the user and a second touch frequency at which the user touches the display screen; and
wherein the ending the recording of the running interface in a case that the second characteristic information meets a second preset condition comprises:
ending the recording of the running interface in a case that the second expression information meets a second preset expression condition and the second touch frequency is less than or equal to the first preset frequency.
3. The method of claim 2, further comprising, after the outputting a plurality of consecutive target images corresponding to the target application program according to the recording result:
receiving a first input of the user;
determining, in response to the first input and according to input parameters of the first input, a first target image to be stored among the target images and a second target image to be deleted among the target images;
updating the first preset expression condition according to the first expression information corresponding to the first target image and the first expression information corresponding to the second target image;
updating the second preset expression condition according to the second expression information corresponding to the first target image and the second expression information corresponding to the second target image; and
updating the first preset frequency according to the first and second touch frequencies corresponding to the first target image and the first and second touch frequencies corresponding to the second target image.
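As an illustration only, the threshold update of claim 3 can be realized in many ways; the claim merely requires that the preset frequency be updated from the touch frequencies of the kept and deleted images. One plausible (assumed, not claim-mandated) rule is to move the threshold to the midpoint between the average touch frequency of images the user kept and that of images the user deleted:

```python
def update_threshold(kept_freqs, deleted_freqs):
    """One plausible realization of the claimed update: set the new
    preset frequency to the midpoint between the mean touch frequency
    of kept (first target) images and of deleted (second target)
    images, so future recording triggers track the user's preference.
    The midpoint rule itself is an assumption for this sketch."""
    kept_mean = sum(kept_freqs) / len(kept_freqs)
    deleted_mean = sum(deleted_freqs) / len(deleted_freqs)
    return (kept_mean + deleted_mean) / 2
```

The same shape of rule could update the expression conditions, with expression scores in place of touch frequencies.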
4. The method of claim 1, wherein the acquiring first characteristic information associated with the target application program comprises:
acquiring first audio information of the target application program during running;
wherein the recording a running interface of the target application program in a case that the first characteristic information meets a first preset condition comprises:
recording the running interface in a case that the first audio information meets a first preset audio condition;
wherein the acquiring, during the recording, second characteristic information associated with the target application program comprises:
acquiring second audio information of the target application program during running; or
timing from the start of the recording of the running interface to obtain a target duration; and
wherein the ending the recording of the running interface in a case that the second characteristic information meets a second preset condition comprises:
ending the recording of the running interface in a case that the second audio information meets a second preset audio condition or the target duration reaches a set duration.
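As an illustration only, the stop rule of claim 4 is a disjunction: recording ends either when the second audio information meets the second preset audio condition or when the elapsed recording time reaches the set duration. Modeling the audio condition as the level dropping below a threshold is an assumption of this sketch; the claim does not fix the form of the audio condition.

```python
def should_stop(audio_level: float, audio_stop_threshold: float,
                elapsed_s: float, max_duration_s: float) -> bool:
    """Hedged sketch of claim 4's stop rule: end recording when the
    audio condition is met (assumed here: level below a threshold) OR
    when the target duration reaches the set duration."""
    return audio_level < audio_stop_threshold or elapsed_s >= max_duration_s
```

Either branch alone suffices to end the recording, so the timer acts as a safety cap when the audio condition never triggers.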
5. An image output apparatus, comprising:
a first characteristic acquisition module, configured to acquire, while a target application program is running in the foreground, first characteristic information associated with the target application program;
a running interface recording module, configured to record a running interface of the target application program in a case that the first characteristic information meets a first preset condition;
a second characteristic acquisition module, configured to acquire, during the recording, second characteristic information associated with the target application program;
an interface recording ending module, configured to end the recording of the running interface in a case that the second characteristic information meets a second preset condition; and
a target image output module, configured to output, according to a recording result, a plurality of consecutive target images corresponding to the target application program.
6. The apparatus of claim 5, wherein the first characteristic acquisition module comprises:
a first frequency acquisition unit, configured to acquire first expression information of a user operating the target application program and a first touch frequency at which the user touches a display screen;
wherein the running interface recording module comprises:
a first interface recording unit, configured to record the running interface of the target application program in a case that the first expression information meets a first preset expression condition and the first touch frequency is greater than a first preset frequency;
wherein the second characteristic acquisition module comprises:
a second frequency acquisition unit, configured to acquire, during the recording, second expression information of the user and a second touch frequency at which the user touches the display screen; and
wherein the interface recording ending module comprises:
a first interface recording ending unit, configured to end the recording of the running interface in a case that the second expression information meets a second preset expression condition and the second touch frequency is less than or equal to the first preset frequency.
7. The apparatus of claim 6, further comprising:
a first input receiving module, configured to receive a first input of the user;
a first image determining module, configured to determine, in response to the first input and according to input parameters of the first input, a first target image to be stored among the target images and a second target image to be deleted among the target images;
a first condition updating module, configured to update the first preset expression condition according to the first expression information corresponding to the first target image and the first expression information corresponding to the second target image;
a second condition updating module, configured to update the second preset expression condition according to the second expression information corresponding to the first target image and the second expression information corresponding to the second target image; and
a preset frequency updating module, configured to update the first preset frequency according to the first and second touch frequencies corresponding to the first target image and the first and second touch frequencies corresponding to the second target image.
8. The apparatus of claim 5, wherein the first characteristic acquisition module comprises:
a first audio acquisition unit, configured to acquire first audio information of the target application program during running;
wherein the running interface recording module comprises:
a second interface recording unit, configured to record the running interface in a case that the first audio information meets a first preset audio condition;
wherein the second characteristic acquisition module comprises:
a second audio acquisition unit, configured to acquire second audio information of the target application program during running; and
a target duration acquisition unit, configured to time from the start of the recording of the running interface to obtain a target duration; and
wherein the interface recording ending module comprises:
a second interface recording ending unit, configured to end the recording of the running interface in a case that the second audio information meets a second preset audio condition or the target duration reaches a set duration.
9. An electronic device, comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the image output method according to any one of claims 1 to 4.
10. A readable storage medium, storing a program or instructions which, when executed by a processor, implement the steps of the image output method according to any one of claims 1 to 4.
CN202110071255.7A 2021-01-19 2021-01-19 Image output method and device and electronic equipment Pending CN112887782A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110071255.7A CN112887782A (en) 2021-01-19 2021-01-19 Image output method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112887782A true CN112887782A (en) 2021-06-01

Family

ID=76049959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110071255.7A Pending CN112887782A (en) 2021-01-19 2021-01-19 Image output method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112887782A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113676771A (en) * 2021-08-03 2021-11-19 维沃移动通信(杭州)有限公司 Video generation method, device, equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000025878A1 (en) * 1998-10-29 2000-05-11 Sony Computer Entertainment Inc. Information processing device, information processing method, and recording medium
CN105120191A (en) * 2015-07-31 2015-12-02 小米科技有限责任公司 Video recording method and device
CN108509232A (en) * 2018-03-29 2018-09-07 北京小米移动软件有限公司 Screen recording method, device and computer readable storage medium
CN108600669A (en) * 2018-03-30 2018-09-28 努比亚技术有限公司 Game video method for recording, mobile terminal and computer readable storage medium
CN108810437A (en) * 2018-05-28 2018-11-13 努比亚技术有限公司 Record screen method, terminal and computer readable storage medium
CN109446993A (en) * 2018-10-30 2019-03-08 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN109672899A (en) * 2018-12-13 2019-04-23 南京邮电大学 The Wonderful time of object game live scene identifies and prerecording method in real time
CN109819167A (en) * 2019-01-31 2019-05-28 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN109976634A (en) * 2019-03-18 2019-07-05 北京智明星通科技股份有限公司 A kind of game APP screenshot method and equipment
CN111159469A (en) * 2018-11-08 2020-05-15 阿里巴巴集团控股有限公司 User rights object information processing method and device and electronic equipment
CN111352507A (en) * 2020-02-27 2020-06-30 维沃移动通信有限公司 Information prompting method and electronic equipment

Similar Documents

Publication Publication Date Title
CN107632706B (en) Application data processing method and system of multi-modal virtual human
WO2022022536A1 (en) Audio playback method, audio playback apparatus, and electronic device
CN112672061B (en) Video shooting method and device, electronic equipment and medium
CN112887802A (en) Video access method and device
CN113596555B (en) Video playing method and device and electronic equipment
CN113014801B (en) Video recording method, video recording device, electronic equipment and medium
CN111857510B (en) Parameter adjusting method and device and electronic equipment
CN112269505B (en) Audio and video control method and device and electronic equipment
CN112954199A (en) Video recording method and device
CN112188103A (en) Image processing method and device and electronic equipment
CN112532885A (en) Anti-shake method and device and electronic equipment
CN110868632B (en) Video processing method and device, storage medium and electronic equipment
CN106507201A (en) A kind of video playing control method and device
CN113259743A (en) Video playing method and device and electronic equipment
WO2022111458A1 (en) Image capture method and apparatus, electronic device, and storage medium
CN111803960B (en) Method and device for starting preset flow
CN114205447B (en) Shortcut setting method and device of electronic equipment, storage medium and electronic equipment
CN105808231A (en) System and method for recording script and system and method for playing script
CN112887782A (en) Image output method and device and electronic equipment
CN112711368A (en) Operation guidance method and device and electronic equipment
CN112309449A (en) Audio recording method and device
CN113163256B (en) Method and device for generating operation flow file based on video
CN113923392A (en) Video recording method, video recording device and electronic equipment
CN115623268A (en) Interaction method, device, equipment and storage medium based on virtual space
CN111770279B (en) Shooting method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210601)